WO2014114072A1 - Method and apparatus for adjusting I/O channels under a virtualization platform - Google Patents
Method and apparatus for adjusting I/O channels under a virtualization platform
- Publication number
- WO2014114072A1 (PCT/CN2013/080837; application CN2013080837W)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- vms
- end devices
- throughput
- host
- average
- Prior art date
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/44—Arrangements for executing specific programs
- G06F9/455—Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
- G06F9/45533—Hypervisors; Virtual machine monitors
- G06F9/45558—Hypervisor-specific management and integration aspects
- G06F2009/45579—I/O management, e.g. providing access to device drivers or storage
Definitions
- The present invention relates to the field of virtualization technologies, and in particular to a method and an apparatus for adjusting an I/O channel under a virtualization platform.

Background Art
- Virtualization is the abstraction of physical computer resources such as servers, networks, memory, and storage. The resulting virtual resources are not constrained by the arrangement, geography, or physical configuration of the existing resources.
- On a virtualization platform, the physical host (HOST) runs multiple virtual machines (Virtual Machines, VMs).
- The HOST manages all physical hardware devices and resources, and can virtualize one exclusive device into multiple virtual devices for multiple users to use simultaneously.
- The devices that users see are virtual devices; the physical hardware devices are transparent to users.
- A VM does not access hardware devices directly.
- Instead, the HOST provides the VM with a data path connecting it to the hardware devices, that is, an I/O channel.
- This channel comprises a data channel between the VM's front-end device (Front Device) and the VM's back-end device (Back Device), and a data channel between the VM's back-end device and the HOST's local device (Native Device). The VM's front-end device is the device seen inside the virtual machine, which is in fact a device that the HOST emulates for the VM; the VM's back-end device is the software-emulated device in the HOST operating system that connects to the VM's front-end device; the HOST's Native Device is the HOST's physical device.
- FIG. 1 illustrates a simple multi-I/O-channel technique on a virtualization platform in the prior art, taking two virtual machines, VM1 and VM2, as an example.
- Multiple I/O channels exist between the front-end device and the back-end device of each VM (two I/O channels in FIG. 1).
- The data processing module is a bridge between a VM's front-end and back-end devices, used for data copying, data filtering, or other data processing services. It contains multiple worker threads (two in FIG. 1); the number of worker threads equals the number of I/O channels between the VM's front-end and back-end devices, and each I/O channel corresponds to one worker thread.
- The paths between the VM's back-end device and the bridge, and between the bridge and the Native Device, are single channels; the VM's back-end device exchanges data with the Native Device through this single channel.
- The inventors have found that the above prior art has at least the following technical problems:
- The number of I/O channels between a VM's front-end and back-end devices is fixed when the VM is created and cannot be changed during the VM's entire life cycle, so the channel resources occupied by these I/O channels cannot be changed.
- When the I/O throughput between the VM's front-end and back-end devices changes, the I/O channel resources cannot be adjusted: when I/O throughput drops, idle I/O channel resources cannot be released, which wastes them; when I/O throughput rises, I/O channel resources cannot be increased, so the data transmission capability of the I/O channel cannot be improved and system performance degrades.

Summary of the Invention
- Embodiments of the present invention provide a method and a HOST apparatus for adjusting I/O channels under a virtualization platform, so as to dynamically adjust the allocation of I/O channel resources between the front-end devices and the back-end devices of multiple VMs, thereby improving system performance.
- The present invention provides a method for adjusting an I/O channel under a virtualization platform, including: a host HOST counting the average I/O throughput, at the current time, of multiple virtual machine VMs running on the HOST; when the average I/O throughput at the current time is greater than a first threshold, the HOST adding a worker thread for processing the VMs between the front-end devices and the back-end devices of the multiple VMs, so that the average I/O throughput of the multiple VMs after the worker thread is added is less than the first threshold; or, when the average I/O throughput at the current time is less than a second threshold, the HOST removing a worker thread for processing the VMs between the front-end devices and the back-end devices of the multiple VMs, so that the average I/O throughput of the multiple VMs after the worker thread is removed is greater than the second threshold; wherein the first threshold is greater than the second threshold; and the HOST adjusting the correspondence between the queues in the front-end and back-end devices of the multiple VMs and the worker threads accordingly.
- Optionally, before the HOST adds a worker thread for processing the VMs between the front-end devices and the back-end devices of the multiple VMs, the method further includes:
- the HOST comparing the increase in CPU utilization and the increase in I/O throughput that would be brought by adding a worker thread for processing the VMs between the front-end and back-end devices of the multiple VMs; and, if the increase in I/O throughput is greater than the increase in CPU utilization, performing the step of the HOST adding a worker thread for processing the VMs between the front-end and back-end devices of the multiple VMs.
- Optionally, before the HOST removes a worker thread for processing the VMs between the front-end devices and the back-end devices of the multiple VMs,
- the method further includes:
- the HOST determining whether the decrease in CPU utilization caused by removing a worker thread for processing the VMs between the front-end and back-end devices of the multiple VMs would result in an inability to respond to the throughput of the multiple VMs.
- Optionally, the HOST adjusts the correspondences separately according to the added or removed worker thread for processing the VMs, as follows:
- when the number of worker threads for processing the VMs after the increase or decrease is smaller than the number of VMs running on the HOST, the HOST maps each worker thread to one queue in the front-end device of each VM and one queue in the back-end device of each VM;
- otherwise, the worker threads for processing the VMs include exclusive worker threads and shared worker threads.
- The HOST, according to the added or removed worker thread for processing the VMs,
- separately adjusts the correspondence between the queues in the front-end devices of the multiple VMs and the worker threads for processing the VMs, and the correspondence between the queues in the back-end devices of the multiple VMs and the worker threads for processing the VMs.
- The method further includes:
- the HOST adjusting the correspondence between the queues in the back-end devices of the multiple VMs and the queues in the local device (Native Device) in the HOST, so that
- multiple data transmission channels are formed between the back-end devices of the multiple VMs and the Native Device.
- The present invention further provides a HOST apparatus for adjusting an I/O channel under a virtualization platform, including:
- a statistics module, configured to count the average I/O throughput, at the current time, of the multiple virtual machine VMs running on the host HOST;
- a processing module, connected to the statistics module and configured to: when the average I/O throughput at the current time counted by the statistics module is greater than a first threshold,
- add a worker thread for processing the VMs between the front-end devices and the back-end devices of the multiple VMs, so that the average I/O throughput of the multiple VMs after the worker thread is added
- is less than the first threshold; or, when the average I/O throughput at the current time counted by the statistics module is less than a second threshold, remove a worker thread for processing the VMs between the front-end devices and the back-end devices of the multiple VMs, so that the average I/O throughput of the multiple VMs after the worker thread is removed is greater than the second threshold; wherein the first threshold is greater than the second threshold; and
- a first adjustment module, connected to the processing module and configured to adjust the correspondence between the queues in the front-end and back-end devices of the multiple VMs and the worker threads for processing the VMs.
- Optionally, the adjusting apparatus further includes: a determining module, configured to, when the average I/O throughput at the current time counted by the statistics module is greater than the first threshold, compare the increase in CPU utilization and the increase in I/O throughput that would be brought by adding a worker thread for processing the VMs between the front-end and back-end devices of the multiple VMs; and the processing module is further configured to, if the increase in I/O throughput is greater than the increase in CPU utilization, add a worker thread for processing the VMs between the front-end and back-end devices of the multiple VMs, so that the average I/O throughput of the multiple VMs after the worker thread is added is less than the first threshold.
- Optionally, the adjusting apparatus further includes: a determining module, configured to, when the average I/O throughput at the current time counted by the statistics module is less than the second threshold, determine whether the decrease in CPU utilization caused by removing a worker thread for processing the VMs between the front-end and back-end devices of the multiple VMs would result in an inability to respond to the throughput of the multiple VMs; and
- the processing module is further configured to, if the decrease in CPU utilization caused by removing a worker thread for processing the VMs between the front-end and back-end devices of the multiple VMs does not result in an inability to respond to the throughput of the multiple VMs, remove a worker thread for processing the VMs between the front-end and back-end devices of the multiple VMs, so that the average I/O throughput of the multiple VMs after the worker thread is removed is greater than the second threshold.
- The first adjustment module is specifically configured to:
- if the number of worker threads for processing the VMs after the increase or decrease is smaller than the number of VMs running on the HOST, map each worker thread to one queue in the front-end device of each VM and one queue in the back-end device of each VM; or, if the number of worker threads for processing the VMs after the increase or decrease is greater than or equal to the number of VMs running on the HOST, map each exclusive worker thread to a queue in the front-end and back-end devices of one VM, and map each shared worker thread to queues in the front-end and back-end devices of at least two VMs that do not correspond to an exclusive worker thread; the worker threads for processing the VMs include the exclusive worker threads and the shared worker threads.
- Optionally, the adjusting apparatus further includes:
- a second adjustment module, configured to adjust the correspondence between the queues in the back-end devices of the multiple VMs and the queues in the local device (Native Device) in the HOST, so that multiple data transmission channels are formed between the back-end devices and the Native Device.
- The present invention further provides a host HOST, where the HOST includes: a local device (Native Device), the front-end devices and back-end devices of multiple virtual machine VMs running on the Native Device, and a data processing module between the front-end devices and the back-end devices of the multiple VMs, wherein:
- the data processing module is configured to:
- The data processing module is further configured to: compare the increase in CPU utilization and the increase in I/O throughput that would be brought by adding a worker thread for processing the VMs between the front-end and back-end devices of the multiple VMs; and, if the increase in I/O throughput is greater than the increase in CPU utilization, add a worker thread for processing the VMs between the front-end and back-end devices of the multiple VMs, so that the average I/O throughput of the multiple VMs after the worker thread is added is less than the first threshold.
- The data processing module is further configured to: determine whether the decrease in CPU utilization caused by removing a worker thread for processing the VMs between the front-end and back-end devices of the multiple VMs would result in an inability to respond to the throughput of the multiple VMs; and, if it would not, remove a worker thread for processing the VMs between the front-end and back-end devices of the multiple VMs, so that the average I/O throughput of the multiple VMs after the worker thread is removed is greater than the second threshold.
- With the above technical solutions, the HOST determines, according to the average I/O throughput of the multiple VMs at the current time, whether to add or remove a worker thread for processing the VMs between the front-end and back-end devices of the multiple VMs. When the average I/O throughput of the multiple VMs at the current time is greater than the first threshold, a worker thread for processing the VMs is added, that is, I/O channel resources are increased and the data transmission capability of the I/O channels is improved; when the average I/O throughput of the multiple VMs is less than the second threshold, a worker thread for processing the VMs is removed, that is, I/O channel resources are reduced, avoiding waste of I/O channel resources.
- Furthermore, by adjusting the correspondence between the queues in the back-end devices of the multiple VMs and the queues in the local device (Native Device) in the host HOST, the HOST forms multiple data transmission channels between the back-end devices of the multiple VMs and the Native Device,
- thereby realizing multiple I/O channels between the front-end devices of the multiple VMs and the Native Device in the HOST and improving the data transmission capability of the multiple VMs.
- FIG. 1 is a structural diagram of a simple multi-I/O-channel technique under a virtualization platform in the prior art;
- FIG. 2 is a flowchart of a method for adjusting an I/O channel under a virtualization platform according to an embodiment of the present invention;
- FIG. 3 is an architecture diagram in which the I/O working mode between the front-end and back-end devices of multiple VMs in the virtualization platform provided by the present invention is the shared mode;
- FIG. 4 is an architecture diagram in which the I/O working mode between the front-end and back-end devices of multiple VMs in the virtualization platform provided by the present invention is the hybrid mode;
- FIG. 5 is a schematic structural diagram of an apparatus for adjusting an I/O channel under a virtualization platform according to an embodiment of the present invention;
- FIG. 6 is a schematic structural diagram of an apparatus for adjusting an I/O channel under a virtualization platform according to another embodiment of the present invention;
- FIG. 7 is a schematic structural diagram of a host HOST according to an embodiment of the present invention.

Detailed Description
- The host HOST serves as a management layer to manage and allocate hardware resources, and provides virtual hardware resources for the virtual machines, such as virtual processors (e.g., VCPUs), virtual memory, virtual disks, and virtual network cards.
- A virtual disk may correspond to a file or a logical block device of the HOST.
- A virtual machine runs on the virtual hardware platform that the HOST prepares for it, and one or more virtual machines run on the HOST.
- Virtual machine (VM): virtual machine software can simulate one or more virtual computers on a single physical computer. These virtual machines work like real computers: operating systems and applications can be installed on them, and they can access network resources. To an application running in it, a virtual machine appears to be a real computer.
- A data processing module is introduced between a VM's front-end device and back-end device; the data processing module processes data transmission between the VM's front-end and back-end devices, with the processing performed by worker threads.
- The data processing module is generally implemented in software, that is, by a processor reading software code instructions with the corresponding functions.
- The Native Device may include various hardware. For example, the Native Device of a computing node may include a processor (such as a CPU) and memory, and may also include high-speed or low-speed input/output (I/O) devices such as a network card and storage.
- Bridge: a network device or piece of software between the back-end devices of the VMs and the Native Device of the host HOST, which implements network interconnection between the back-end devices of the VMs and the Native Device of the host HOST
- and forwards data frames. The method specifically includes:
- The host HOST counts the average I/O throughput, at the current time, of the multiple virtual machine VMs running on the HOST.
- Specifically, the HOST may first count the total I/O throughput of the multiple VMs running on the HOST, and divide it by the number of virtual machines running on the HOST to obtain the average I/O throughput of the multiple VMs at the current time.
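This averaging step is a simple mean; a minimal sketch (the function name and units are illustrative, not from the patent):

```python
def average_io_throughput(per_vm_throughput_mbps):
    """Average I/O throughput across the VMs running on the HOST.

    The HOST first sums the total I/O throughput of all running VMs,
    then divides by the number of running VMs.
    """
    if not per_vm_throughput_mbps:
        return 0.0
    total = sum(per_vm_throughput_mbps)
    return total / len(per_vm_throughput_mbps)
```

For example, three VMs measured at 300, 500, and 400 MB/s average to 400 MB/s, which is then compared against the two thresholds.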
- When the average I/O throughput at the current time is greater than the first threshold, the HOST adds a worker thread for processing the VMs between the front-end and back-end devices of the multiple VMs; or, when the average I/O throughput at the current time is less than the second threshold, the HOST removes a worker thread for processing the VMs between the front-end and back-end devices of the multiple VMs.
- A worker thread for processing the VMs is added between the front-end and back-end devices of the multiple VMs so that the average I/O throughput of the multiple VMs after the worker thread is added is less than the first threshold;
- a worker thread for processing the VMs is removed between the front-end and back-end devices of the multiple VMs so that the average I/O throughput of the multiple VMs after the worker thread is removed is greater than the second threshold.
- The first threshold is greater than the second threshold. The first threshold indicates the upper limit of the average I/O throughput of the multiple VMs and reflects the maximum I/O throughput that a single VM can bear;
- the second threshold indicates the lower limit of the average I/O throughput of the multiple VMs and reflects the minimum I/O throughput that a single VM should bear.
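Taken together, the two thresholds form a hysteresis controller for the worker-thread count. A minimal sketch, assuming the thread count may not drop below one (the patent does not state a floor):

```python
def adjust_worker_threads(avg_throughput, n_threads,
                          first_threshold, second_threshold):
    """Return the new worker-thread count for the data processing module.

    Above the first threshold: add a thread (more I/O channel capacity).
    Below the second threshold: remove a thread (release idle channel
    resources). Otherwise leave the count unchanged. Requiring the
    first threshold to exceed the second gives a stable hysteresis band.
    """
    assert first_threshold > second_threshold
    if avg_throughput > first_threshold:
        return n_threads + 1
    if avg_throughput < second_threshold and n_threads > 1:
        return n_threads - 1
    return n_threads
```

Repeatedly applying this rule drives the average per-VM throughput back into the band between the two thresholds.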
- Optionally, before the adding, the method further includes:
- the HOST comparing the increase in CPU utilization and the increase in I/O throughput that would be brought by adding a worker thread for processing the VMs between the front-end and back-end devices of the multiple VMs; and,
- if the increase in I/O throughput is greater than the increase in CPU utilization, the HOST performing the step of adding a worker thread for processing the VMs between the front-end and back-end devices of the multiple VMs.
- The increase in CPU utilization refers to the CPU utilization with the added worker thread for processing the VMs relative to the CPU utilization without it, and can be expressed as an absolute increase and/or a growth rate of CPU utilization;
- the increase in I/O throughput refers to the I/O throughput with the added worker thread for processing the VMs relative to the I/O throughput without it.
- The present invention does not limit how the increase in CPU utilization and the increase in I/O throughput are compared.
- Two methods are given as examples: if the increase in I/O throughput is greater than the increase in CPU utilization, or if the growth rate of I/O throughput is greater than the growth rate of CPU utilization, it is determined that the worker thread for processing the VMs should be added.
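Both example comparisons can be written down directly; this sketch is illustrative only (the patent fixes neither units nor formulas, and `should_add_thread` is a hypothetical helper):

```python
def should_add_thread(cpu_before, cpu_after, io_before, io_after,
                      compare="growth_rate"):
    """Decide whether adding a worker thread is worthwhile.

    compare="absolute": add if the I/O throughput increase exceeds the
    CPU-utilization increase (values must be on a comparable scale).
    compare="growth_rate": add if I/O throughput grows by a larger
    fraction than CPU utilization does.
    """
    cpu_gain = cpu_after - cpu_before
    io_gain = io_after - io_before
    if compare == "absolute":
        return io_gain > cpu_gain
    # Growth-rate comparison; a zero baseline counts as infinite growth.
    cpu_rate = cpu_gain / cpu_before if cpu_before else float("inf")
    io_rate = io_gain / io_before if io_before else float("inf")
    return io_rate > cpu_rate
```

E.g. CPU rising 40% → 50% (25% growth) while throughput rises 400 → 600 MB/s (50% growth) favors adding the thread; CPU 40% → 60% against throughput 400 → 440 MB/s does not.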
- Optionally, the method further includes:
- prioritizing the multiple VMs, so that a high-priority VM keeps exclusive use of its worker thread and enjoys a dedicated I/O channel; regardless of the overall I/O load of the host HOST, the I/O channel resources exclusively occupied by high-priority VMs are not affected.
- VMs of the same priority are processed according to the above method of adding or removing worker threads.
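The priority rule can be expressed as a simple partition: high-priority VMs are exempted from adjustment, and only the rest take part in adding or removing worker threads. A sketch under that reading (names and data shapes are assumptions):

```python
def partition_by_priority(vms, high_priority):
    """Split VMs into protected and adjustable groups.

    `vms`: iterable of VM names; `high_priority`: set of names whose
    exclusive worker threads and I/O channels must never be touched.
    Only the adjustable group participates in the worker-thread
    add/remove procedure described above.
    """
    protected = [v for v in vms if v in high_priority]
    adjustable = [v for v in vms if v not in high_priority]
    return protected, adjustable
```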
- The I/O channel resources include the worker threads for processing the VMs and the queues in the front-end and back-end devices of the VMs.
- The HOST adjusts, according to the added or removed worker thread for processing the VMs, the correspondence between the queues in the front-end and back-end devices of the multiple VMs and the worker threads for processing the VMs.
- The foregoing correspondences include: the correspondence between the queues in the front-end devices of the multiple VMs and the worker threads for processing the VMs, and the correspondence between the queues in the back-end devices of the multiple VMs and the worker threads for processing the VMs.
- The HOST adjusts these correspondences separately to form multiple data transmission channels between the front-end devices and the back-end devices of the multiple VMs.
- Specifically, the HOST adjusts, according to the added or removed worker thread for processing the VMs, the correspondence between the queues in the front-end devices of the multiple VMs and the worker threads, and the correspondence between the queues in the back-end devices of the multiple VMs and the worker threads, as follows:
- when the number of worker threads for processing the VMs after the increase or decrease is smaller than the number of VMs running on the HOST, the HOST maps each worker thread to one queue in the front-end device of each VM and one queue in the back-end device of each VM; or,
- when the number of worker threads for processing the VMs after the increase or decrease is greater than or equal to the number of VMs running on the HOST, the HOST maps each exclusive worker thread to a queue in the front-end and back-end devices of one VM, and maps each shared worker thread to queues in the front-end and back-end devices of at least two VMs that do not correspond to an exclusive worker thread.
- The worker threads for processing the VMs include exclusive worker threads and shared worker threads. It should be noted that the above two adjustment modes correspond to the shared mode and the hybrid mode, respectively; see the descriptions of FIG. 3 and FIG. 4 for details.
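The two mapping rules can be sketched as a single function that returns the thread-to-queue correspondence, with shared mode when there are fewer threads than VMs and hybrid mode otherwise (queue and thread naming here is illustrative):

```python
def map_threads_to_queues(n_threads, vm_names):
    """Build the worker-thread -> (vm, queue) correspondence.

    Shared mode (threads < VMs): thread i serves queue i of every VM's
    front-end/back-end device pair.
    Hybrid mode (threads >= VMs): one exclusive thread per VM (queue 0),
    and each remaining thread is shared across one extra queue of all VMs.
    """
    n_vms = len(vm_names)
    mapping = {}
    if n_threads < n_vms:  # shared mode
        for t in range(n_threads):
            mapping[f"thread{t}"] = [(vm, f"queue{t}") for vm in vm_names]
    else:  # hybrid mode
        for i, vm in enumerate(vm_names):  # exclusive threads
            mapping[f"thread{i}"] = [(vm, "queue0")]
        for t in range(n_vms, n_threads):  # shared threads
            q = f"queue{t - n_vms + 1}"
            mapping[f"thread{t}"] = [(vm, q) for vm in vm_names]
    return mapping
```

With 2 threads and 3 VMs this reproduces the FIG. 3 layout; with 4 threads and 3 VMs, threads 0-2 become exclusive and thread 3 is shared, as in the FIG. 4 style of layout.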
- Optionally, the method for adjusting the I/O channel under the virtualization platform further includes:
- the HOST adjusting the correspondence between the queues in the back-end devices of the multiple VMs and the queues in the local device (Native Device) in the HOST, so as to form multiple data transmission channels between the back-end devices of the multiple VMs and the Native Device.
- Specifically, the Native Device may have multiple queues. Data from a VM's back-end device may undergo queue selection when entering the Native Device, so that transmission occurs through different queues; this can be implemented by the hardware driver in the Native Device.
- Conversely, when transmitting data to the VMs' back-end devices through the bridge, the Native Device can also select among multiple queues in the back-end devices, thereby implementing multiple data transmission channels between the VMs' back-end devices and the Native Device. The correspondence between the queues is therefore, in effect, how the Native Device selects a queue in a VM's back-end device when sending data to that back-end device through the bridge.
- The selection can be used to keep the channel for transmitting data consistent between a queue in the VM's back-end device and a queue in the Native Device,
- or the Native Device can select different queues in the VM's back-end device based on attributes of the data stream (such as whether packets come from the same source).
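Selecting a queue from data-stream attributes, such as keeping frames from the same source on the same queue, is typically done with a flow hash. A minimal illustration, not the (unspecified) driver mechanism of the patent:

```python
import hashlib


def select_backend_queue(src_addr, dst_addr, n_queues):
    """Pick a queue in the VM back-end device for a data frame.

    Hashing the (source, destination) pair keeps frames of the same
    stream on the same queue, preserving per-channel ordering, while
    different streams spread across the multiple transmission channels.
    """
    flow_key = f"{src_addr}->{dst_addr}"
    # A stable digest (unlike Python's randomized hash()) guarantees the
    # same flow always maps to the same queue across runs.
    digest = hashlib.md5(flow_key.encode()).digest()
    return digest[0] % n_queues
```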
- Thus, by adjusting the correspondence between the queues in the back-end devices of the multiple VMs and the queues in the local device (Native Device) in the HOST,
- multiple data transmission channels are formed between the back-end devices of the multiple VMs and the Native Device,
- implementing multiple I/O channels between the front-end devices of the multiple VMs and the Native Device of the HOST and improving the data transmission capability between the multiple VMs and the HOST's Native Device.
- In the embodiment of the present invention, the HOST determines, according to the average I/O throughput of the multiple VMs at the current time, whether to add or remove a worker thread for processing the VMs between the front-end and back-end devices of the multiple VMs.
- When the average I/O throughput is greater than the first threshold, a worker thread for processing the VMs is added, that is, I/O channel resources are increased and the data transmission capability of the I/O channels is improved;
- when the average I/O throughput is less than the second threshold, a worker thread for processing the VMs is removed, that is, I/O channel resources are reduced, avoiding waste of I/O channel resources.
- Specifically, the present invention defines two I/O working modes between the front-end and back-end devices of the multiple VMs: the shared mode and the hybrid mode. The two I/O working modes can be switched between each other; when a certain condition is met, one working mode switches to the other.
- When the number of worker threads is smaller than the number of VMs, the HOST adjusts the I/O working mode between the front-end and back-end devices of the multiple VMs to the shared mode, that is,
- the worker threads on the data processing module adopt the shared manner to process the data of the queues of the front-end and back-end devices of the multiple VMs:
- each worker thread on the data processing module corresponds to one queue in the front-end and back-end devices of each VM running on the HOST.
- FIG. 3 is a schematic diagram of the I/O working mode between the front-end and back-end devices of the VMs in the shared mode.
- The VMs running on the host HOST are VM1, VM2, and VM3, and the worker threads on the data processing module
- are worker thread 1 and worker thread 2: worker thread 1 processes queue 1 in the front-end and back-end devices of each of VM1, VM2, and VM3, and worker thread 2 processes queue 2 in the front-end and back-end devices of each of VM1, VM2, and VM3.
- When the number of worker threads is greater than or equal to the number of VMs, the HOST adjusts the I/O working mode between the front-end and back-end devices of the multiple VMs to the hybrid
- mode, that is, the worker threads on the data processing module adopt the hybrid manner to process the data of the queues in the front-end and back-end devices of the VMs.
- In the hybrid mode, the worker threads on the data processing module are divided into exclusive worker threads and shared worker threads:
- an exclusive worker thread processes the data of the queues of one VM, while a shared worker thread processes the data of queues not handled by any exclusive worker thread.
- FIG. 4 is a schematic diagram of the I/O working mode between the front-end and back-end devices of the VMs in the hybrid mode.
- The VMs running on the host HOST are VM1, VM2, and VM3, and among the worker threads on the data processing module,
- a shared worker thread processes the data of queue 2 in the front-end and back-end devices of each of VM1, VM2, and VM3.
- FIG. 4 only illustrates the case where the shared worker thread processes the data of one queue of the front-end and back-end devices of each of at least two VMs; in other cases the shared worker thread may also process the data of multiple queues, which the present invention does not restrict.
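The mode-selection condition implied by FIG. 3 and FIG. 4 reduces to a comparison of thread count against VM count; a one-line sketch:

```python
def io_working_mode(n_threads, n_vms):
    """Return the I/O working mode of the data processing module.

    Fewer worker threads than VMs -> shared mode (as in FIG. 3, where
    threads 1 and 2 each serve one queue of VM1, VM2, and VM3);
    otherwise -> hybrid mode (as in FIG. 4, exclusive threads plus a
    shared thread for the remaining queues).
    """
    return "shared" if n_threads < n_vms else "hybrid"
```

As worker threads are added or removed by the threshold logic above, crossing the VM count triggers a switch between the two modes.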
- In the embodiment of the present invention, in addition to forming multiple I/O channels between the front-end and back-end devices of the multiple VMs, the host HOST also adjusts the correspondence between the queues in the back-end devices of the multiple VMs and the queues in the Native Device,
- so that multiple data transmission channels are formed between the back-end devices of the multiple VMs and the Native Device, implementing
- multiple I/O channels between the front-end devices of the multiple VMs and the Native Device in the HOST and improving the data transmission capability between the front-end devices of the multiple VMs and the Native Device in the HOST.
- the adjusting device 500 specifically includes:
- the statistics module 501 is configured to collect statistics on the average I/O throughput of the multiple virtual machines (VMs) running on the host HOST at the current moment.
- the processing module 502 is connected to the statistics module 501 and is configured to: when the average I/O throughput at the current moment counted by the statistics module 501 is greater than a first threshold, increase the worker threads for processing the VMs between the front-end devices and the back-end devices of the multiple VMs, so that the average I/O throughput of the multiple VMs after the worker threads are increased is less than the first threshold; or, when the average I/O throughput at the current moment is less than a second threshold, reduce the worker threads for processing the VMs between the front-end devices and the back-end devices of the multiple VMs, so that the average I/O throughput of the multiple VMs after the worker threads are reduced is greater than the second threshold; wherein the first threshold is greater than the second threshold.
- the first adjustment module 503 is connected to the processing module 502 and is configured to adjust, according to the worker threads for processing the VMs increased or decreased by the processing module 502, the correspondence between the queues in the front-end devices of the multiple VMs and the worker threads for processing the VMs, and the correspondence between the queues in the back-end devices of the multiple VMs and the worker threads for processing the VMs, so that multiple data transmission channels are formed between the front-end devices and the back-end devices of the multiple VMs.
- the adjusting device 500 further includes:
- the determining module 504 is configured to, when the average I/O throughput at the current moment counted by the statistics module 501 is greater than the first threshold, compare the increase in CPU utilization that would be caused by adding worker threads for processing the VMs between the front-end devices and the back-end devices of the multiple VMs with the increase in I/O throughput that the added worker threads would bring;
- the processing module 502 is further configured to, when the determining module 504 determines that the increase in I/O throughput is greater than the increase in CPU utilization, add the worker threads for processing the VMs between the front-end devices and the back-end devices of the multiple VMs, so that the average I/O throughput of the multiple VMs after the worker threads are added is less than the first threshold.
- the adjusting device 500 further includes:
- the determining module 504 is configured to, when the average I/O throughput at the current moment counted by the statistics module 501 is less than the second threshold, determine whether the reduction in CPU utilization caused by reducing the worker threads for processing the VMs between the front-end devices and the back-end devices of the multiple VMs would make it impossible to respond to the throughput of the multiple VMs;
- the processing module 502 is further configured to, when the determining module 504 determines that the reduction in CPU utilization caused by reducing the worker threads for processing the VMs between the front-end devices and the back-end devices of the multiple VMs does not make it impossible to respond to the throughput of the multiple VMs, reduce the worker threads for processing the VMs between the front-end devices and the back-end devices of the multiple VMs, so that the average I/O throughput of the multiple VMs after the worker threads are reduced is greater than the second threshold.
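Taken together, modules 501, 502, and 504 implement a two-threshold control loop. The sketch below is a hedged illustration under stated assumptions: the callback predicates stand in for module 504's two checks, and every name and threshold value is invented for the example rather than taken from the patent.

```python
# Two-threshold worker-thread adjustment (illustrative, not the patent's code):
# above the first threshold, add a thread only if the I/O gain beats the CPU
# cost; below the second threshold, remove a thread only if throughput can
# still be served. Between the thresholds nothing changes.

def adjust_workers(avg_throughput, workers, first_threshold, second_threshold,
                   io_gain_exceeds_cpu_cost, reduction_keeps_responsive):
    assert first_threshold > second_threshold  # required by the scheme
    if avg_throughput > first_threshold and io_gain_exceeds_cpu_cost():
        return workers + 1          # grow I/O channel resources
    if avg_throughput < second_threshold and reduction_keeps_responsive():
        return max(1, workers - 1)  # reclaim idle I/O channel resources
    return workers                  # within band: leave channels unchanged

new_count = adjust_workers(95, 4, 80, 20,
                           io_gain_exceeds_cpu_cost=lambda: True,
                           reduction_keeps_responsive=lambda: True)
# new_count == 5: throughput exceeded the first threshold and the
# cost/benefit check passed, so one worker thread (one I/O channel) is added
```

Requiring the first threshold to exceed the second gives the loop a dead band, so the thread count does not oscillate when throughput hovers around a single set point.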
- the first adjustment module 503 is specifically configured to: when the number of worker threads for processing the VMs after the increase or decrease is less than the number of VMs running on the HOST, make each worker thread correspond to one queue in the front-end device of each VM and one queue in the back-end device of each VM; or, when the number of worker threads for processing the VMs after the increase or decrease is greater than or equal to the number of VMs running on the HOST, make each exclusive worker thread correspond to the queues of the front-end device and the back-end device of one VM, and make a shared worker thread correspond to the queues of the front-end devices and the back-end devices of at least two VMs that are not corresponded to by the exclusive worker threads; wherein the worker threads for processing the VMs include exclusive worker threads and shared worker threads.
- the adjusting apparatus 500 further includes a second adjustment module 505, configured to adjust the correspondence between the queues in the back-end devices of the multiple VMs and the queues in the local device (Native Device) in the HOST, so that multiple data transmission channels are formed between the back-end devices of the multiple VMs and the Native Device. Because the second adjustment module 505 forms multiple data channels between the back-end devices of the multiple VMs and the Native Device in the host HOST, multiple I/O channels between the front-end devices of the multiple VMs and the Native Device of the HOST are completed, improving the data transmission capability between the multiple VMs and the Native Device of the HOST.
- the adjusting device of the I/O channel under the virtualization platform determines, according to the average I/O throughput of the multiple VMs at the current moment, whether to increase or decrease the worker threads for processing the VMs between the front-end devices and the back-end devices of the multiple VMs: when the average I/O throughput of the multiple VMs at the current moment is greater than the first threshold, the worker threads for processing the VMs are increased, that is, the I/O channel resources are increased, which improves the data transmission capability of the I/O channels; when the average I/O throughput of the multiple VMs at the current moment is less than the second threshold, the worker threads for processing the VMs are reduced, that is, the I/O channel resources are reduced, which avoids wasting I/O channel resources.
- FIG. 6 illustrates the structure of an apparatus for adjusting an I/O channel under a virtualization platform according to another embodiment of the present invention. The adjusting apparatus 600 includes: at least one processor 601 such as a CPU, at least one network interface 604 or other user interface 603, a memory 605, and at least one communication bus 602. The communication bus 602 is used to implement connection and communication between these components.
- the adjusting apparatus 600 optionally includes a user interface 603, including a display, a keyboard, or a pointing device (e.g., a mouse, a trackball, a touchpad, or a tactile display).
- the memory 605 may include a high speed RAM memory and may also include a non-volatile memory such as at least one disk memory.
- the memory 605 can optionally include at least one storage device located remotely from the aforementioned processor 601.
- the memory 605 may also store an operating system 606, which includes various programs for implementing various basic services and processing hardware-based tasks.
- the processor 601 is configured to:
- adjust, according to the increased or decreased worker threads for processing the VMs, the correspondence between the queues in the front-end devices of the multiple VMs and the worker threads for processing the VMs, and the correspondence between the queues in the back-end devices of the multiple VMs and the worker threads for processing the VMs, so as to form multiple data transmission channels between the front-end devices of the multiple VMs and the back-end devices of the multiple VMs.
- the processor 601 is further configured to, before adding the worker threads for processing the VMs between the front-end devices and the back-end devices of the multiple VMs: compare the increase in CPU utilization that would be caused by adding the worker threads for processing the VMs between the front-end devices and the back-end devices of the multiple VMs with the increase in I/O throughput that the added worker threads would bring; and, if the increase in I/O throughput is greater than the increase in CPU utilization, perform the step of adding the worker threads for processing the VMs between the front-end devices and the back-end devices of the multiple VMs.
- the processor 601 is further configured to, before reducing the worker threads for processing the VMs between the front-end devices and the back-end devices of the multiple VMs: determine whether the reduction in CPU utilization caused by reducing the worker threads for processing the VMs between the front-end devices and the back-end devices of the multiple VMs would make it impossible to respond to the throughput of the multiple VMs; and, if it would not, perform the step of reducing the worker threads for processing the VMs between the front-end devices and the back-end devices of the multiple VMs.
- the processor 601 is configured to adjust, according to the increased or decreased worker threads for processing the VMs, the correspondence between the queues in the front-end devices of the multiple VMs and the worker threads for processing the VMs, and the correspondence between the queues in the back-end devices of the multiple VMs and the worker threads for processing the VMs, in the following manner: when the number of worker threads for processing the VMs after the increase or decrease is less than the number of VMs running on the HOST, each worker thread corresponds to one queue in the front-end device of each VM and one queue in the back-end device of each VM; or, when the number of worker threads for processing the VMs after the increase or decrease is greater than or equal to the number of VMs running on the HOST, each exclusive worker thread corresponds to the queues of the front-end device and the back-end device of one VM, and a shared worker thread corresponds to the queues of the front-end devices and the back-end devices of at least two VMs that are not corresponded to by the exclusive worker threads; wherein the worker threads for processing the VMs include exclusive worker threads and shared worker threads.
- the processor 601 is further configured to adjust the correspondence between the queues in the back-end devices of the multiple VMs and the queues in the local device (Native Device) in the HOST, so that multiple data transmission channels are formed between the back-end devices of the multiple VMs and the Native Device.
- the adjustment device of the I/O channel under the virtualization platform determines, according to the average I/O throughput of the multiple VMs at the current moment, whether to increase or decrease the worker threads for processing the VMs between the front-end devices and the back-end devices of the multiple VMs: when the average I/O throughput of the multiple VMs at the current moment is greater than the first threshold, the worker threads for processing the VMs are increased, that is, the I/O channel resources are increased, which improves the data transmission capability of the I/O channels; when the average I/O throughput of the multiple VMs at the current moment is less than the second threshold, the worker threads for processing the VMs are reduced, that is, the I/O channel resources are reduced, which avoids wasting I/O channel resources.
- the processor 601 may be an integrated circuit chip with a signal processing capability. In an implementation process, each step of the foregoing method may be completed by an integrated logic circuit of hardware in the processor 601 or by instructions in the form of software.
- the foregoing processor may be a general-purpose processor (such as a CPU), a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component. The general-purpose processor may be a microprocessor, or the processor may be any conventional processor or the like.
- the steps of the method disclosed in the embodiments of the present invention may be directly embodied as being executed and completed by a hardware processor, or executed and completed by a combination of hardware and software modules in the processor. The software modules may be located in a storage medium mature in the art, such as a random access memory, a flash memory, a read-only memory, a programmable read-only memory, an electrically erasable programmable memory, or a register.
- the storage medium is located in the memory 605; the processor 601 reads the information in the memory 605 and completes the steps of the foregoing method in combination with its hardware.
- FIG. 7 illustrates a structure of a host HOST 700 according to an embodiment of the present invention.
- the HOST 700 includes a local device (Native Device) 705, front-end devices and back-end devices of multiple virtual machines (VMs) running on the Native Device 705, a data processing module 702 located between the front-end devices and the back-end devices of the VMs, and a bridge (Bridge) 704 located between the back-end devices of the multiple VMs and the Native Device 705. The front-end devices of the multiple VMs include a VM1 front-end device 7011 and a VM2 front-end device 7012; the back-end devices of the multiple VMs include a VM1 back-end device 7031 and a VM2 back-end device 7032. The bridge 704 is a network device or software located between the back-end devices of the VMs and the local device of the host HOST; it implements the network interconnection between the back-end devices of the VMs and the local device of the host HOST and forwards data frames.
- the local device (Native Device) 705 is the hardware platform on which the virtualized environment runs. The Native Device may include a variety of hardware; for example, the Native Device of a computing node may include a processor (such as a CPU) and a memory, and may also include high-speed or low-speed input/output (I/O) devices such as a network card and storage.
- Data processing module 702 is used to:
- when the average I/O throughput of the multiple VMs at the current moment is greater than a first threshold, add worker threads for processing the VMs between the front-end devices and the back-end devices of the multiple VMs, so that the average I/O throughput of the multiple VMs after the worker threads are added is less than the first threshold; or, when the average I/O throughput at the current moment is less than a second threshold, reduce the worker threads for processing the VMs between the front-end devices and the back-end devices of the multiple VMs, so that the average I/O throughput of the multiple VMs after the worker threads are reduced is greater than the second threshold; wherein the first threshold is greater than the second threshold; and
- according to the increased or decreased worker threads for processing the VMs, respectively adjust the correspondence between the queues in the front-end devices of the multiple VMs and the worker threads for processing the VMs, and the correspondence between the queues in the back-end devices of the multiple VMs and the worker threads for processing the VMs, so as to form multiple data transmission channels between the front-end devices of the multiple VMs and the back-end devices of the multiple VMs.
- the data processing module 702 is further configured to:
- the data processing module 702 can perform the methods disclosed in the foregoing embodiments, which are not described here again. The data processing module 702 is generally implemented in software, that is, the processor implements it by reading and executing software code instructions of a specific function; implementing the data processing module 702 in software is only a preferred implementation of the present invention, and those skilled in the art may also implement its functions using hardware logic such as a digital signal processor (DSP), which is not limited by the present invention.
- the host HOST determines, according to the average I/O throughput of the multiple VMs at the current moment, whether to increase or decrease the worker threads for processing the VMs between the front-end devices and the back-end devices of the multiple VMs: when the average I/O throughput of the multiple VMs at the current moment is greater than the first threshold, the worker threads for processing the VMs are increased, that is, the I/O channel resources are increased, which improves the data transmission capability of the I/O channels; when the average I/O throughput of the multiple VMs at the current moment is less than the second threshold, the worker threads for processing the VMs are reduced, that is, the I/O channel resources are reduced, which avoids wasting I/O channel resources.
Priority Applications (6)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
KR1020137034172A KR101559097B1 (ko) | 2013-01-24 | 2013-08-05 | Method and apparatus for adjusting I/O channel on virtual platform |
EP13802520.0A EP2772854B1 (en) | 2013-01-24 | 2013-08-05 | Regulation method and regulation device for i/o channels in virtualization platform |
JP2014557993A JP5923627B2 (ja) | 2013-01-24 | 2013-08-05 | Method and apparatus for adjusting I/O channel on a virtual platform |
RU2013158942/08A RU2573733C1 (ru) | 2013-01-24 | 2013-08-05 | Method and device for adjusting the I/O channel on a virtual platform |
AU2013273688A AU2013273688B2 (en) | 2013-01-24 | 2013-08-05 | Method and apparatus for adjusting I/O channel on virtual platform |
US14/108,804 US8819685B2 (en) | 2013-01-24 | 2013-12-17 | Method and apparatus for adjusting I/O channel on virtual platform |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201310027312.7A CN103116517B (zh) | 2013-01-24 | 2013-01-24 | Method and apparatus for adjusting I/O channel under virtualization platform |
CN201310027312.7 | 2013-01-24 |
Related Child Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US14/108,804 Continuation US8819685B2 (en) | 2013-01-24 | 2013-12-17 | Method and apparatus for adjusting I/O channel on virtual platform |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2014114072A1 (zh) | 2014-07-31 |
Family
ID=48414901
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2013/080837 WO2014114072A1 (zh) | 2013-08-05 | 2014-07-31 | Method and apparatus for adjusting I/O channel under virtualization platform |
Country Status (7)
Country | Link |
---|---|
EP (1) | EP2772854B1 (zh) |
JP (1) | JP5923627B2 (zh) |
KR (1) | KR101559097B1 (zh) |
CN (1) | CN103116517B (zh) |
AU (1) | AU2013273688B2 (zh) |
RU (1) | RU2573733C1 (zh) |
WO (1) | WO2014114072A1 (zh) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109857464A (zh) * | 2017-11-30 | 2019-06-07 | Industrial Technology Research Institute | System and method for platform deployment and operation of mobile operating system |
Families Citing this family (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR100578350B1 (ko) * | 2002-08-29 | 2006-05-11 | LG Electronics Inc. | Dust collecting case of vacuum cleaner |
US8819685B2 (en) | 2013-01-24 | 2014-08-26 | Huawei Technologies Co., Ltd. | Method and apparatus for adjusting I/O channel on virtual platform |
CN103116517B (zh) | 2013-01-24 | 2016-09-14 | Huawei Technologies Co., Ltd. | Method and apparatus for adjusting I/O channel under virtualization platform |
WO2015168946A1 (zh) * | 2014-05-09 | 2015-11-12 | Huawei Technologies Co., Ltd. | Fast input/output packet processing method, apparatus and system |
WO2016101282A1 (zh) * | 2014-12-27 | 2016-06-30 | Huawei Technologies Co., Ltd. | I/O task processing method, device and system |
CN109240802B (zh) * | 2018-09-21 | 2022-02-18 | Beijing Baidu Netcom Science and Technology Co., Ltd. | Request processing method and apparatus |
KR102212512B1 (ko) * | 2019-02-28 | 2021-02-04 | Research & Business Foundation Sungkyunkwan University | Software-based covert channel configuration system using mutex objects in virtualization technology |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20020049869A1 (en) * | 2000-10-25 | 2002-04-25 | Fujitsu Limited | Virtual computer system and method for swapping input/output devices between virtual machines and computer readable storage medium |
CN102317917A (zh) * | 2011-06-30 | 2012-01-11 | Huawei Technologies Co., Ltd. | CPU scheduling method for hot-spot domain virtual machine and virtual machine system |
CN102508718A (zh) * | 2011-11-22 | 2012-06-20 | Hangzhou H3C Technologies Co., Ltd. | Virtual machine load balancing method and apparatus |
CN103116517A (zh) * | 2013-01-24 | 2013-05-22 | Huawei Technologies Co., Ltd. | Method and apparatus for adjusting I/O channel under virtualization platform |
Family Cites Families (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7788665B2 (en) * | 2006-02-28 | 2010-08-31 | Microsoft Corporation | Migrating a virtual machine that owns a resource such as a hardware device |
US8443398B2 (en) * | 2006-11-01 | 2013-05-14 | Skyfire Labs, Inc. | Architecture for delivery of video content responsive to remote interaction |
CN101499021A (zh) | 2008-01-31 | 2009-08-05 | International Business Machines Corporation | Method and apparatus for dynamically allocating resources on multiple virtual machines |
US8387059B2 (en) * | 2008-07-02 | 2013-02-26 | International Business Machines Corporation | Black-box performance control for high-volume throughput-centric systems |
US9152464B2 (en) * | 2010-09-03 | 2015-10-06 | Ianywhere Solutions, Inc. | Adjusting a server multiprogramming level based on collected throughput values |
JP2012234425A (ja) * | 2011-05-06 | 2012-11-29 | Canon Inc | Image processing apparatus and image processing method |
CN102591702B (zh) * | 2011-12-31 | 2015-04-15 | Huawei Technologies Co., Ltd. | Virtualization processing method and related apparatus and computer system |
- 2013
- 2013-01-24 CN CN201310027312.7A patent/CN103116517B/zh active Active
- 2013-08-05 KR KR1020137034172A patent/KR101559097B1/ko active IP Right Grant
- 2013-08-05 WO PCT/CN2013/080837 patent/WO2014114072A1/zh active Application Filing
- 2013-08-05 AU AU2013273688A patent/AU2013273688B2/en active Active
- 2013-08-05 JP JP2014557993A patent/JP5923627B2/ja active Active
- 2013-08-05 RU RU2013158942/08A patent/RU2573733C1/ru active
- 2013-08-05 EP EP13802520.0A patent/EP2772854B1/en active Active
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109857464A (zh) * | 2017-11-30 | 2019-06-07 | Industrial Technology Research Institute | System and method for platform deployment and operation of mobile operating system |
CN109857464B (zh) * | 2017-11-30 | 2022-02-25 | Industrial Technology Research Institute | System and method for platform deployment and operation of mobile operating system |
Also Published As
Publication number | Publication date |
---|---|
AU2013273688B2 (en) | 2015-08-06 |
EP2772854A4 (en) | 2014-11-19 |
AU2013273688A1 (en) | 2014-08-07 |
EP2772854A1 (en) | 2014-09-03 |
KR101559097B1 (ko) | 2015-10-08 |
JP2015513732A (ja) | 2015-05-14 |
JP5923627B2 (ja) | 2016-05-24 |
RU2573733C1 (ru) | 2016-01-27 |
CN103116517B (zh) | 2016-09-14 |
KR20140119624A (ko) | 2014-10-10 |
CN103116517A (zh) | 2013-05-22 |
EP2772854B1 (en) | 2018-10-24 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2014114072A1 (zh) | Method and apparatus for adjusting I/O channel under virtualization platform | |
US10325343B1 (en) | Topology aware grouping and provisioning of GPU resources in GPU-as-a-Service platform | |
US10552222B2 (en) | Task scheduling method and apparatus on heterogeneous multi-core reconfigurable computing platform | |
US9459904B2 (en) | NUMA I/O aware network queue assignments | |
US9710310B2 (en) | Dynamically configurable hardware queues for dispatching jobs to a plurality of hardware acceleration engines | |
JP5689526B2 (ja) | マルチキュー・ネットワーク・アダプタの動的再構成によるリソース・アフィニティ | |
US9092269B2 (en) | Offloading virtual machine flows to physical queues | |
US8819685B2 (en) | Method and apparatus for adjusting I/O channel on virtual platform | |
WO2016078178A1 (zh) | Virtual CPU scheduling method | |
US20090055831A1 (en) | Allocating Network Adapter Resources Among Logical Partitions | |
KR102309798B1 (ko) | Sr-iov 기반 비휘발성 메모리 컨트롤러 및 그 비휘발성 메모리 컨트롤러에 의해 큐에 리소스를 동적 할당하는 방법 | |
US9389921B2 (en) | System and method for flexible device driver resource allocation | |
US10489208B1 (en) | Managing resource bursting | |
WO2015101091A1 (zh) | Distributed resource scheduling method and apparatus | |
WO2022271239A1 (en) | Queue scaling based, at least, in part, on processing load | |
US20190044832A1 (en) | Technologies for optimized quality of service acceleration | |
Ekane et al. | FlexVF: Adaptive network device services in a virtualized environment | |
Silva et al. | VM performance isolation to support QoS in cloud | |
WO2023159652A1 (zh) | AI system, memory access control method, and related device | |
WO2024027395A1 (zh) | Data processing method and apparatus | |
US20220058062A1 (en) | System resource allocation for code execution | |
US10877552B1 (en) | Dynamic power reduction through data transfer request limiting | |
WO2023173961A1 (zh) | Memory allocation method and related product | |
Brunet et al. | Short paper: Dynamic optimization of communications over high speed networks | |
TALENCE | Short Paper: Dynamic Optimization of Communications over High Speed Networks |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
WWE | Wipo information: entry into national phase |
Ref document number: 2013802520 Country of ref document: EP Ref document number: 2013273688 Country of ref document: AU |
|
ENP | Entry into the national phase |
Ref document number: 20137034172 Country of ref document: KR Kind code of ref document: A |
|
ENP | Entry into the national phase |
Ref document number: 2014557993 Country of ref document: JP Kind code of ref document: A |
|
ENP | Entry into the national phase |
Ref document number: 2013158942 Country of ref document: RU Kind code of ref document: A |
|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 13802520 Country of ref document: EP Kind code of ref document: A1 |
|
NENP | Non-entry into the national phase |
Ref country code: DE |