WO2015106497A1 - Method for dynamic interrupt balanced mapping based on current scheduling states of VCPUs - Google Patents
- Publication number
- WO2015106497A1 (PCT/CN2014/075253, CN2014075253W)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- vcpu
- interrupt
- virtual
- active
- target
- Prior art date
Links
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
- G06F9/48—Program initiating; Program switching, e.g. by interrupt
- G06F9/4806—Task transfer initiation or dispatching
- G06F9/4812—Task transfer initiation or dispatching by interrupt, e.g. masked
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F13/00—Interconnection of, or transfer of information or other signals between, memories, input/output devices or central processing units
- G06F13/14—Handling requests for interconnection or transfer
- G06F13/20—Handling requests for interconnection or transfer for access to input/output bus
- G06F13/24—Handling requests for interconnection or transfer for access to input/output bus using interrupt
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/44—Arrangements for executing specific programs
- G06F9/455—Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
- G06F9/45504—Abstract machines for programme code execution, e.g. Java virtual machine [JVM], interpreters, emulators
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/44—Arrangements for executing specific programs
- G06F9/455—Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
- G06F9/45533—Hypervisors; Virtual machine monitors
- G06F9/45558—Hypervisor-specific management and integration aspects
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
- G06F9/461—Saving or restoring of program or task context
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
- G06F9/48—Program initiating; Program switching, e.g. by interrupt
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/44—Arrangements for executing specific programs
- G06F9/455—Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
- G06F9/45533—Hypervisors; Virtual machine monitors
- G06F9/45558—Hypervisor-specific management and integration aspects
- G06F2009/45579—I/O management, e.g. providing access to device drivers or storage
Definitions
- the present invention relates to the fields of computer system virtualization, virtual machine interrupt processing, and virtual machine scheduling. Specifically, the present invention relates to a dynamic interrupt equalization mapping method based on the current VCPU scheduling state.
- Virtualization technology consolidates the computing or storage functions of multiple physical devices onto a single, more powerful physical server, enabling the integration and redistribution of hardware resources and improving the utilization of hardware devices. It plays a very important role in cloud computing and data center construction.
- Virtual machines have many obvious advantages over real physical devices.
- cloud computing and virtualization technologies allow enterprise users to run their jobs on virtual machines, so users do not need to purchase real IT equipment, effectively reducing the economic and management costs of maintaining their IT infrastructure.
- the mirror-based virtual machine creation method is flexible and practical.
- image backup technology can handle server disaster recovery, reconstruction, and batch replication.
- a virtual machine monitor (VMM) manages the virtual machines; this VMM-based management model can flexibly handle the mapping between physical resources and virtual resources at both the hardware and software levels, and provides a series of necessary functions such as performance isolation, security isolation, and status monitoring.
- a virtual machine monitor is a software management layer between the hardware and traditional operating systems. Its main function is to manage the real physical devices, such as physical CPUs and memory, and to abstract the underlying hardware into corresponding virtual device interfaces, enabling multiple operating systems to obtain the virtual hardware they need and run concurrently on the same physical device.
- the virtual machine monitor introduced between the physical device and the virtual operating system acts as an intermediate layer, which inevitably affects the performance of the virtual operating system.
- One important aspect of this impact is increased interrupt response delay.
- because the physical CPU must be time-multiplexed among the VCPUs, the waiting time of a VCPU in the VMM scheduling queue is reflected in the VCPU's response time to events, causing response delay for events such as interrupts.
- the virtual interrupt processing device is mainly used to handle interrupts in the virtual machine.
- virtual machines are divided into two types according to the virtualization approach, namely full virtualization and paravirtualization.
- in full virtualization, the guest operating system needs no modification, and XEN implements virtual interrupt processing through a virtual interrupt processing platform; in paravirtualization, the guest operating system kernel must be modified to adapt to the host operating system.
- XEN implements interrupt and event processing through the event channel mechanism.
- KVM does not have the distinction between full virtualization and paravirtualization. Virtual machine interrupts are handled in KVM through virtual interrupt processing devices.
- each VCPU corresponds to a virtual local APIC (Advanced Programmable Interrupt Controller) for receiving interrupts.
- the virtual platform also includes virtual devices such as a virtual I/O APIC and a virtual PIC for sending interrupts.
- virtual I/O APIC, virtual local APIC, and virtual PIC are all software entities maintained by VMM.
- a virtual device of an SMP-architecture virtual machine, such as a virtual network card, invokes the virtual I/O APIC to issue an interrupt request; the virtual I/O APIC selects a VCPU as the receiver of the interrupt according to the mapping relationship between interrupts and VCPUs, and sends the interrupt request to the virtual local APIC of the target VCPU.
- the virtual local APIC further utilizes the event injection mechanism of VT-x to finally complete the injection of virtual interrupts.
- each VCPU time-multiplexes the physical CPU under the scheduler, so at any moment some VCPUs are active while others are queued.
- in existing virtual machine interrupt processing technology, when the virtual I/O APIC needs to map an interrupt request to a VCPU, it does not consider the current scheduling state of the VCPUs and blindly allocates a VCPU for the interrupt.
- if the virtual interrupt is allocated to a VCPU in the scheduling wait state, the VCPU's waiting delay in the scheduling queue becomes part of the interrupt's response delay, greatly increasing the response delay of the interrupt request and reducing the processing rate of interrupt requests.
- the technical problem to be solved by the present invention is to provide a dynamic interrupt equalization mapping method based on the current VCPU scheduling state and interrupt processing load analysis: while mapping an interrupt, the method considers both the scheduling state in the VMM scheduler of each VCPU that is a candidate target of the mapping and the interrupt load balance among the VCPUs, thereby effectively reducing the interrupt processing delay.
- the present invention provides a dynamic interrupt equalization mapping method based on the current VCPU scheduling state and interrupt processing load analysis, characterized in that: after the virtual I/O APIC of an SMP virtual machine receives a virtual interrupt, it must map the virtual interrupt to one of the virtual machine's VCPUs. It determines which VCPUs are active according to the scheduling state in the VMM scheduler of all of the current VM's VCPUs, and maps the virtual interrupt to an active VCPU to obtain a lower interrupt processing delay. If multiple VCPUs are active at the same time, the interrupt load of each active VCPU is further considered, and the interrupt is mapped to the active VCPU with the lowest current load to achieve VCPU interrupt load balancing.
- the invention provides a dynamic interrupt equalization mapping method based on a current VCPU scheduling state, which is characterized in that the method comprises the following steps:
- (1) whenever the virtual hardware generates a virtual interrupt, the virtual I/O APIC receives the virtual interrupt and transmits it to the virtual local APIC of the target VCPU;
- (2) the interrupt equalization allocator intercepts the virtual interrupt before it is transmitted to the virtual local APIC of the target VCPU;
- (3) the interrupt equalization allocator analyzes the scheduling status information provided by the scheduler and obtains the list of active VCPUs;
- (4) the interrupt equalization allocator reselects the target VCPU according to the active VCPU list;
- (5) the interrupt equalization allocator transmits the virtual interrupt to the virtual local APIC of the target VCPU reselected in step (3);
- (6) the virtual local APIC of the target VCPU performs the virtual interrupt injection into the target VCPU.
- the selection of the target VCPU in step (3) includes the steps of: (31) obtaining the number of VCPUs in the active VCPU list; (32) if the number is 0, selecting the target VCPU according to the scheduling status information; if the number is 1, selecting the VCPU in the active VCPU list as the target VCPU; if the number is greater than 1, selecting the target VCPU according to the interrupt load of the VCPUs in the active VCPU list.
- when there is only one VCPU in the active VCPU list, that VCPU is selected as the target VCPU, and the virtual interrupt is mapped to the active VCPU to obtain a lower interrupt processing delay.
- when no VCPU is active, the method for selecting the target VCPU includes the following steps:
- the VCPU predicted to enter the active state soonest is selected as the target VCPU, and the virtual interrupt is mapped to that VCPU to obtain a lower interrupt processing delay.
- the basis for predicting, in step (332), the VCPU that will enter the active state soonest is whether the VCPU is in an idle state.
- a VCPU in the idle state is likely to enter the active state soonest, because the CREDIT scheduler is designed to let an IDLE-state VCPU preempt when an event needs a response. Therefore, the idle VCPU is selected as the target VCPU, and the virtual interrupt is mapped to that VCPU to obtain a lower interrupt processing delay.
- if no VCPU is idle, the basis for predicting, in step (332), the VCPU that will enter the active state soonest is its position in the waiting queue and its remaining credit value.
- queue position is considered first: the closer a VCPU is to the head of its waiting queue, the more likely it is to enter the active state soon. When several VCPUs have the same rank, their remaining credit values are compared, and the VCPU with the largest remaining credit is selected as the target VCPU to obtain a lower interrupt processing delay.
- in step (32), the method for selecting the target VCPU according to the interrupt load of the VCPUs in the active VCPU list includes the steps of: reading the VCPU interrupt load table maintained by the current virtual machine structure, and comparing the interrupt loads of the active VCPUs.
- the virtual interrupt is mapped to the active VCPU with the lowest interrupt load, achieving VCPU interrupt load balancing while obtaining a lower interrupt processing delay.
- the virtual hardware in step (1) includes virtual devices and/or physical devices whose interrupts are processed via the VMM.
- step (5) the virtual interrupt injection is completed by the event injection mechanism of VT-x.
- a member variable sched_info is added on the basis of the virtual machine SHARED_INFO structure for recording the scheduling state of the VCPU.
- after the swapped-in VCPU and the swapped-out VCPU complete the context exchange, the swapped-in VCPU becomes the active VCPU; the sched_info member variable of the SHARED_INFO structure of the virtual machine to which the active VCPU belongs is marked active, and the sched_info member variable of the SHARED_INFO structure of the virtual machine to which the swapped-out VCPU belongs is marked waiting.
- the dynamic interrupt mapping method based on the current VCPU active state provided by the present invention has the following beneficial technical effects:
- the virtual interrupt is mapped to an active VCPU, which effectively reduces the interrupt request processing delay caused by scheduling delay, ensures that the virtual interrupt is injected into the target VCPU in time, obtains a lower interrupt processing delay, and increases the interrupt response speed.
- the VCPU with the smallest interrupt load is selected as the target VCPU, further ensuring load balance among the VCPUs, so that the VCPU loads under the SMP architecture are more symmetrical, promoting balanced overall performance of the VCPUs under the SMP architecture.
- the present invention effectively reduces the interrupt response delay without wasting additional computing power, which is conducive to running computation-intensive loads and ensures high CPU utilization.
- FIG. 1 is a schematic diagram of an interrupt processing structure on a physical platform
- FIG. 2 is a schematic structural diagram of a virtual interrupt processor in an existing SMP virtual machine technology
- FIG. 3 is a schematic diagram of a virtual interrupt handler in accordance with an embodiment of the present invention.
- FIG. 4 is a schematic diagram of a process of establishing a virtual interrupt map of the virtual interrupt processor shown in FIG. 3.
- FIG. 1 is a schematic diagram of an interrupt processing structure on a physical platform.
- the I/O device first issues an interrupt request through the interrupt controller (I/O APIC or PIC); the request is sent to the system bus via the PCI bus.
- the local APIC component of the target CPU receives the interrupt and the target CPU begins processing the interrupt.
- FIG. 2 is a schematic structural diagram of a virtual interrupt processor in the existing SMP virtual machine technology.
- the VMM also needs to present a virtual interrupt architecture similar to the physical interrupt architecture for the guest operating system.
- each VCPU also corresponds to a virtual local APIC used to receive interrupts, while the VMM provides a virtual I/O APIC for sending interrupts.
- a virtual device, such as a virtual network card, first calls the interface provided by the virtual I/O APIC to send an interrupt.
- based on the received interrupt request, the virtual I/O APIC selects a VCPU as the processor of the interrupt and hands the interrupt to that VCPU's virtual local APIC. Finally, the virtual local APIC uses the VT-x event injection mechanism to inject the interrupt into the destination VCPU, which then handles it. Like the VCPUs, the virtual local APIC, virtual I/O APIC, and virtual PIC involved in the virtual interrupt processor are software entities maintained by the VMM.
- FIG. 3 is a schematic diagram of a virtual interrupt handler in accordance with one embodiment of the present invention.
- in this embodiment, an interrupt equalization allocator is added to the virtual interrupt processor, sitting between the original virtual I/O APIC and the virtual local APICs.
- the interrupt equalization allocator reads the basic scheduling information of the VCPU from the scheduler of the VMM, and analyzes the scheduling state of the VCPU of the virtual machine at the current moment, that is, which VCPUs are in an active state and which VCPUs are in a scheduling wait state.
- the interrupt equalization allocator intercepts the interrupt request and, based on this knowledge, takes over from the virtual I/O APIC the choice of the target VCPU that will process the interrupt; it then sends the interrupt to the virtual local APIC of the target VCPU, initiating interrupt injection.
- FIG. 4 is a schematic diagram of a process of establishing a virtual interrupt map of the virtual interrupt processor shown in FIG. 3. The figure illustrates the working process of the interrupt equalization allocator.
- the interrupt equalization allocator first analyzes the scheduling and interrupt load conditions of the respective VCPUs of the current virtual machine based on the scheduling information obtained from the scheduler.
- if exactly one VCPU is active, that VCPU is recorded as the target VCPU.
- if no VCPU is active, the interrupt equalization allocator estimates how likely each VCPU is to enter the active state and selects the one expected to become active soonest as the target VCPU. The estimate is based on information provided by the scheduler, such as whether a VCPU is in the IDLE state and each VCPU's remaining credit value under the CREDIT scheduler.
- the VCPU in the IDLE state is the most appropriate target choice because the CREDIT scheduler design allows the VCPU in the IDLE state to preempt when a response event is needed.
- queue position is considered first: the closer a VCPU is to the head of its waiting queue, the more likely it is to enter the active state soon. When several VCPUs have the same rank, their remaining credit values are compared, and the VCPU with the largest remaining credit is selected as the target VCPU to obtain a lower interrupt processing delay.
- if multiple VCPUs are active, the interrupt equalization allocator reads the interrupt load of each active VCPU in turn and selects the VCPU with the minimum load as the target VCPU. After determining the target VCPU, the allocator continues the interrupt processing flow in place of the virtual I/O APIC and initiates interrupt injection.
- Step 1: Whenever the VMM's VCPU scheduler performs a valid VCPU switch, after the swapped-in VCPU and the swapped-out VCPU complete the context exchange, the swapped-in VCPU becomes the active VCPU and actually runs on the physical CPU.
- the status of the swapped-in VCPU is then marked as active according to its VCPU ID, and the status of the swapped-out VCPU is marked as waiting.
- Step 2: When a virtual device generates a virtual interrupt to be processed, the interrupt equalization allocator intercepts the virtual interrupt before the virtual I/O APIC maps it to a VCPU, and consults the VCPU state table of the current virtual machine to obtain the list of active VCPUs. If there is exactly one VCPU in the active list, it is selected as the target VCPU and the process goes to step 5. If no VCPU is active at the current moment, go to step 3. If more than one VCPU is active, go to step 4.
- Step 3: If no VCPU is active at the current moment, the VCPU expected to enter the active state soonest is estimated from the VCPU scheduling information provided by the scheduler and selected as the target VCPU; the process then goes to step 5.
- Step 4: When multiple VCPUs are active, the VCPU interrupt load table maintained by the current virtual machine structure is read, the current interrupt loads of the active VCPUs are compared, and the VCPU with the smallest load is selected as the target VCPU.
- Step 5: Update the interrupt load state of the target VCPU, map the current virtual interrupt to the dynamically selected target VCPU, and finally complete the interrupt injection.
- the specific implementation of the interrupt mapping scheme of the present invention is divided into two parts; the steps of the first part are as follows:
- Step 1 Add a new member variable sched_info based on the SHARED_INFO structure of the original virtual machine to record the scheduling state of the VCPU.
- the main code for this step is implemented as follows:
- Step 2: When the VMM's VCPU scheduler performs a valid VCPU switch, after the swapped-in VCPU and the swapped-out VCPU complete the context exchange, the swapped-in VCPU becomes the active VCPU, that is, it actually runs on the physical CPU. At this time, the virtual machine structure to which the VCPU belongs marks the swapped-in VCPU's status as active according to its VCPU ID, and marks the swapped-out VCPU's status as waiting.
- the main code for this step is implemented as follows:
- prev->domain->shared_info->native.sched_info = ((prev->domain->shared_info->native.sched_info >> (prev->vcpu_id - 1)) - 1) << (prev->vcpu_id - 1); /* mark the swapped-out VCPU as waiting in sched_info; the assignment and shift operators are reconstructed from the garbled listing */
- the second part of the implementation modifies the existing virtual I/O APIC's interrupt mapping scheme for virtual interrupts; the specific steps are as follows:
- Step 1: Whenever the VMM generates a virtual interrupt, the virtual interrupt is intercepted before the virtual I/O APIC maps it to a VCPU, and the VCPU state table of the current virtual machine is consulted to obtain the list of active VCPUs. If there is exactly one VCPU in the active list, it is selected as the target VCPU and the process goes to step 4. If no VCPU is active at the current moment, go to step 2. If more than one VCPU is active, go to step 3.
- Step 2: If no VCPU is active at the current moment, the VCPU expected to enter the active state soonest is estimated from the VCPU scheduling information provided by the scheduler and selected as the target VCPU; the process then goes to step 4.
- Step 3: When multiple VCPUs are active, further read the VCPU interrupt load table maintained by the current virtual machine structure, compare the current interrupt loads of the active VCPUs, and select the VCPU with the smallest load as the target VCPU; then go to step 4.
- Step 4: Update the interrupt load state of the target VCPU, map the current virtual interrupt to the dynamically selected target VCPU, and continue to complete the interrupt injection.
- the core code of this part survives only in fragments; reconstructed with the surviving identifiers, its control flow is:
- target_vcpu = target_search(d);
- /* no bit of sched_info is set: no VCPU is active, predict the next one */
- if (tmp == 0) target_vcpu = most_likely_active(d);
- /* otherwise scan the sched_info bitmap for the first active VCPU */
- while (!(tmp & 1)) { tmp = tmp >> 1; target_vcpu = target_vcpu + 1; }
- /* several active VCPUs: choose the least-loaded of the saved active set */
- target_vcpu = min_load_vcpu(saved_active_vcpu());
- the dynamic interrupt mapping method based on the current VCPU active state maps the virtual interrupt to an active VCPU according to the current VCPU scheduling state, effectively reducing the interrupt request processing delay caused by scheduling delay, ensuring that the virtual interrupt is injected into the target VCPU in time, obtaining a lower interrupt processing delay, and improving the interrupt response speed.
- the VCPU with the smallest interrupt load is selected as the target VCPU, further ensuring interrupt processing load balance among the VCPUs, so that the VCPU loads under the SMP architecture are more symmetrical, promoting balanced overall performance of the VCPUs under the SMP architecture.
- the present invention effectively reduces the interrupt response delay without introducing additional waste of computing power, which is conducive to running computation-intensive loads and ensures high CPU utilization; since interrupts play an important role in the operating system, increasing the interrupt processing rate improves the overall responsiveness of the virtual machine to a certain degree.
Abstract
Description
Claims (10)
- A dynamic interrupt balanced mapping method based on current VCPU scheduling states, characterized in that the method comprises the following steps: (1) whenever the virtual hardware generates a virtual interrupt, the virtual I/O APIC receives the virtual interrupt and transmits it to the virtual local APIC of the target VCPU; (2) an interrupt equalization allocator intercepts the virtual interrupt before it is transmitted to the virtual local APIC of the target VCPU; (3) the interrupt equalization allocator analyzes the scheduling status information provided by the scheduler and obtains a list of VCPUs in the active state; (4) the interrupt equalization allocator reselects the target VCPU according to the active VCPU list; (5) the interrupt equalization allocator transmits the virtual interrupt to the virtual local APIC of the target VCPU reselected in step (3); (6) the virtual local APIC of the target VCPU injects the virtual interrupt into the target VCPU.
- The method according to claim 1, characterized in that the method of selecting the target VCPU in step (3) comprises the steps of: (31) obtaining the number of VCPUs in the active VCPU list; (32) if the number is 0, selecting the target VCPU according to the scheduling status information; if the number is 1, selecting the VCPU in the active VCPU list as the target VCPU; if the number is greater than 1, selecting the target VCPU according to the interrupt load of the VCPUs in the active VCPU list.
- The method according to claim 2, characterized in that in step (32) the method of selecting the target VCPU according to the scheduling status information comprises the steps of: (321) reading the scheduling status information; (332) selecting, among all the VCPUs, the VCPU predicted to enter the active state soonest as the target VCPU.
- The method according to claim 3, characterized in that the basis for predicting, in step (332), the VCPU that will enter the active state soonest is that the VCPU is in an idle state.
- The method according to claim 3, characterized in that, if none of the VCPUs is in an idle state, the basis for predicting, in step (332), the VCPU that will enter the active state soonest is the VCPU's position in the waiting queue and its remaining credit value.
- The method according to claim 2, characterized in that in step (32) the method of selecting the target VCPU according to the interrupt load of the VCPUs in the active VCPU list comprises the steps of: (323) reading the VCPU interrupt load table maintained by the current virtual machine structure; (334) comparing the interrupt loads of the VCPUs in the active VCPU list and selecting the VCPU with the smallest interrupt load as the target VCPU.
- The method according to claim 1, characterized in that the virtual hardware in step (1) comprises virtual devices and/or physical devices whose interrupts are processed via the VMM.
- The method according to claim 1, characterized in that in step (5) the virtual interrupt injection is completed through the VT-x event injection mechanism.
- The method according to claim 1, characterized in that a member variable sched_info is added to the virtual machine SHARED_INFO structure to record the scheduling state of the VCPU.
- The method according to claim 9, characterized in that, whenever the scheduler performs a VCPU switch, after the swapped-in VCPU and the swapped-out VCPU complete the context exchange, the swapped-in VCPU becomes the active VCPU; the sched_info member variable of the SHARED_INFO structure of the virtual machine to which the active VCPU belongs is marked as active, and the sched_info member variable of the SHARED_INFO structure of the virtual machine to which the swapped-out VCPU belongs is marked as waiting.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US14/412,188 US9697041B2 (en) | 2014-01-15 | 2014-04-14 | Method for dynamic interrupt balanced mapping based on current scheduling states of VCPUs |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201410018108.3 | 2014-01-15 | ||
CN201410018108.3A CN103744716B (zh) | 2014-01-15 | 2014-01-15 | 一种基于当前vcpu调度状态的动态中断均衡映射方法 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2015106497A1 true WO2015106497A1 (zh) | 2015-07-23 |
Family
ID=50501736
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2014/075253 WO2015106497A1 (zh) | 2014-01-15 | 2014-04-14 | 一种基于当前vcpu调度状态的动态中断均衡映射方法 |
Country Status (3)
Country | Link |
---|---|
US (1) | US9697041B2 (zh) |
CN (1) | CN103744716B (zh) |
WO (1) | WO2015106497A1 (zh) |
Families Citing this family (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10095295B2 (en) * | 2011-12-14 | 2018-10-09 | Advanced Micro Devices, Inc. | Method and apparatus for power management of a graphics processing core in a virtual environment |
CN107003899B (zh) * | 2015-10-28 | 2020-10-23 | 皓创科技(镇江)有限公司 | 一种中断响应方法、装置及基站 |
CN112347013A (zh) * | 2016-04-27 | 2021-02-09 | 华为技术有限公司 | 一种中断处理方法以及相关装置 |
CN106095578B (zh) * | 2016-06-14 | 2019-04-09 | 上海交通大学 | 基于硬件辅助技术和虚拟cpu运行状态的直接中断递交方法 |
CN108255572A (zh) * | 2016-12-29 | 2018-07-06 | 华为技术有限公司 | 一种vcpu切换方法和物理主机 |
US10241944B2 (en) | 2017-02-28 | 2019-03-26 | Vmware, Inc. | Packet processing efficiency based interrupt rate determination |
CN109144679B (zh) * | 2017-06-27 | 2022-03-29 | 华为技术有限公司 | 中断请求的处理方法、装置及虚拟化设备 |
CN108123850B (zh) * | 2017-12-25 | 2020-04-24 | 上海交通大学 | 针对中断持有者抢占问题的综合调度方法及装置 |
US11650851B2 (en) * | 2019-04-01 | 2023-05-16 | Intel Corporation | Edge server CPU with dynamic deterministic scaling |
CN111124608B (zh) * | 2019-12-17 | 2023-03-21 | 上海交通大学 | 一种面向多核虚拟机的精确低延迟中断重定向方法 |
CN112817690B (zh) * | 2021-01-22 | 2022-03-18 | 华东计算技术研究所(中国电子科技集团公司第三十二研究所) | 一种面向arm架构虚拟化领域的中断虚拟化处理方法及系统 |
CN114579302A (zh) * | 2022-02-23 | 2022-06-03 | 阿里巴巴(中国)有限公司 | 资源调度方法以及装置 |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101354663A (zh) * | 2007-07-25 | 2009-01-28 | 联想(北京)有限公司 | 应用于虚拟机系统的真实cpu资源的调度方法及调度装置 |
CN101382923A (zh) * | 2007-09-06 | 2009-03-11 | 联想(北京)有限公司 | 虚拟机系统及其客户操作系统的中断处理方法 |
US20110202699A1 (en) * | 2010-02-18 | 2011-08-18 | Red Hat, Inc. | Preferred interrupt binding |
Family Cites Families (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7334086B2 (en) * | 2002-10-08 | 2008-02-19 | Rmi Corporation | Advanced processor with system on a chip interconnect technology |
US8819676B2 (en) * | 2007-10-30 | 2014-08-26 | Vmware, Inc. | Transparent memory-mapped emulation of I/O calls |
US9081621B2 (en) * | 2009-11-25 | 2015-07-14 | Microsoft Technology Licensing, Llc | Efficient input/output-aware multi-processor virtual machine scheduling |
US9294557B2 (en) * | 2013-04-19 | 2016-03-22 | International Business Machines Corporation | Hardware level generated interrupts indicating load balancing status for a node in a virtualized computing environment |
US9697031B2 (en) * | 2013-10-31 | 2017-07-04 | Huawei Technologies Co., Ltd. | Method for implementing inter-virtual processor interrupt by writing register data in a single write operation to a virtual register |
-
2014
- 2014-01-15 CN CN201410018108.3A patent/CN103744716B/zh active Active
- 2014-04-14 WO PCT/CN2014/075253 patent/WO2015106497A1/zh active Application Filing
- 2014-04-14 US US14/412,188 patent/US9697041B2/en active Active
Also Published As
Publication number | Publication date |
---|---|
US20160259664A1 (en) | 2016-09-08 |
CN103744716B (zh) | 2016-09-07 |
CN103744716A (zh) | 2014-04-23 |
US9697041B2 (en) | 2017-07-04 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2015106497A1 (zh) | 一种基于当前vcpu调度状态的动态中断均衡映射方法 | |
WO2014114060A1 (zh) | 基于当前credit进行预测调度的处理器资源精确分配方法 | |
WO2015020471A1 (en) | Method and apparatus for distributing data in hybrid cloud environment | |
WO2018233370A1 (zh) | 镜像同步方法、系统、设备及计算机可读存储介质 | |
WO2013016979A1 (zh) | 一种soc芯片的验证方法及系统 | |
WO2018076812A1 (zh) | 数据请求的响应方法、装置、存储介质、服务器及系统 | |
WO2015103864A1 (zh) | 内存管理的方法及Linux终端 | |
WO2021060609A1 (ko) | 복수의 엣지와 클라우드를 포함하는 분산 컴퓨팅 시스템 및 이의 적응적 지능 활용을 위한 모델 제공 방법 | |
WO2018205376A1 (zh) | 一种关联信息查询方法、终端、服务器管理系统及计算机可读存储介质 | |
WO2018014567A1 (zh) | 一种提高虚拟机性能的方法、终端、设备及计算机可读存储介质 | |
WO2018076867A1 (zh) | 数据备份的删除方法、装置、系统、存储介质和服务器 | |
WO2016000560A1 (en) | File transmission method, file transmission apparatus, and file transmission system | |
WO2019056733A1 (zh) | 并发量控制方法、应用服务器、系统及存储介质 | |
WO2015120774A1 (en) | Network access method and apparatus applied to mobile application | |
WO2016013906A1 (en) | Electronic apparatus for executing virtual machine and method for executing virtual machine | |
WO2018076433A1 (zh) | 多开应用程序方法、多开应用程序装置及终端 | |
WO2021157934A1 (ko) | 무선 통신 시스템에서 네트워크 슬라이스를 생성하기 위한 장치 및 방법 | |
WO2018090585A1 (zh) | 数据虚拟化存储方法、装置、服务器和存储介质 | |
WO2018076870A1 (zh) | 数据处理方法、装置、存储介质、服务器及数据处理系统 | |
CN112817690B (zh) | 一种面向arm架构虚拟化领域的中断虚拟化处理方法及系统 | |
WO2020218743A1 (en) | Method for controlling execution of application, electronic device and storage medium for the same | |
Singh et al. | Advanced memory reusing mechanism for virtual machines in cloud computing | |
WO2019205272A1 (zh) | 虚拟机服务提供方法、装置、设备及计算机可读存储介质 | |
Xu et al. | Enhancing performance and energy efficiency for hybrid workloads in virtualized cloud environment | |
WO2018196355A1 (zh) | 税优保单凭证生成方法、装置及计算机可读存储介质 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
WWE | Wipo information: entry into national phase |
Ref document number: 14412188 Country of ref document: US |
|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 14878669 Country of ref document: EP Kind code of ref document: A1 |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
32PN | Ep: public notification in the ep bulletin as address of the adressee cannot be established |
Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205N DATED 07.12.2016) |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 14878669 Country of ref document: EP Kind code of ref document: A1 |