WO2015106497A1 - Dynamic interrupt balancing mapping method based on current VCPU scheduling state - Google Patents

Dynamic interrupt balancing mapping method based on current VCPU scheduling state

Info

Publication number
WO2015106497A1
WO2015106497A1 PCT/CN2014/075253
Authority
WO
WIPO (PCT)
Prior art keywords
vcpu
interrupt
virtual
active
target
Prior art date
Application number
PCT/CN2014/075253
Other languages
English (en)
French (fr)
Inventor
管海兵
李健
马汝辉
朱敏君
周凡夫
Original Assignee
上海交通大学
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 上海交通大学
Priority to US 14/412,188 (granted as US9697041B2)
Publication of WO2015106497A1

Links

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/48Program initiating; Program switching, e.g. by interrupt
    • G06F9/4806Task transfer initiation or dispatching
    • G06F9/4812Task transfer initiation or dispatching by interrupt, e.g. masked
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F13/00Interconnection of, or transfer of information or other signals between, memories, input/output devices or central processing units
    • G06F13/14Handling requests for interconnection or transfer
    • G06F13/20Handling requests for interconnection or transfer for access to input/output bus
    • G06F13/24Handling requests for interconnection or transfer for access to input/output bus using interrupt
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44Arrangements for executing specific programs
    • G06F9/455Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
    • G06F9/45504Abstract machines for programme code execution, e.g. Java virtual machine [JVM], interpreters, emulators
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44Arrangements for executing specific programs
    • G06F9/455Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
    • G06F9/45533Hypervisors; Virtual machine monitors
    • G06F9/45558Hypervisor-specific management and integration aspects
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/461Saving or restoring of program or task context
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/48Program initiating; Program switching, e.g. by interrupt
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44Arrangements for executing specific programs
    • G06F9/455Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
    • G06F9/45533Hypervisors; Virtual machine monitors
    • G06F9/45558Hypervisor-specific management and integration aspects
    • G06F2009/45579I/O management, e.g. providing access to device drivers or storage

Definitions

  • The present invention relates to the fields of computer system virtualization, virtual machine interrupt handling, and virtual machine schedulers. Specifically, it relates to a dynamic interrupt balancing mapping method based on the current VCPU scheduling state.
  • Virtualization technology consolidates computing or storage functions that would otherwise require multiple physical devices onto a single, relatively powerful physical server, thereby integrating and redistributing hardware resources and improving the utilization of hardware devices. It plays a very important role in cloud computing and data center construction.
  • Virtual machines have many obvious advantages over real physical devices.
  • Cloud computing and virtualization technologies allow enterprise users to run their workloads on virtual machines, so users do not need to purchase a real set of IT equipment, effectively reducing the economic and management costs of maintaining an IT infrastructure.
  • The image-based approach to creating virtual machines is flexible and practical; image backup techniques handle server disaster recovery, rebuilding, and batch replication well.
  • The virtual machine management model based on the Virtual Machine Monitor (VMM) can flexibly handle the mapping between physical and virtual resources at both the hardware and software levels, and provides a series of necessary functions such as performance isolation, security isolation, and status monitoring.
  • A virtual machine monitor is a software management layer between the hardware and traditional operating systems. Its main role is to manage the real physical devices, such as physical CPUs and memory, and to abstract the underlying hardware into corresponding virtual device interfaces, so that multiple operating systems obtain the virtual hardware they each need and can run on the same physical device at the same time.
  • The virtual machine monitor introduced between the physical device and the guest operating system acts as an intermediate layer and inevitably affects the performance of the guest operating system. One important aspect is increased interrupt response latency. When multiple VCPUs share one physical CPU, the physical CPU must be time-multiplexed; the time a VCPU waits in the VMM scheduling queue is reflected in that VCPU's response time to events, which severely increases the response latency of events such as interrupts.
  • In existing VMM implementations such as KVM and XEN, interrupts inside a virtual machine are mainly handled through virtual interrupt processing devices.
  • In XEN, virtual machines fall into two types according to the virtualization mode: full virtualization and paravirtualization. Under full virtualization the guest operating system needs no modification, and XEN implements virtual interrupt handling through a virtual interrupt processing platform; under paravirtualization the guest kernel must be modified to fit the host operating system, and XEN implements interrupt and event handling through the event channel mechanism.
  • KVM makes no such distinction between full virtualization and paravirtualization: virtual machine interrupts in KVM are always handled by virtual interrupt processing devices.
  • In the interrupt architecture of a virtual machine, as on a physical platform, each VCPU corresponds to a virtual local APIC (Advanced Programmable Interrupt Controller) for receiving interrupts.
  • The virtual platform also includes virtual devices such as a virtual I/O APIC and a virtual PIC for sending interrupts.
  • The virtual I/O APIC, virtual local APIC, and virtual PIC are all software entities maintained by the VMM.
  • Typically, a virtual device of an SMP virtual machine, such as a virtual network card, invokes the virtual I/O APIC to issue an interrupt request; the virtual I/O APIC selects a VCPU as the receiver of the interrupt according to the mapping between interrupts and VCPUs, and sends the interrupt request to the virtual local APIC of the target VCPU.
  • The virtual local APIC then uses the VT-x event injection mechanism to complete the injection of the virtual interrupt.
  • In a multi-VCPU virtual machine, the VCPUs time-multiplex the physical CPU under the scheduler, so at any moment some VCPUs are active while others are queued.
  • In existing virtual machine interrupt handling, when the virtual I/O APIC needs to map an interrupt request to a VCPU, it does not consider the current scheduling state of the VCPUs and blindly assigns a VCPU to the interrupt.
  • When a virtual interrupt is assigned to a VCPU in the scheduling wait state, that VCPU's waiting delay in the scheduling queue becomes part of the interrupt's response delay, greatly increasing the response latency of interrupt requests and reducing the rate at which they are processed.
  • The technical problem to be solved by the present invention is to provide a dynamic interrupt balancing mapping method based on the current VCPU scheduling state and interrupt-processing-load analysis. When mapping an interrupt, the method considers both the scheduling state, in the VMM scheduler, of the VCPU targeted by the mapping and the interrupt load balance among the VCPUs, thereby effectively reducing interrupt processing latency.
  • The present invention provides a dynamic interrupt balancing mapping method based on the current VCPU scheduling state and interrupt-processing-load analysis, characterized in that: after the virtual I/O APIC of an SMP virtual machine receives a virtual interrupt that must be mapped to one of the virtual machine's VCPUs, it analyzes which VCPUs are active according to the scheduling state of all VCPUs of the current VM in the VMM scheduler, and maps the virtual interrupt to an active VCPU to obtain a lower interrupt processing delay. If multiple VCPUs are active at the same time, the interrupt load of each active VCPU is further considered, and the interrupt is mapped to the active VCPU with the lowest current load to achieve VCPU interrupt load balancing.
  • The invention provides a dynamic interrupt balancing mapping method based on the current VCPU scheduling state, characterized in that the method comprises the following steps:
  • (1) Whenever the virtual hardware generates a virtual interrupt, the virtual I/O APIC receives the virtual interrupt and transmits it toward the virtual local APIC of the target VCPU;
  • (2) the interrupt balancing allocator intercepts the virtual interrupt before it is delivered to the virtual local APIC of the target VCPU;
  • (3) the interrupt balancing allocator consults the scheduling state information provided by the scheduler and obtains the list of active VCPUs;
  • (4) the interrupt balancing allocator reselects the target VCPU according to the active VCPU list;
  • (5) the interrupt balancing allocator delivers the virtual interrupt to the virtual local APIC of the target VCPU reselected in step (3);
  • (6) the virtual local APIC of the target VCPU injects the virtual interrupt into the target VCPU.
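The six steps above amount to an interception pipeline between the virtual I/O APIC and the virtual local APICs. A minimal sketch of that pipeline is given below; it is an illustration only, not the patent's XEN/KVM code, and every type and function name (`vcpu`, `balancer_select`, `ioapic_deliver`, and so on) is hypothetical.

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical model of steps (1)-(6); all names are illustrative. */
enum vcpu_state { VCPU_WAITING, VCPU_ACTIVE };

struct vcpu {
    int id;
    enum vcpu_state state;
    int irq_load;              /* outstanding virtual interrupts */
    int injected;              /* last virtual interrupt vector injected */
};

struct vm {
    struct vcpu vcpus[4];
    int nr_vcpus;
};

/* Step (6): the target VCPU's virtual local APIC injects the interrupt. */
static void local_apic_inject(struct vcpu *v, int vector)
{
    v->injected = vector;
    v->irq_load++;
}

/* Steps (2)-(4): intercept, consult scheduling state, reselect target. */
static struct vcpu *balancer_select(struct vm *vm)
{
    struct vcpu *best = NULL;
    for (int i = 0; i < vm->nr_vcpus; i++) {
        struct vcpu *v = &vm->vcpus[i];
        if (v->state != VCPU_ACTIVE)
            continue;
        if (best == NULL || v->irq_load < best->irq_load)
            best = v;          /* least-loaded active VCPU wins */
    }
    return best;               /* NULL: no VCPU active (prediction path) */
}

/* Steps (1) and (5): the virtual I/O APIC path with the allocator in it. */
static struct vcpu *ioapic_deliver(struct vm *vm, int vector)
{
    struct vcpu *target = balancer_select(vm);
    if (target != NULL)
        local_apic_inject(target, vector);
    return target;
}
```

The key structural point is that `ioapic_deliver` no longer routes by a fixed table: the allocator's choice replaces the static mapping before injection.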
  • Further, the method of selecting the target VCPU in step (3) includes the steps of:
  • (31) obtaining the number of VCPUs in the active VCPU list;
  • (32) if the number is 0, selecting the target VCPU according to the scheduling state information; if the number is 1, selecting the VCPU in the active VCPU list as the target VCPU; if the number is greater than 1, selecting the target VCPU according to the interrupt load of the VCPUs in the active VCPU list.
  • When there is only one VCPU in the active VCPU list, that VCPU is selected as the target VCPU and the virtual interrupt is mapped to this active VCPU, to obtain a lower interrupt processing delay.
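Substeps (31)-(32) reduce to a three-way branch on the number of active VCPUs. A small sketch of that dispatch, assuming the active set is kept as a bitmask (an assumption for illustration; the names are not the patent's):

```c
#include <assert.h>

/* Illustrative sketch of substeps (31)-(32); names are hypothetical. */
enum pick_policy { PICK_PREDICT_SOONEST,  /* count == 0 */
                   PICK_THE_ONLY_ONE,     /* count == 1 */
                   PICK_MIN_IRQ_LOAD };   /* count  > 1 */

/* Step (31): count the set bits in an active-VCPU bitmask. */
static int count_active(unsigned long active_mask)
{
    int n = 0;
    while (active_mask) {
        n += active_mask & 1;
        active_mask >>= 1;
    }
    return n;
}

/* Step (32): choose the selection policy from the count. */
static enum pick_policy choose_policy(unsigned long active_mask)
{
    int n = count_active(active_mask);
    if (n == 0) return PICK_PREDICT_SOONEST;
    if (n == 1) return PICK_THE_ONLY_ONE;
    return PICK_MIN_IRQ_LOAD;
}
```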
  • Further, in step (32), the method of selecting the target VCPU according to the scheduling state information includes the steps of:
  • (321) reading the scheduling state information;
  • (332) selecting, among all VCPUs, the VCPU predicted to enter the active state soonest as the target VCPU.
  • When the active VCPU list contains no VCPU, i.e. no VCPU is active, the VCPU predicted to enter the active state soonest is selected as the target VCPU and the virtual interrupt is mapped to it, to obtain a lower interrupt processing delay.
  • Further, the basis for predicting in step (332) which VCPU will enter the active state soonest is whether a VCPU is in the idle state.
  • A VCPU in the idle state is likely to enter the active state soonest, because the CREDIT scheduler is designed to allow a VCPU in the IDLE state to preempt when an event needs to be answered. Therefore the idle VCPU is selected as the target VCPU and the virtual interrupt is mapped to that VCPU, to obtain a lower interrupt processing delay.
  • Further, if no VCPU is idle, the prediction in step (332) of which VCPU will enter the active state soonest is based on each VCPU's position in its waiting queue and its remaining credit value.
  • For all VCPUs, priority is given to their position in their respective waiting queues: the closer a VCPU is to the head of the queue, the more likely it is to enter the active state soonest. When VCPUs are equally ranked, their remaining credit values are compared and the VCPU with the larger remaining credit is selected as the target VCPU, to obtain a lower interrupt processing delay.
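The prediction order described above (idle state first, then queue position, then remaining credit as a tie-breaker) can be sketched as a single comparison loop. This is a hedged illustration; the field names (`is_idle`, `queue_pos`, `credit`) are assumptions standing in for the scheduler information the patent reads:

```c
#include <assert.h>
#include <stddef.h>

/* Hedged sketch of the prediction in step (332): prefer an idle VCPU;
 * otherwise prefer the VCPU nearest its run-queue head, breaking ties
 * by the larger remaining credit. Field names are illustrative. */
struct waiting_vcpu {
    int id;
    int is_idle;     /* 1 if the VCPU is in the IDLE state */
    int queue_pos;   /* 0 == head of its waiting queue */
    int credit;      /* remaining credit under the CREDIT scheduler */
};

static const struct waiting_vcpu *
predict_soonest_active(const struct waiting_vcpu *v, int n)
{
    const struct waiting_vcpu *best = NULL;
    for (int i = 0; i < n; i++) {
        if (best == NULL) { best = &v[i]; continue; }
        /* An idle VCPU may preempt to answer events, so it wins outright. */
        if (v[i].is_idle != best->is_idle) {
            if (v[i].is_idle) best = &v[i];
            continue;
        }
        /* Closer to the queue head is expected to run sooner. */
        if (v[i].queue_pos != best->queue_pos) {
            if (v[i].queue_pos < best->queue_pos) best = &v[i];
            continue;
        }
        /* Same rank: the larger remaining credit wins. */
        if (v[i].credit > best->credit) best = &v[i];
    }
    return best;
}
```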
  • Further, in step (32), the method of selecting the target VCPU according to the interrupt load of the VCPUs in the active VCPU list includes the steps of:
  • (323) reading the VCPU interrupt load table maintained by the current virtual machine structure;
  • (334) comparing the interrupt loads of the VCPUs in the active VCPU list and selecting the VCPU with the smallest interrupt load as the target VCPU.
  • When the active VCPU list contains multiple VCPUs, the virtual interrupt is mapped to the active VCPU with the lowest current interrupt load, achieving VCPU interrupt load balancing while obtaining a lower interrupt processing delay.
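Substeps (323)-(334) are a scan of the per-VM interrupt load table restricted to active VCPUs. A minimal sketch, assuming (purely for illustration) that the load table is an array indexed by VCPU id and the active set is a bitmask:

```c
#include <assert.h>

/* Sketch of substeps (323)-(334): pick the least-loaded VCPU among those
 * marked active. The table layout is an assumption for illustration. */
#define MAX_VCPUS 8

static int min_load_active_vcpu(const int irq_load[MAX_VCPUS],
                                unsigned int active_mask)
{
    int best = -1;
    for (int i = 0; i < MAX_VCPUS; i++) {
        if (!(active_mask & (1u << i)))
            continue;                       /* skip waiting VCPUs */
        if (best < 0 || irq_load[i] < irq_load[best])
            best = i;
    }
    return best;    /* -1 if no VCPU is active */
}
```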
  • Further, the virtual hardware in step (1) includes virtual devices and/or physical devices whose interrupts are handled via the VMM.
  • Further, in step (5), the virtual interrupt injection is completed through the VT-x event injection mechanism.
  • Further, a member variable sched_info is added to the virtual machine SHARED_INFO structure to record the scheduling state of the VCPUs.
  • Further, whenever the scheduler performs a VCPU switch, after the swapped-in VCPU and the swapped-out VCPU complete the context switch, the swapped-in VCPU becomes the active VCPU; the sched_info member of the SHARED_INFO structure of the virtual machine to which the active VCPU belongs is marked active, and the sched_info member of the SHARED_INFO structure of the virtual machine to which the swapped-out VCPU belongs is marked waiting.
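The active/waiting marking at context-switch time can be pictured as flipping per-VCPU bits in sched_info. The sketch below is a simplified stand-in for the patent's SHARED_INFO update, assuming one bit per VCPU with a set bit meaning active; it is not the patent's exact bit expression.

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative sketch of recording VCPU scheduling state in a sched_info
 * bitfield, one bit per VCPU (bit set = active). A simplified stand-in
 * for the patent's SHARED_INFO update, not its exact code. */
struct shared_info_sketch {
    uint64_t sched_info;
};

/* Called after prev and next complete their context switch. */
static void mark_switch(struct shared_info_sketch *si,
                        int prev_vcpu_id, int next_vcpu_id)
{
    si->sched_info &= ~(UINT64_C(1) << prev_vcpu_id); /* swapped out: waiting */
    si->sched_info |=  (UINT64_C(1) << next_vcpu_id); /* swapped in: active  */
}
```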
  • Compared with the prior art, the dynamic interrupt mapping method based on the current VCPU active state provided by the present invention has the following beneficial technical effects:
  • (1) according to the current scheduling state of the VCPUs, virtual interrupts are mapped to active VCPUs, effectively reducing the interrupt-request processing delay caused by scheduling delay, ensuring that virtual interrupts are injected into the target VCPU in time, obtaining a lower interrupt processing delay, and improving interrupt response speed;
  • (2) when multiple VCPUs are active, the VCPU with the smallest interrupt load is selected as the target VCPU, further ensuring interrupt-load balance among the VCPUs, making the VCPU load under the SMP structure more symmetrical and promoting balanced overall performance of all VCPUs under the SMP structure;
  • (3) the present invention effectively reduces interrupt response delay without introducing additional waste of computing power, which benefits compute-intensive workloads and ensures high CPU utilization.
  • FIG. 1 is a schematic diagram of the interrupt handling structure on a physical platform;
  • FIG. 2 is a schematic diagram of the virtual interrupt handler in existing SMP virtual machine technology;
  • FIG. 3 is a schematic diagram of a virtual interrupt handler according to an embodiment of the present invention;
  • FIG. 4 is a schematic diagram of the process by which the virtual interrupt handler of FIG. 3 establishes a virtual interrupt mapping.
  • FIG. 1 shows the interrupt handling structure on a physical platform. The I/O device first issues an interrupt request through the interrupt controller (I/O APIC or PIC); the request is sent via the PCI bus to the system bus; finally the local APIC of the target CPU receives the interrupt, and the target CPU begins processing it.
  • FIG. 2 shows the structure of the virtual interrupt handler in existing SMP virtual machine technology. In a virtualized environment, the VMM must present the guest operating system with a virtual interrupt architecture similar to the physical one. As on a real physical platform, under the virtual SMP architecture each VCPU corresponds to a virtual local APIC for receiving interrupts, while the VMM provides a virtual I/O APIC for sending interrupts. When a virtual device, such as a virtual network card, needs to send an interrupt, it first calls the interface provided by the virtual I/O APIC. Based on the interrupt request it receives, the virtual I/O APIC selects a VCPU as the handler of the interrupt and hands the interrupt to that VCPU's local APIC. Finally, the virtual local APIC uses the VT-x event injection mechanism to inject the interrupt into the corresponding destination VCPU, which handles it. Like the VCPU, the virtual local APIC, virtual I/O APIC, and virtual PIC involved in the virtual interrupt handler are software entities maintained by the VMM.
  • FIG. 3 shows a virtual interrupt handler according to one embodiment of the present invention. Compared with the existing virtual interrupt handler, an interrupt balancing allocator is added in this embodiment, sitting between the original virtual I/O APIC and the virtual local APICs.
  • The interrupt balancing allocator reads the basic scheduling information of the VCPUs from the VMM scheduler and determines the current scheduling state of the virtual machine's VCPUs, i.e. which VCPUs are active and which are waiting to be scheduled. When the virtual I/O APIC needs to send an interrupt request, the interrupt balancing allocator intercepts it, selects the target VCPU in place of the virtual I/O APIC based on this knowledge, and sends the interrupt to the virtual local APIC of the target VCPU to initiate interrupt injection.
  • FIG. 4 illustrates the working process by which the interrupt balancing allocator of the virtual interrupt handler shown in FIG. 3 establishes the virtual interrupt mapping. When the allocator has an interrupt request to process, it first analyzes the scheduling and interrupt-load situation of each VCPU of the current virtual machine based on the scheduling information obtained from the scheduler.
  • If exactly one VCPU is currently active, i.e. only one of the virtual machine's VCPUs actually holds the right to run on a physical CPU, that VCPU is recorded as the target VCPU.
  • If no VCPU is currently active, the allocator estimates how likely each VCPU is to become active and selects the one expected to become active soonest as the target VCPU. The estimate is based on information provided by the scheduler, such as whether a VCPU is in the IDLE state and, under the CREDIT scheduler, each VCPU's remaining credit value.
  • A VCPU in the IDLE state is the most suitable target choice, because the CREDIT scheduler is designed to allow an IDLE VCPU to preempt when an event needs a response.
  • Otherwise, priority is given to each VCPU's position in its waiting queue: the closer to the head, the sooner it is likely to become active. Among equally ranked VCPUs, the remaining credit values are compared and the VCPU with the larger remaining credit is chosen as the target VCPU, to obtain a lower interrupt processing delay.
  • If multiple VCPUs are currently active, the allocator reads the interrupt load of each active VCPU in turn and selects the least-loaded one as the target VCPU. After the target VCPU is determined, the allocator continues the interrupt handling flow and initiates interrupt injection in place of the virtual I/O APIC.
  • Step 1: Whenever the VMM's VCPU scheduler performs a valid VCPU switch, after the swapped-in VCPU and the swapped-out VCPU complete the context switch, the swapped-in VCPU becomes the active VCPU and actually runs on the physical CPU.
  • At this point, the virtual machine structure to which the VCPU belongs marks the swapped-in VCPU's state as active according to its VCPU ID, and marks the swapped-out VCPU's state as waiting.
  • Step 2: Whenever a virtual device generates a virtual interrupt to be processed, the virtual interrupt is intercepted before the virtual I/O APIC maps it to a VCPU, and the VCPU state table of the current virtual machine is consulted to obtain the list of active VCPUs. If there is exactly one VCPU in the active VCPU list, select it as the target VCPU and go to Step 5; if no VCPU is active at the current moment, go to Step 3; if more than one VCPU is active, go to Step 4.
  • Step 3: If no VCPU is active at the current moment, estimate, from the per-VCPU scheduling information provided by the scheduler, which VCPU can enter the active state earliest, select it as the target VCPU, and go to Step 5.
  • Step 4: When multiple VCPUs are active, read the VCPU interrupt load table maintained by the current virtual machine structure, compare the current interrupt load of each active VCPU, and select the VCPU with the relatively smallest load as the target VCPU.
  • Step 5: Update the interrupt load state of the target VCPU, map the current virtual interrupt to the dynamically selected target VCPU, and finally continue to complete the interrupt injection.
  • Concretely, the implementation of the interrupt mapping scheme of the present invention is divided into two parts. The steps of the first part are as follows:
  • Step 1: Add a new member variable sched_info to the SHARED_INFO structure of the original virtual machine to record the scheduling state of the VCPUs.
  • Step 2: Whenever the VMM's VCPU scheduler performs a valid VCPU switch, after the swapped-in VCPU and the swapped-out VCPU complete the context switch, the swapped-in VCPU becomes the active VCPU, i.e. actually runs on the physical CPU. The virtual machine structure to which the VCPU belongs then marks the swapped-in VCPU's state as active according to its VCPU ID, and marks the swapped-out VCPU's state as waiting.
  • The main code of this step is implemented as follows:
  • prev->domain->shared_info->native.sched_info = ((prev->domain->shared_info->native.sched_info >> (prev->vcpu_id-1)) - 1) << (prev->vcpu_id-1);
  • The second part of the implementation of the present invention modifies the existing virtual I/O APIC's interrupt mapping scheme for virtual interrupts. The specific steps are as follows:
  • Step 1: Whenever the VMM generates a virtual interrupt, intercept it before the virtual I/O APIC maps it to a VCPU, and consult the VCPU state table of the current virtual machine to obtain the list of active VCPUs. If there is exactly one VCPU in the active VCPU list, select it as the target VCPU and go to Step 4; if no VCPU is active at the current moment, go to Step 2; if more than one VCPU is active, go to Step 3.
  • Step 2: If no VCPU is active at the current moment, estimate, from the per-VCPU scheduling information provided by the scheduler, which VCPU can enter the active state earliest, select it as the target VCPU, and go to Step 4.
  • Step 3: When multiple VCPUs are active, read the VCPU interrupt load table maintained by the current virtual machine structure, compare the current interrupt load of each active VCPU, select the VCPU with the smallest load as the target VCPU, and go to Step 4.
  • Step 4: Update the interrupt load state of the target VCPU, map the current virtual interrupt to the dynamically selected target VCPU, and continue to complete the interrupt injection.
  • The core code of this part is implemented as follows:
  • Target target_search()
  • target_vcpu = most_likely_active(d);
  • tmp = tmp >> 1;
  • target_vcpu = target_vcpu + 1;
  • tmp = tmp >> 1;
  • target_vcpu = target_vcpu + 1;
  • target_vcpu = min_load_vcpu(saved_active_vcpu());
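The published fragment is incomplete, but its `tmp = tmp >> 1; target_vcpu = target_vcpu + 1;` pattern suggests a walk over the sched_info bitmask to locate an active VCPU. Below is a self-contained, hedged reconstruction of such a scan; it is not the patent's verbatim code, and `first_active_vcpu` is a name introduced here for illustration.

```c
#include <assert.h>
#include <stdint.h>

/* Hedged reconstruction of the bitmask walk hinted at by the fragment:
 * shift sched_info right one bit at a time, bumping target_vcpu, until
 * an active bit is found. Not the patent's verbatim code. */
static int first_active_vcpu(uint64_t sched_info, int nr_vcpus)
{
    uint64_t tmp = sched_info;
    int target_vcpu = 0;
    while (target_vcpu < nr_vcpus) {
        if (tmp & 1)
            return target_vcpu;   /* lowest-numbered active VCPU */
        tmp = tmp >> 1;
        target_vcpu = target_vcpu + 1;
    }
    return -1;                    /* no active VCPU: fall back to prediction */
}
```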
  • The dynamic interrupt mapping method based on the current VCPU active state maps virtual interrupts to active VCPUs according to the current VCPU scheduling state, effectively reducing the interrupt-request processing delay caused by scheduling delay, ensuring that virtual interrupts are injected into the target VCPU in time, obtaining a lower interrupt processing delay, and improving interrupt response speed.
  • When multiple VCPUs are active, the VCPU with the smallest interrupt load is selected as the target VCPU, further ensuring interrupt-load balance among the VCPUs, making the VCPU load under the SMP structure more symmetrical and promoting balanced overall performance of all VCPUs under the SMP structure.
  • The present invention effectively reduces interrupt response delay without introducing additional waste of computing power, which benefits compute-intensive workloads and ensures high CPU utilization; and since interrupts play an important role in the operating system, increasing the interrupt processing rate also improves the overall responsiveness of the virtual machine to a certain degree.

Abstract

The present invention discloses a dynamic interrupt balancing mapping method based on the current VCPU scheduling state. After the virtual I/O APIC of an SMP virtual machine receives a virtual interrupt that must be mapped to one of the virtual machine's VCPUs, the method analyzes, according to the scheduling state of all VCPUs of the current VM in the VMM scheduler, which VCPUs are active, and maps the virtual interrupt to an active VCPU to obtain a lower interrupt processing delay. If multiple VCPUs are active at the same time, the interrupt load of each active VCPU is further considered and the interrupt is mapped to the active VCPU with the lowest current load, further ensuring interrupt-load balance among the VCPUs, making the VCPU load under the SMP structure more symmetrical and promoting balanced overall performance of all VCPUs under the SMP structure.

Description

Dynamic interrupt balancing mapping method based on current VCPU scheduling state
Technical Field
The present invention relates to the fields of computer system virtualization, virtual machine interrupt handling, and virtual machine schedulers. Specifically, it relates to a dynamic interrupt balancing mapping method based on the current VCPU scheduling state.
Background Art
Virtualization technology consolidates computing or storage functions that would otherwise require multiple physical devices onto a single, relatively powerful physical server, thereby integrating and redistributing hardware resources, improving the utilization of hardware devices, and playing a very important role in cloud computing and data center construction. Compared with real physical devices, virtual machines show many obvious advantages. First, cloud computing and virtualization technologies allow enterprise users to run their workloads on virtual machines, so users do not need to purchase a real set of IT equipment, effectively reducing the economic and management costs of maintaining an IT infrastructure. Second, the image-based approach to creating virtual machines is flexible and practical; image backup techniques handle server disaster recovery, rebuilding, and batch replication well. In addition, virtual machine management based on the Virtual Machine Monitor (VMM) can flexibly handle the mapping between physical and virtual resources at both the hardware and software levels, and provides a series of necessary functions such as performance isolation, security isolation, and status monitoring.
A virtual machine monitor is a software management layer between the hardware and traditional operating systems. Its main role is to manage the real physical devices, such as physical CPUs and memory, and to abstract the underlying hardware into corresponding virtual device interfaces, so that multiple operating systems obtain the virtual hardware they each need and can run on the same physical device at the same time.
The virtual machine monitor introduced between the physical devices and the guest operating systems acts as an intermediate layer and inevitably affects the performance of the guest operating systems to some extent. One important aspect is increased interrupt response latency. When multiple VCPUs share one physical CPU, the physical CPU must be time-multiplexed; the time a VCPU waits in the VMM scheduling queue is reflected in that VCPU's response time to events, severely increasing the response latency of events such as interrupts.
Current academic research on virtual interrupt latency focuses mainly on single-core virtual machines. In a single-core virtual machine, where each VM holds only one VCPU, solutions to interrupt latency mostly shorten the time slice of physical CPU time-multiplexing, raise the switching frequency between VCPUs, or add preemption opportunities. These methods reduce scheduling delay to some extent, but they also inevitably increase the frequency of VCPU context switches, introducing extra system overhead and wasting CPU computing power. For virtual machines with the Symmetric Multi-Processor (SMP) architecture, current research concentrates mostly on VCPU co-scheduling, i.e. synchronization and communication among a VM's multiple VCPUs, while relatively little work addresses the latency of events such as interrupts.
At present, VMM implementations such as KVM and XEN mainly handle interrupts inside a virtual machine through virtual interrupt processing devices. In XEN, virtual machines fall into two types according to the virtualization mode: full virtualization and paravirtualization. Under full virtualization the guest operating system needs no modification, and XEN implements virtual interrupt handling through a virtual interrupt processing platform; under paravirtualization the guest kernel must be modified to fit the host operating system, and XEN implements interrupt and event handling through the event channel mechanism. KVM makes no such distinction: virtual machine interrupts in KVM are always handled by virtual interrupt processing devices.
In the interrupt architecture of a virtual machine, as on a physical platform, each VCPU corresponds to a virtual local APIC (Advanced Programmable Interrupt Controller) for receiving interrupts. The virtual platform also includes virtual devices such as a virtual I/O APIC and a virtual PIC for sending interrupts. Like the VCPU, the virtual I/O APIC, virtual local APIC, and virtual PIC are all software entities maintained by the VMM. Typically, a virtual device of an SMP virtual machine, such as a virtual network card, invokes the virtual I/O APIC to issue an interrupt request; the virtual I/O APIC selects a VCPU as the receiver of the interrupt according to the mapping between interrupts and VCPUs, and sends the interrupt request to the virtual local APIC of the target VCPU. The virtual local APIC then uses the VT-x event injection mechanism to complete the injection of the virtual interrupt.
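The prior-art delivery chain described above (virtual device, then virtual I/O APIC routing by a fixed interrupt-to-VCPU mapping, then the target VCPU's virtual local APIC, then VT-x injection) can be modeled in a few lines. This is a toy sketch with hypothetical names; the real XEN/KVM data structures are far richer:

```c
#include <assert.h>

/* Toy model of the prior-art virtual interrupt path: a static
 * interrupt-to-VCPU mapping, before any balancing allocator is added.
 * All names are illustrative. */
#define NR_VECTORS 256

struct vioapic {
    int route[NR_VECTORS];   /* vector -> VCPU id, fixed mapping */
};

struct vlapic {
    int pending_vector;      /* -1: nothing pending */
};

/* A virtual device raises an interrupt; the I/O APIC routes it by table,
 * regardless of whether the chosen VCPU is currently scheduled. */
static int vioapic_raise(const struct vioapic *io, struct vlapic *lapics,
                         int vector)
{
    int vcpu = io->route[vector];
    lapics[vcpu].pending_vector = vector;  /* later injected via VT-x */
    return vcpu;
}
```

The sketch makes the drawback concrete: `vioapic_raise` never consults scheduling state, which is exactly what the invention's allocator adds.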
It should be pointed out that in a multi-VCPU virtual machine, the VCPUs time-multiplex the physical CPU under the scheduler, so at any moment some VCPUs are active while others are queued. In existing virtual machine interrupt handling, when the virtual I/O APIC needs to map an interrupt request to a VCPU, it does not consider the current scheduling state of the VCPUs and blindly assigns a VCPU to the interrupt. When a virtual interrupt is assigned to a VCPU in the scheduling wait state, that VCPU's waiting delay in the scheduling queue becomes part of the interrupt's response delay, greatly increasing the response latency of interrupt requests and reducing the rate at which they are processed.
Summary of the Invention
In view of the above defects of the prior art, the technical problem to be solved by the present invention is to provide a dynamic interrupt balancing mapping method based on the current VCPU scheduling state and interrupt-processing-load analysis, which, when mapping an interrupt, considers both the scheduling state in the VMM scheduler of the VCPU targeted by the mapping and the interrupt load balance among the VCPUs, thereby effectively reducing interrupt processing latency.
To achieve the above object, the present invention provides a dynamic interrupt balancing mapping method based on the current VCPU scheduling state and interrupt-processing-load analysis, characterized in that: after the virtual I/O APIC of an SMP virtual machine receives a virtual interrupt that must be mapped to one of the virtual machine's VCPUs, the method analyzes, according to the scheduling state of all VCPUs of the current VM in the VMM scheduler, which VCPUs are active, and maps the virtual interrupt to an active VCPU to obtain a lower interrupt processing delay. If multiple VCPUs are active at the same time, the interrupt load of each active VCPU is further considered and the interrupt is mapped to the active VCPU with the lowest current load, to achieve VCPU interrupt load balancing.
The present invention provides a dynamic interrupt balancing mapping method based on the current VCPU scheduling state, characterized in that the method comprises the following steps:
(1) whenever the virtual hardware generates a virtual interrupt, the virtual I/O APIC receives the virtual interrupt and transmits it toward the virtual local APIC of the target VCPU;
(2) the interrupt balancing allocator intercepts the virtual interrupt before it is delivered to the virtual local APIC of the target VCPU;
(3) the interrupt balancing allocator consults the scheduling state information provided by the scheduler and obtains the list of active VCPUs;
(4) the interrupt balancing allocator reselects the target VCPU according to the active VCPU list;
(5) the interrupt balancing allocator delivers the virtual interrupt to the virtual local APIC of the target VCPU reselected in step (3);
(6) the virtual local APIC of the target VCPU injects the virtual interrupt into the target VCPU.
Further, the method of selecting the target VCPU in step (3) includes the steps of:
(31) obtaining the number of VCPUs in the active VCPU list;
(32) if the number is 0, selecting the target VCPU according to the scheduling state information; if the number is 1, selecting the VCPU in the active VCPU list as the target VCPU; if the number is greater than 1, selecting the target VCPU according to the interrupt load of the VCPUs in the active VCPU list.
When there is only one VCPU in the active VCPU list, that VCPU is selected as the target VCPU and the virtual interrupt is mapped to this active VCPU, to obtain a lower interrupt processing delay.
Further, in step (32), the method of selecting the target VCPU according to the scheduling state information includes the steps of:
(321) reading the scheduling state information;
(332) selecting, among all VCPUs, the VCPU predicted to enter the active state soonest as the target VCPU.
When the active VCPU list contains no VCPU, i.e. no VCPU is active, the VCPU predicted to enter the active state soonest is selected as the target VCPU and the virtual interrupt is mapped to it, to obtain a lower interrupt processing delay.
Further, the basis for predicting in step (332) which VCPU will enter the active state soonest is whether a VCPU is in the idle state.
A VCPU in the idle state is likely to enter the active state soonest, because the CREDIT scheduler is designed to allow a VCPU in the IDLE state to preempt when an event needs to be answered; therefore the idle VCPU is selected as the target VCPU and the virtual interrupt is mapped to that VCPU, to obtain a lower interrupt processing delay.
Further, if none of the VCPUs is idle, the prediction in step (332) of which VCPU will enter the active state soonest is based on each VCPU's position in its waiting queue and its remaining credit value.
For all VCPUs, priority is given to their position in their respective waiting queues: the closer a VCPU is to the head of the queue, the more likely it is to enter the active state soonest. When VCPUs are equally ranked, their remaining credit values are compared and the VCPU with the larger remaining credit is selected as the target VCPU, to obtain a lower interrupt processing delay.
Further, in step (32), the method of selecting the target VCPU according to the interrupt load of the VCPUs in the active VCPU list includes the steps of:
(323) reading the VCPU interrupt load table maintained by the current virtual machine structure;
(334) comparing the interrupt loads of the VCPUs in the active VCPU list and selecting the VCPU with the smallest interrupt load as the target VCPU.
When the active VCPU list contains multiple VCPUs, the virtual interrupt is mapped to the active VCPU with the lowest current interrupt load, achieving VCPU interrupt load balancing while obtaining a lower interrupt processing delay.
Further, the virtual hardware in step (1) includes virtual devices and/or physical devices whose interrupts are handled via the VMM.
Further, in step (5), the virtual interrupt injection is completed through the VT-x event injection mechanism.
Further, a member variable sched_info is added to the virtual machine SHARED_INFO structure to record the scheduling state of the VCPUs.
Further, whenever the scheduler performs a VCPU switch, after the swapped-in VCPU and the swapped-out VCPU complete the context switch, the swapped-in VCPU becomes the active VCPU; the sched_info member of the SHARED_INFO structure of the virtual machine to which the active VCPU belongs is marked active, and the sched_info member of the SHARED_INFO structure of the virtual machine to which the swapped-out VCPU belongs is marked waiting.
Compared with the prior art, the dynamic interrupt mapping method based on the current VCPU active state provided by the present invention has the following beneficial technical effects:
(1) According to the current scheduling state of the VCPUs, virtual interrupts are mapped to active VCPUs, effectively reducing the interrupt-request processing delay caused by scheduling delay, ensuring that virtual interrupts are injected into the target VCPU in time, obtaining a lower interrupt processing delay, and improving interrupt response speed.
(2) When multiple VCPUs are active, the VCPU with the smallest interrupt load is selected as the target VCPU, further ensuring interrupt-load balance among the VCPUs, making the VCPU load under the SMP structure more symmetrical and promoting balanced overall performance of all VCPUs under the SMP structure.
(3) Since the scheme of the present invention does not require shortening the scheduling time slice, increasing the VCPU context-switch frequency, or introducing a new preemption mechanism among VCPUs, it effectively reduces interrupt response delay without introducing additional waste of computing power, which benefits compute-intensive workloads and ensures high CPU utilization.
(4) Since interrupts play an important role in the operating system, increasing the interrupt processing rate also improves the overall responsiveness of the virtual machine to a certain degree.
Brief Description of the Drawings
FIG. 1 is a schematic diagram of the interrupt handling structure on a physical platform;
FIG. 2 is a schematic diagram of the virtual interrupt handler in existing SMP virtual machine technology;
FIG. 3 is a schematic diagram of a virtual interrupt handler according to an embodiment of the present invention;
FIG. 4 is a schematic diagram of the process by which the virtual interrupt handler of FIG. 3 establishes a virtual interrupt mapping.
Detailed Description of the Embodiments
The concept, specific structure, and technical effects of the present invention are further described below with reference to the drawings, so that its objects, features, and effects can be fully understood.
FIG. 1 shows the interrupt handling structure on a physical platform. On a physical platform, an I/O device first issues an interrupt request through the interrupt controller (I/O APIC or PIC); the request is sent via the PCI bus to the system bus; finally the local APIC of the target CPU receives the interrupt, and the target CPU begins processing it.
FIG. 2 shows the structure of the virtual interrupt handler in existing SMP virtual machine technology. In a virtualized environment, the VMM must present the guest operating system with a virtual interrupt architecture similar to the physical one. As on a real physical platform, under the virtual SMP architecture each VCPU corresponds to a virtual local APIC for receiving interrupts, while the VMM provides a virtual I/O APIC for sending interrupts. When a virtual device, such as a virtual network card, needs to send an interrupt, it first calls the interface provided by the virtual I/O APIC. Based on the interrupt request it receives, the virtual I/O APIC selects a VCPU as the handler of the interrupt and hands the interrupt to that VCPU's local APIC. Finally, the virtual local APIC uses the VT-x event injection mechanism to inject the interrupt into the corresponding destination VCPU, which handles it. Like the VCPU, the virtual local APIC, virtual I/O APIC, and virtual PIC involved in the virtual interrupt handler are software entities maintained by the VMM.
Fig. 3 shows a virtual interrupt handler according to an embodiment of the present invention. Compared with the existing virtual interrupt handler, the handler in this embodiment adds an interrupt balancing dispatcher, placed between the original virtual I/O APIC and the virtual local APICs.
The interrupt balancing dispatcher reads the basic scheduling information of the VCPUs from the scheduler of the VMM and derives the current scheduling states of the virtual machine's VCPUs, i.e. which VCPUs are active and which are waiting to be scheduled. When the virtual I/O APIC is about to deliver an interrupt request, the interrupt balancing dispatcher intercepts it, uses this knowledge to choose the target VCPU in place of the virtual I/O APIC, and then forwards the interrupt to the virtual local APIC of the target VCPU to initiate the interrupt injection.
Fig. 4 illustrates the process by which the virtual interrupt handler of Fig. 3 establishes the virtual interrupt mapping, i.e. the working procedure of the interrupt balancing dispatcher. When the dispatcher needs to handle an interrupt request, it first analyses, from the scheduling information obtained from the scheduler, the scheduling and interrupt-load situation of each VCPU of the current virtual machine.
If there is currently exactly one active VCPU, i.e. only one VCPU of the virtual machine currently holds the actual right to run and is executing on a physical CPU, that VCPU is recorded as the target VCPU.
If no VCPU is currently active, i.e. all VCPUs of the virtual machine are in the scheduler's wait queues, the dispatcher estimates how likely each VCPU is to enter the active state and selects the one that can become active soonest as the target VCPU. The estimate is based on information provided by the scheduler, such as whether a VCPU is in the IDLE state and, under the CREDIT scheduler, each VCPU's remaining credit. In general, under the CREDIT scheduler an IDLE-state VCPU is the most suitable target, because the CREDIT scheduler is designed to allow an IDLE-state VCPU to preempt when it needs to answer an event. Among all VCPUs, their positions in their respective wait queues are considered first: the closer a VCPU is to the head of its queue, the more likely it is to become active soonest. Then, when several VCPUs share the same position, their remaining credit values are compared and the one with the larger remaining credit is selected as the target VCPU, so as to obtain a lower interrupt handling latency.
If several VCPUs are currently active, the dispatcher reads the interrupt load of each active VCPU in turn and picks the least loaded one as the target VCPU. Once the target VCPU is determined, the dispatcher resumes the interrupt handling flow and initiates the interrupt injection in place of the virtual I/O APIC.
The specific steps of the dynamic interrupt mapping method of the present invention, which balances interrupt assignment based on the current VCPU active states and an analysis of the VCPUs' interrupt handling loads, are as follows:
Step 1. Whenever the VCPU scheduler of the VMM performs an effective VCPU switch, once the switched-in VCPU and the switched-out VCPU have completed their context exchange, the switched-in VCPU becomes the active VCPU and actually runs on a physical CPU; the virtual machine structure to which it belongs then marks the switched-in VCPU as active by its VCPU ID and marks the switched-out VCPU as waiting.
Step 2. Whenever a virtual device raises a virtual interrupt to be handled, the interrupt is intercepted before the virtual I/O APIC maps it to a VCPU, and the VCPU state table of the current virtual machine is consulted to obtain the list of active VCPUs. If the list contains exactly one VCPU, it is selected as the target VCPU; go to step 5. If no VCPU is currently active, go to step 3. If more than one VCPU is active, go to step 4.
Step 3. When no VCPU is currently active, the VCPU that can enter the active state earliest is predicted from the per-VCPU scheduling information provided by the scheduler and selected as the target VCPU; go to step 5.
Step 4. When several VCPUs are active, the VCPU interrupt load table maintained by the current virtual machine structure is read, the current interrupt loads of the active VCPUs are compared, and the least loaded VCPU is selected as the target VCPU.
Step 5. The interrupt load state of the target VCPU is updated, the current virtual interrupt is mapped to the dynamically selected target VCPU, and finally the interrupt injection is completed.
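Steps 2 through 5 above can be sketched as a single dispatch routine. The bitmask/array representation and the return convention are illustrative assumptions; the zero-active case is signalled to the caller, which would then fall back to the prediction of step 3.

```c
#include <assert.h>
#include <stdint.h>

/* Dispatch a virtual interrupt according to steps 2-5:
 *  0 active VCPUs  -> return -1, caller falls back to prediction (step 3);
 *  1 active VCPU   -> that VCPU (step 2, single-active fast path);
 *  >1 active VCPUs -> the active VCPU with the smallest interrupt load (step 4).
 * active_mask: bit i set when VCPU i is active; load[i]: its interrupt load. */
static int pick_target_vcpu(uint64_t active_mask,
                            unsigned long *load, int nr_vcpus)
{
    int target = -1;
    for (int i = 0; i < nr_vcpus; i++) {
        if (!(active_mask & (UINT64_C(1) << i)))
            continue;
        if (target < 0 || load[i] < load[target])
            target = i;
    }
    if (target >= 0)
        load[target]++;      /* step 5: update the target's interrupt load */
    return target;
}
```

With a single bit set in active_mask the loop degenerates to the one-active case, so the three branches of step 2 collapse into one scan.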
Specifically, the concrete implementation of the interrupt mapping scheme of the present invention falls into two parts. The steps of the first part are as follows:
Step 1. A new member variable sched_info is added to the SHARED_INFO structure of the original virtual machine to record the scheduling states of its VCPUs. The main code of this step is as follows:
struct shared_info {
    uint64_t sched_info;    /* one bit per VCPU: set = active, clear = waiting */
};
Step 2. Whenever the VCPU scheduler of the VMM performs an effective VCPU switch, once the switched-in VCPU and the switched-out VCPU have completed their context exchange, the switched-in VCPU becomes the active VCPU, i.e. actually runs on a physical CPU; the virtual machine structure to which it belongs then marks the switched-in VCPU as active by its VCPU ID and marks the switched-out VCPU as waiting. The main code of this step is as follows:
static void schedule(void)
{
    /* prev: the VCPU being switched out; next: the VCPU being switched in */
    if (check_pre_ok(prev, next))
    {
        /* clear prev's bit and set next's bit in the per-VM sched_info mask */
        prev->domain->shared_info->native.sched_info &=
            ~(UINT64_C(1) << prev->vcpu_id);
        next->domain->shared_info->native.sched_info |=
            (UINT64_C(1) << next->vcpu_id);
    }
    context_switch(prev, next);
}
The second part of the concrete implementation of the present invention modifies the interrupt mapping scheme of the existing virtual I/O APIC for virtual interrupts, in the following steps:
Step 1. Whenever the VMM raises a virtual interrupt, the virtual I/O APIC intercepts it before it is mapped to a VCPU and consults the VCPU state table of the current virtual machine to obtain the list of active VCPUs. If the list contains exactly one VCPU, it is selected as the target VCPU; go to step 4. If no VCPU is currently active, go to step 2. If more than one VCPU is active, go to step 3.
Step 2. When no VCPU is currently active, the VCPU that can enter the active state earliest is predicted from the per-VCPU scheduling information provided by the scheduler and selected as the target VCPU; go to step 4.
Step 3. When several VCPUs are active, the VCPU interrupt load table maintained by the current virtual machine structure is read, the current interrupt loads of the active VCPUs are compared, the least loaded VCPU is selected as the target VCPU, and the flow goes to step 4.
Step 4. The interrupt load state of the target VCPU is updated, the current virtual interrupt is mapped to the dynamically selected target VCPU, and the interrupt injection is completed.
The core code of this part is as follows (local declarations are elided in this excerpt):
static void vioapic_deliver(struct hvm_hw_vioapic *vioapic, int irq)
{
    target = target_search(d);   /* d: the domain owning this vIOAPIC */
    ioapic_inj_irq(vioapic, target, vector, trig_mode, delivery_mode);
}
struct vlapic *target_search(struct domain *d)
{
    int target_vcpu = 0;
    uint64_t tmp = d->shared_info->native.sched_info;   /* active-VCPU bitmask */
    if (tmp == 0)
    {
        /* no active VCPU: predict the one that will become active soonest */
        target_vcpu = most_likely_active(d);
        return vcpu_vlapic(target_vcpu);
    }
    /* advance to the first active VCPU */
    while ((tmp & 1) != 1)
    {
        tmp = tmp >> 1;
        target_vcpu = target_vcpu + 1;
    }
    if ((tmp >> 1) == 0)
    {
        /* exactly one active VCPU: deliver to it */
        return vcpu_vlapic(target_vcpu);
    }
    /* several active VCPUs: record them all, then pick the least loaded */
    while (tmp != 0)
    {
        if ((tmp & 1) == 1)
            saved_active_vcpu(target_vcpu);   /* record this active VCPU */
        tmp = tmp >> 1;
        target_vcpu = target_vcpu + 1;
    }
    target_vcpu = min_load_vcpu(saved_active_vcpu());
    return vcpu_vlapic(target_vcpu);
}
The dynamic interrupt mapping method based on current VCPU active states provided by the present invention maps virtual interrupts to active VCPUs according to the current scheduling states of the VCPUs, which effectively reduces the interrupt handling delay caused by scheduling latency, ensures that virtual interrupts are injected into the target VCPU in time, yields a lower interrupt handling latency, and improves interrupt responsiveness. When several VCPUs are active, the VCPU with the smallest interrupt load is selected as the target VCPU, which further balances the interrupt handling load among the VCPUs, makes the VCPU loads under the SMP architecture more symmetric, and thus promotes balanced overall performance of all VCPUs under the SMP architecture.
Since the scheme of the present invention does not require shortening the scheduling time slice, increasing the VCPU context-switch frequency, or introducing a new preemption mechanism among VCPUs, it effectively reduces interrupt response latency without wasting additional computing capacity, which favors compute-intensive workloads and maintains high CPU utilization. Since interrupts play an important role in an operating system, raising the interrupt handling rate also improves the overall responsiveness of the virtual machine to a certain extent.
Preferred specific embodiments of the present invention have been described in detail above. It should be understood that a person of ordinary skill in the art can make many modifications and variations according to the concept of the present invention without creative effort. Therefore, any technical solution that a person skilled in the art can obtain on the basis of the prior art through logical analysis, inference or limited experimentation in accordance with the concept of the present invention shall fall within the scope of protection determined by the claims.

Claims (10)

  1. A dynamic interrupt balanced mapping method based on current VCPU scheduling states, characterized in that the method comprises the following steps:
    (1) whenever the virtual hardware raises a virtual interrupt, the virtual I/O APIC receives the virtual interrupt and transfers it to the virtual local APIC of a target VCPU;
    (2) an interrupt balancing dispatcher intercepts the virtual interrupt before it is transferred to the virtual local APIC of the target VCPU;
    (3) the interrupt balancing dispatcher analyses the scheduling state information provided by the scheduler and obtains a list of active VCPUs;
    (4) the interrupt balancing dispatcher reselects the target VCPU according to the list of active VCPUs;
    (5) the interrupt balancing dispatcher transfers the virtual interrupt to the virtual local APIC of the target VCPU reselected in step (3);
    (6) the virtual local APIC of the target VCPU injects the virtual interrupt into the target VCPU.
  2. The method according to claim 1, characterized in that the method of selecting the target VCPU in step (3) comprises the steps of:
    (31) obtaining the number of VCPUs in the list of active VCPUs;
    (32) if the number is 0, selecting the target VCPU according to the scheduling state information; if the number is 1, selecting the VCPU in the list of active VCPUs as the target VCPU; if the number is greater than 1, selecting the target VCPU according to the interrupt loads of the VCPUs in the list of active VCPUs.
  3. The method according to claim 2, characterized in that the method of selecting the target VCPU according to the scheduling state information in step (32) comprises the steps of:
    (321) reading the scheduling state information;
    (332) selecting, among all VCPUs, the VCPU predicted to enter the active state soonest as the target VCPU.
  4. The method according to claim 3, characterized in that the VCPU predicted in step (332) to enter the active state soonest is identified by its being in the idle state.
  5. The method according to claim 3, characterized in that, if none of the VCPUs is idle, the VCPU predicted in step (332) to enter the active state soonest is identified by its position in the wait queue and its remaining credit value.
  6. The method according to claim 2, characterized in that the method of selecting the target VCPU according to the interrupt loads of the VCPUs in the list of active VCPUs in step (32) comprises the steps of:
    (323) reading the VCPU interrupt load table maintained by the current virtual machine structure;
    (334) comparing the interrupt loads of the VCPUs in the list of active VCPUs and selecting the VCPU with the smallest interrupt load as the target VCPU.
  7. The method according to claim 1, characterized in that the virtual hardware in step (1) comprises virtual devices and/or physical devices whose interrupts are handled through the VMM.
  8. The method according to claim 1, characterized in that, in step (5), the virtual interrupt injection is performed through the VT-x event injection mechanism.
  9. The method according to claim 1, characterized in that a member variable sched_info is added to the SHARED_INFO structure of the virtual machine to record the scheduling states of its VCPUs.
  10. The method according to claim 9, characterized in that, whenever the scheduler performs a VCPU switch, once the switched-in VCPU and the switched-out VCPU have completed their context exchange, the switched-in VCPU becomes an active VCPU; the sched_info member of the SHARED_INFO structure of the virtual machine to which the active VCPU belongs is marked active, and the sched_info member of the SHARED_INFO structure of the virtual machine to which the switched-out VCPU belongs is marked waiting.
PCT/CN2014/075253 2014-01-15 2014-04-14 一种基于当前vcpu调度状态的动态中断均衡映射方法 WO2015106497A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US14/412,188 US9697041B2 (en) 2014-01-15 2014-04-14 Method for dynamic interrupt balanced mapping based on current scheduling states of VCPUs

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201410018108.3 2014-01-15
CN201410018108.3A CN103744716B (zh) 2014-01-15 2014-01-15 一种基于当前vcpu调度状态的动态中断均衡映射方法

Publications (1)

Publication Number Publication Date
WO2015106497A1 true WO2015106497A1 (zh) 2015-07-23

Family

ID=50501736

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2014/075253 WO2015106497A1 (zh) 2014-01-15 2014-04-14 一种基于当前vcpu调度状态的动态中断均衡映射方法

Country Status (3)

Country Link
US (1) US9697041B2 (zh)
CN (1) CN103744716B (zh)
WO (1) WO2015106497A1 (zh)

Families Citing this family (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10095295B2 (en) * 2011-12-14 2018-10-09 Advanced Micro Devices, Inc. Method and apparatus for power management of a graphics processing core in a virtual environment
CN107003899B (zh) * 2015-10-28 2020-10-23 皓创科技(镇江)有限公司 一种中断响应方法、装置及基站
CN112347013A (zh) * 2016-04-27 2021-02-09 华为技术有限公司 一种中断处理方法以及相关装置
CN106095578B (zh) * 2016-06-14 2019-04-09 上海交通大学 基于硬件辅助技术和虚拟cpu运行状态的直接中断递交方法
CN108255572A (zh) * 2016-12-29 2018-07-06 华为技术有限公司 一种vcpu切换方法和物理主机
US10241944B2 (en) 2017-02-28 2019-03-26 Vmware, Inc. Packet processing efficiency based interrupt rate determination
CN109144679B (zh) * 2017-06-27 2022-03-29 华为技术有限公司 中断请求的处理方法、装置及虚拟化设备
CN108123850B (zh) * 2017-12-25 2020-04-24 上海交通大学 针对中断持有者抢占问题的综合调度方法及装置
US11650851B2 (en) * 2019-04-01 2023-05-16 Intel Corporation Edge server CPU with dynamic deterministic scaling
CN111124608B (zh) * 2019-12-17 2023-03-21 上海交通大学 一种面向多核虚拟机的精确低延迟中断重定向方法
CN112817690B (zh) * 2021-01-22 2022-03-18 华东计算技术研究所(中国电子科技集团公司第三十二研究所) 一种面向arm架构虚拟化领域的中断虚拟化处理方法及系统
CN114579302A (zh) * 2022-02-23 2022-06-03 阿里巴巴(中国)有限公司 资源调度方法以及装置

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101354663A (zh) * 2007-07-25 2009-01-28 联想(北京)有限公司 应用于虚拟机系统的真实cpu资源的调度方法及调度装置
CN101382923A (zh) * 2007-09-06 2009-03-11 联想(北京)有限公司 虚拟机系统及其客户操作系统的中断处理方法
US20110202699A1 (en) * 2010-02-18 2011-08-18 Red Hat, Inc. Preferred interrupt binding

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7334086B2 (en) * 2002-10-08 2008-02-19 Rmi Corporation Advanced processor with system on a chip interconnect technology
US8819676B2 (en) * 2007-10-30 2014-08-26 Vmware, Inc. Transparent memory-mapped emulation of I/O calls
US9081621B2 (en) * 2009-11-25 2015-07-14 Microsoft Technology Licensing, Llc Efficient input/output-aware multi-processor virtual machine scheduling
US9294557B2 (en) * 2013-04-19 2016-03-22 International Business Machines Corporation Hardware level generated interrupts indicating load balancing status for a node in a virtualized computing environment
US9697031B2 (en) * 2013-10-31 2017-07-04 Huawei Technologies Co., Ltd. Method for implementing inter-virtual processor interrupt by writing register data in a single write operation to a virtual register

Also Published As

Publication number Publication date
US20160259664A1 (en) 2016-09-08
CN103744716B (zh) 2016-09-07
CN103744716A (zh) 2014-04-23
US9697041B2 (en) 2017-07-04


Legal Events

Date Code Title Description
WWE Wipo information: entry into national phase

Ref document number: 14412188

Country of ref document: US

121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 14878669

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205N DATED 07.12.2016)

122 Ep: pct application non-entry in european phase

Ref document number: 14878669

Country of ref document: EP

Kind code of ref document: A1