CN103744716B - Dynamic interrupt balanced mapping method based on current VCPU scheduling state - Google Patents
Dynamic interrupt balanced mapping method based on current VCPU scheduling state
- Publication number
- CN103744716B CN103744716B CN201410018108.3A CN201410018108A CN103744716B CN 103744716 B CN103744716 B CN 103744716B CN 201410018108 A CN201410018108 A CN 201410018108A CN 103744716 B CN103744716 B CN 103744716B
- Authority
- CN
- China
- Prior art keywords
- vcpu
- virtual
- interrupt
- active state
- target
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
- G06F9/48—Program initiating; Program switching, e.g. by interrupt
- G06F9/4806—Task transfer initiation or dispatching
- G06F9/4812—Task transfer initiation or dispatching by interrupt, e.g. masked
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F13/00—Interconnection of, or transfer of information or other signals between, memories, input/output devices or central processing units
- G06F13/14—Handling requests for interconnection or transfer
- G06F13/20—Handling requests for interconnection or transfer for access to input/output bus
- G06F13/24—Handling requests for interconnection or transfer for access to input/output bus using interrupt
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/44—Arrangements for executing specific programs
- G06F9/455—Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
- G06F9/45504—Abstract machines for programme code execution, e.g. Java virtual machine [JVM], interpreters, emulators
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/44—Arrangements for executing specific programs
- G06F9/455—Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
- G06F9/45533—Hypervisors; Virtual machine monitors
- G06F9/45558—Hypervisor-specific management and integration aspects
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
- G06F9/461—Saving or restoring of program or task context
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
- G06F9/48—Program initiating; Program switching, e.g. by interrupt
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/44—Arrangements for executing specific programs
- G06F9/455—Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
- G06F9/45533—Hypervisors; Virtual machine monitors
- G06F9/45558—Hypervisor-specific management and integration aspects
- G06F2009/45579—I/O management, e.g. providing access to device drivers or storage
Abstract
The invention discloses a dynamic interrupt balanced mapping method based on the current VCPU scheduling state. After the virtual I/O APIC of an SMP virtual machine receives a virtual interrupt and needs to map it to one of the virtual machine's VCPUs, the method examines the scheduling state of all of the VM's VCPUs in the VMM scheduler, identifies the VCPUs currently in the active state, and maps the virtual interrupt to an active VCPU, so as to obtain a lower interrupt-processing latency. If several VCPUs are active at the same time, the method further considers the interrupt load of each active VCPU and maps the interrupt to the active VCPU with the lowest current load. This further ensures interrupt-processing load balance among the VCPUs, makes the VCPU loads under the SMP architecture more even, and thus promotes balanced overall performance of all VCPUs under the SMP architecture.
Description
Technical field
The present invention relates to the fields of computer system virtualization, virtual machine interrupt handling, and virtual machine scheduling, and in particular to a dynamic interrupt balanced mapping method based on the current VCPU scheduling state.
Background technology
Virtualization technology typically consolidates computing or storage functions that would otherwise require multiple physical machines into a single, more powerful physical server, thereby achieving consolidation and reallocation of hardware resources and improving hardware utilization; it plays a very important role in cloud computing and data center construction. Compared with real physical equipment, virtual machines offer many significant advantages. First, cloud computing and virtualization allow enterprise users to run their business on virtual machines, so users need not purchase a full set of real IT equipment, which effectively reduces the financial and management costs of maintaining IT infrastructure. Second, image-based virtual machine creation is very practical: with image backup techniques, problems such as server disaster recovery, rebuilding, and batch replication can be handled well. In addition, virtual machine management based on a virtual machine monitor (VMM) can smoothly handle the mapping between physical and virtual resources at both the software and hardware levels, and provides a series of necessary functions such as performance isolation, security isolation, and condition monitoring.
A virtual machine monitor is a software management layer that sits between the hardware and traditional operating systems. Its main role is to manage the real physical devices, such as physical CPUs and memory, and to abstract the underlying hardware into corresponding virtual device interfaces, so that multiple operating systems each obtain the virtual hardware they need and can therefore run simultaneously on the same physical machine.
Introducing the virtual machine monitor as an intermediate layer between the physical hardware and the guest operating systems inevitably affects guest performance to some degree; one important aspect is an increase in interrupt response latency. When multiple VCPUs share one physical CPU, the physical CPU must be time-multiplexed, and the time a VCPU waits in the VMM's scheduling queue is reflected in that VCPU's response time to events, which can seriously increase the response latency of events such as interrupts.
Current academic research on virtual interrupt latency focuses mainly on single-core virtual machines. In a single-core virtual machine, since each VM holds only one VCPU, interrupt latency is usually reduced by means such as shortening the time slice of physical CPU time-multiplexing, raising the switching frequency between VCPUs, or adding preemption opportunities. These methods can reduce scheduling latency to some extent, but they also unavoidably increase the frequency of VCPU context switches, introducing extra overhead and thus wasting CPU computing capacity. For virtual machines with a symmetric multi-processor (SMP) architecture, current academic research mostly focuses on the VCPU co-scheduling problem, i.e., synchronization and communication among the multiple VCPUs of a virtual machine; research on the latency of events such as interrupts is still relatively scarce.
At present, example implementations of virtual machine monitors such as KVM and XEN mainly handle interrupts in virtual machines with virtual interrupt-processing devices. In XEN, virtual machines are divided into two types according to the virtualization mode: full virtualization and paravirtualization. In full virtualization, the guest operating system needs no modification, and XEN handles virtual interrupts through a virtual interrupt-processing platform; in paravirtualization, the guest kernel must be adapted to the host operating system, and XEN implements interrupt and event handling through the event channel mechanism. KVM makes no distinction between full and paravirtualization; in KVM, all virtual machine interrupts are handled by virtual interrupt-processing devices.
In the interrupt architecture of a virtual machine, as on a physical platform, each VCPU has a corresponding virtual local APIC (Advanced Programmable Interrupt Controller) for receiving interrupts. The virtual platform also contains virtual devices such as a virtual I/O APIC and a virtual PIC for sending interrupts. Like the VCPUs, the virtual I/O APIC, virtual local APICs, and virtual PIC are software entities maintained by the VMM. Typically, the virtual devices of an SMP virtual machine, such as virtual network adapters, call the virtual I/O APIC to send interrupt requests; the virtual I/O APIC selects a VCPU as the interrupt's recipient according to the mapping between interrupts and VCPUs, and delivers the interrupt request to the virtual local APIC of that target VCPU. The virtual local APIC then completes the injection of the virtual interrupt using the VT-x event injection mechanism.
It should be pointed out that in a multi-VCPU virtual machine, the VCPUs time-multiplex the physical CPUs under the control of the scheduler, so at any moment some VCPUs are active while others are queued waiting. In existing virtual machine interrupt-handling techniques, when the virtual I/O APIC needs to map an interrupt request to a VCPU, it does not consider the current scheduling state of the VCPUs and assigns a VCPU to the interrupt blindly. When a virtual interrupt is assigned to a VCPU that is waiting to be scheduled, the VCPU's waiting time in the scheduling queue becomes part of the interrupt's response latency, which greatly increases the response lag of the interrupt request and reduces its processing speed.
Summary of the invention
In view of the above drawbacks of the prior art, the technical problem to be solved by the present invention is to provide a dynamic interrupt balanced mapping method based on the current VCPU scheduling state and interrupt-processing load analysis. While mapping an interrupt, it considers both the scheduling state, in the VMM scheduler, of the VCPU that is the mapping target, and the interrupt load balance among the VCPUs, thereby significantly reducing interrupt-processing latency.
To achieve the above object, the invention provides a dynamic interrupt balanced mapping method based on the current VCPU scheduling state and interrupt-processing load analysis, characterized in that: after the virtual I/O APIC of an SMP virtual machine receives a virtual interrupt and needs to map it to one of the virtual machine's VCPUs, the method examines the scheduling state of all of the VM's VCPUs in the VMM scheduler, identifies the VCPUs currently in the active state, and maps the virtual interrupt to an active VCPU to obtain a lower interrupt-processing latency. If several VCPUs are active at the same time, the method further considers the interrupt load of each active VCPU and maps the interrupt to the active VCPU with the lowest current load, so as to achieve VCPU interrupt load balance.
The present invention provides a dynamic interrupt balanced mapping method based on the current VCPU scheduling state, characterized in that the method comprises the following steps:
(1) whenever virtual hardware produces a virtual interrupt, the virtual I/O APIC receives the virtual interrupt and prepares to send it to the virtual local APIC of a target VCPU;
(2) an interrupt balance distributor intercepts the virtual interrupt before it is sent to the virtual local APIC of the target VCPU;
(3) the interrupt balance distributor checks the scheduling-state information provided by the scheduler and obtains the list of VCPUs in the active state;
(4) the interrupt balance distributor re-selects the target VCPU according to the list of active VCPUs;
(5) the interrupt balance distributor sends the virtual interrupt to the virtual local APIC of the target VCPU re-selected in step (4);
(6) the virtual local APIC of the target VCPU injects the virtual interrupt into the target VCPU.
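As an illustration of step (3), the active-VCPU list can be derived from a per-VM scheduling bitmap. The following sketch is a minimal illustration in C; the structure and function names (`vm_sched_view`, `active_vcpu_list`) and the bitmap layout are assumptions for illustration, not actual VMM symbols.

```c
#include <stdint.h>

#define MAX_VCPUS 8

/* Hypothetical per-VM view of scheduler state (illustrative, not an
 * actual XEN/KVM structure): bit i of active_bitmap is set while
 * VCPU i holds a physical CPU. */
struct vm_sched_view {
    uint64_t active_bitmap;
    int nr_vcpus;
};

/* Step (3): derive the list of active VCPU ids from the bitmap.
 * Fills `list` and returns the number of active VCPUs. */
static int active_vcpu_list(const struct vm_sched_view *vm,
                            int list[MAX_VCPUS])
{
    int n = 0;
    for (int i = 0; i < vm->nr_vcpus && i < MAX_VCPUS; i++)
        if (vm->active_bitmap & (1ULL << i))
            list[n++] = i;
    return n;
}
```

The length of the returned list is exactly the count examined in step (41) below: 0, 1, or more than 1 active VCPU.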
Further, in step (4), the method of selecting the target VCPU comprises the steps:
(41) obtaining the number of VCPUs in the list of active VCPUs;
(42) if the number is 0, selecting the target VCPU according to the scheduling-state information; if the number is 1, selecting the single VCPU in the active list as the target VCPU; if the number is greater than 1, selecting the target VCPU according to the interrupt loads of the VCPUs in the active list.
When there is only one VCPU in the active list, that VCPU is selected as the target VCPU, and the virtual interrupt is mapped to this active VCPU to obtain a lower interrupt-processing latency.
Further, in step (42), the method of selecting the target VCPU according to the scheduling-state information comprises the steps:
(421) reading the scheduling-state information;
(422) among all VCPUs, selecting the VCPU predicted to enter the active state soonest as the target VCPU.
When the list of active VCPUs is empty, i.e., no VCPU is in the active state, the VCPU predicted to enter the active state soonest is selected as the target VCPU, and the virtual interrupt is mapped to this VCPU to obtain a lower interrupt-processing latency.
Further, in step (422), one criterion for predicting the VCPU that will enter the active state soonest is whether the VCPU is in the idle state.
A VCPU in the idle state is likely to enter the active state soonest, because the CREDIT scheduler is designed to allow a VCPU in the IDLE state to preempt when it needs to respond to an event. A VCPU in the idle state is therefore selected as the target VCPU, and the virtual interrupt is mapped to it to obtain a lower interrupt-processing latency.
Further, if none of the VCPUs is in the idle state, the criteria in step (422) for predicting the VCPU that will enter the active state soonest are its position in the waiting queue and its remaining credit value.
For all VCPUs, their positions in their respective waiting queues are considered first: the closer a VCPU is to the head of the queue, the sooner it will enter the active state. Then, when several VCPUs have the same ranking, their remaining credit values are further compared, and the one with the larger remaining credit is selected as the target VCPU, to obtain a lower interrupt-processing latency.
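The prediction order just described, idle state first, then queue position, then remaining credit, can be sketched as a comparison function. All names below (`wait_info`, `sooner`, `predict_target`) are illustrative assumptions, not actual CREDIT scheduler code.

```c
/* Hypothetical snapshot of a waiting VCPU used for prediction:
 * queue_pos 0 is the head of its scheduler run queue, credit is the
 * remaining CREDIT value, is_idle marks the IDLE state, which the
 * text says may preempt when an event arrives. */
struct wait_info {
    int queue_pos;
    int credit;
    int is_idle;
};

/* Nonzero if VCPU a is predicted to become active sooner than b:
 * idle first, then closer to the queue head, then larger credit. */
static int sooner(const struct wait_info *a, const struct wait_info *b)
{
    if (a->is_idle != b->is_idle)
        return a->is_idle;
    if (a->queue_pos != b->queue_pos)
        return a->queue_pos < b->queue_pos;
    return a->credit > b->credit;
}

/* Pick the waiting VCPU predicted to enter the active state soonest. */
static int predict_target(const struct wait_info w[], int n)
{
    int best = 0;
    for (int i = 1; i < n; i++)
        if (sooner(&w[i], &w[best]))
            best = i;
    return best;
}
```

The three-level tie-break mirrors the order of the criteria in the text; any VCPU in the IDLE state wins outright because it can preempt immediately.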
Further, in step (42), the method of selecting the target VCPU according to the interrupt loads of the VCPUs in the active list comprises the steps:
(423) reading the VCPU interrupt-load table maintained by the current virtual machine structure;
(424) comparing the interrupt loads of the VCPUs in the active list and selecting the VCPU with the lowest interrupt load as the target VCPU.
When there are several VCPUs in the active list, the virtual interrupt is mapped to the active VCPU with the lowest current interrupt load, achieving VCPU interrupt load balance while obtaining a lower interrupt-processing latency.
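Steps (423) and (424) amount to a linear scan of the interrupt-load table over the active VCPUs. A minimal sketch, assuming a hypothetical array-based load table rather than the actual per-VM structure:

```c
#include <stdint.h>

/* Steps (423)-(424): among the active VCPUs, pick the one with the
 * lowest current interrupt load. `load` stands in for the VCPU
 * interrupt-load table maintained per virtual machine; the flat-array
 * layout is an assumption for illustration. */
static int min_load_vcpu(const int active[], int n, const uint32_t load[])
{
    int best = active[0];                 /* requires n >= 1 */
    for (int i = 1; i < n; i++)
        if (load[active[i]] < load[best]) /* ties keep the earlier VCPU */
            best = active[i];
    return best;
}
```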
Further, in step (1), the virtual hardware includes virtual devices and/or physical devices whose interrupts are processed by the VMM.
Further, in step (6), the virtual interrupt injection is completed through the VT-x event injection mechanism.
Further, a member variable sched_info is added to the virtual machine SHARED_INFO structure to record the scheduling state of the VCPUs.
Further, whenever the scheduler performs a VCPU switch, after the switched-in VCPU and the switched-out VCPU complete the context swap, the switched-in VCPU becomes an active VCPU, and it is marked active in the sched_info member of the SHARED_INFO structure of the virtual machine that owns it; the switched-out VCPU is marked waiting in the sched_info member of the SHARED_INFO structure of the virtual machine that owns it.
Compared with the prior art, the dynamic interrupt mapping method based on the current VCPU active state provided by the present invention has the following beneficial technical effects:
(1) According to the current scheduling state of the VCPUs, a virtual interrupt is mapped to an active VCPU, which effectively reduces the interrupt-request processing latency caused by scheduling delay, ensures that the virtual interrupt is injected into the target VCPU in time, obtains a lower interrupt-processing latency, and improves interrupt response speed.
(2) When several VCPUs are in the active state, the VCPU with the lowest interrupt load is selected as the target VCPU, which further ensures the interrupt-processing load balance among the VCPUs, makes the VCPU loads under the SMP architecture more even, and thus promotes balanced overall performance of all VCPUs under the SMP architecture.
(3) Since the solution of the present invention does not require shortening the scheduling time slice, increasing the VCPU context-switching frequency, or introducing a new preemption mechanism among VCPUs, it effectively reduces interrupt response latency without wasting extra computing capacity; this benefits the operation of compute-intensive workloads and guarantees a relatively high CPU utilization.
(4) Because interrupts play a very important role in an operating system, improving interrupt-processing speed also improves the overall responsiveness of the virtual machine to a certain degree.
Brief description of the drawings
Fig. 1 is a schematic diagram of the interrupt-handling structure on a physical platform;
Fig. 2 is a schematic diagram of the virtual interrupt processor in existing SMP virtual machine technology;
Fig. 3 is a schematic diagram of the virtual interrupt processor of an embodiment of the present invention;
Fig. 4 is a schematic diagram of the virtual-interrupt mapping establishment process of the virtual interrupt processor shown in Fig. 3.
Detailed description of the invention
The design, concrete structure, and technical effects of the present invention are further described below with reference to the accompanying drawings, so that its purpose, features, and effects can be fully understood.
Fig. 1 is a schematic diagram of the interrupt-handling structure on a physical platform. On a physical platform, an I/O device first sends an interrupt request through an interrupt controller (I/O APIC or PIC); the interrupt request is delivered over the PCI bus to the system bus, where the local APIC of the target CPU receives the interrupt, and the target CPU starts processing it.
Fig. 2 is a schematic diagram of the virtual interrupt processor in existing SMP virtual machine technology. In a virtual machine environment, the VMM must also present to the guest operating system a virtual interrupt architecture similar to the physical one. As on a real physical platform, under a virtual SMP architecture each VCPU likewise has a corresponding virtual local APIC for receiving interrupts, while the VMM provides a virtual I/O APIC for sending them. When a virtual device, such as a virtual network adapter, needs to send an interrupt, it first calls the interface provided by the virtual I/O APIC. The virtual I/O APIC selects, according to the interrupt request it obtains, a VCPU as the handler of the interrupt and hands the interrupt to that VCPU's local APIC. Finally, the virtual local APIC uses the VT-x event injection mechanism to inject the interrupt into the corresponding target VCPU, which processes it. Like the VCPUs, the virtual local APICs, virtual I/O APIC, and virtual PIC involved in the virtual interrupt processor are software entities maintained by the VMM.
Fig. 3 is a schematic diagram of the virtual interrupt processor of an embodiment of the present invention. Compared with the existing virtual interrupt processor, the virtual interrupt processor of this embodiment adds an interrupt balance distributor, located between the original virtual I/O APIC and the virtual local APICs.
The interrupt balance distributor reads the basic scheduling information of the VCPUs from the VMM scheduler and analyzes the current scheduling state of the virtual machine's VCPUs, i.e., which VCPUs are active and which are waiting to be scheduled. When the virtual I/O APIC needs to send an interrupt request, the interrupt balance distributor intercepts it and, based on this knowledge, selects the target VCPU that will process the interrupt in place of the virtual I/O APIC; it then sends the interrupt to the virtual local APIC of the target VCPU and initiates the interrupt injection.
Fig. 4 is a schematic diagram of the virtual-interrupt mapping establishment process of the virtual interrupt processor shown in Fig. 3, illustrating the working procedure of the interrupt balance distributor in detail. When the interrupt balance distributor needs to process an interrupt request, it first analyzes, based on the scheduling information obtained from the scheduler, the scheduling and interrupt-load situation of each VCPU of the current virtual machine.
If there is currently only one active VCPU, i.e., among the VCPUs of this virtual machine only one currently holds actual execution rights and runs on a physical CPU, that VCPU is recorded as the target VCPU.
If no VCPU is currently in the active state, i.e., all VCPUs of the virtual machine are in the scheduler's waiting queues, the interrupt balance distributor estimates how likely each VCPU is to enter the active state and selects the VCPU that can enter the active state soonest as the target VCPU. The estimate is based on information provided by the scheduler, such as whether a VCPU is in the IDLE state and, under the CREDIT scheduler, each VCPU's remaining credit value. In general, under the CREDIT scheduler a VCPU in the IDLE state is the most suitable choice, because the CREDIT scheduler is designed to allow a VCPU in the IDLE state to preempt when it needs to respond to an event. For the remaining VCPUs, their positions in their respective waiting queues are considered first: the closer a VCPU is to the head of the queue, the sooner it enters the active state. Then, when several VCPUs have the same ranking, their remaining credit values are further compared, and the one with the larger remaining credit is selected as the target VCPU, to obtain a lower interrupt-processing latency.
If several VCPUs are currently active, the interrupt balance distributor reads in turn the interrupt-load situation of each active VCPU and chooses the least-loaded VCPU as the target VCPU. After the target VCPU is determined, the interrupt balance distributor continues the interrupt-handling flow and initiates the interrupt injection in place of the virtual I/O APIC.
The specific steps of the dynamic interrupt mapping method of the present invention, which performs balanced interrupt distribution based on the current VCPU active state and VCPU interrupt-processing load analysis, are as follows:
Step 1: whenever the VCPU scheduler of the VMM performs an effective VCPU switch, after the switched-in VCPU and the switched-out VCPU complete the context swap, the switched-in VCPU becomes an active VCPU, i.e., it really runs on a physical CPU. The virtual machine structure owning this VCPU then marks, according to the VCPU's ID, the state of the switched-in VCPU as active and, at the same time, the state of the switched-out VCPU as waiting.
Step 2: whenever a virtual device produces a virtual interrupt that needs processing, the virtual interrupt is intercepted before the virtual I/O APIC maps it to a VCPU, and the VCPU state table of the current virtual machine is checked to obtain the list of VCPUs in the active state. If there is only one VCPU in the active list, it is chosen as the target VCPU; go to step 5. If no VCPU is currently active, go to step 3. If more than one VCPU is active, go to step 4.
Step 3: no VCPU is currently active, so the VCPU that will enter the active state earliest is estimated from the per-VCPU scheduling information provided by the scheduler and chosen as the target VCPU; go to step 5.
Step 4: when several VCPUs are active, the VCPU interrupt-load table maintained by the current virtual machine structure is further read, the current interrupt loads of the active VCPUs are compared, and the least-loaded VCPU is chosen as the target VCPU.
Step 5: the interrupt-load state of the target VCPU is updated, the current virtual interrupt is mapped to the dynamically chosen target VCPU, and finally the interrupt injection proceeds.
Specifically, the implementation of the interrupt mapping scheme of the present invention is divided into two parts. The steps of the first part are as follows:
Step 1: a new member variable sched_info is added to the SHARED_INFO structure of the original virtual machine to record the scheduling state of the VCPUs. The main code of this step is as follows:
struct shared_info {
    …
    uint64_t sched_info;   /* VCPU scheduling-state record */
};
Step 2: whenever the VCPU scheduler of the VMM performs an effective VCPU switch, after the switched-in VCPU and the switched-out VCPU complete the context swap, the switched-in VCPU becomes an active VCPU, i.e., it really runs on a physical CPU. The virtual machine structure owning this VCPU then marks, according to the VCPU's ID, the state of the switched-in VCPU as active and, at the same time, the state of the switched-out VCPU as waiting. The main code of this step is as follows:
static void schedule(void)
{
    …
    if (check_pre_ok(prev, next))
    {
        /* clear the switched-out VCPU's bit in sched_info: mark it waiting */
        prev->domain->shared_info->native.sched_info &=
            ~(1ULL << (prev->vcpu_id - 1));
        /* set the switched-in VCPU's bit in sched_info: mark it active */
        next->domain->shared_info->native.sched_info |=
            (1ULL << (next->vcpu_id - 1));
    }
    context_switch(prev, next);
}
The second part of the implementation modifies the existing interrupt mapping scheme by which the virtual I/O APIC maps virtual interrupts. The specific steps are as follows:
Step 1: whenever the VMM produces a virtual interrupt, the virtual interrupt is intercepted before the virtual I/O APIC maps it to a VCPU, and the VCPU state table of the current virtual machine is checked to obtain the list of VCPUs in the active state. If there is only one VCPU in the current active list, it is chosen as the target VCPU; go to step 4. If no VCPU is currently active, go to step 2. If more than one VCPU is active, go to step 3.
Step 2: no VCPU is currently active, so the VCPU that will enter the active state earliest is estimated from the per-VCPU scheduling information provided by the scheduler and chosen as the target VCPU; go to step 4.
Step 3: when several VCPUs are active, the VCPU interrupt-load table maintained by the current virtual machine structure is further read, the current interrupt loads of the active VCPUs are compared, and the least-loaded VCPU is chosen as the target VCPU; go to step 4.
Step 4: the interrupt-load state of the target VCPU is updated, the current virtual interrupt is mapped to the dynamically chosen target VCPU, and the interrupt injection continues.
The core code of this part implements the target-selection logic of steps 1 to 4 above.
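The published text does not reproduce that core code. The following is a self-contained sketch of the selection logic of steps 1 to 4, under assumed data layouts: an active bitmap and a flat interrupt-load array, neither of which is the actual XEN structure, and a precomputed `soonest` id standing in for the prediction of step 2.

```c
#include <stdint.h>

#define MAX_VCPUS 8

/* All names and layouts here are illustrative assumptions, not actual
 * XEN symbols. Bit i of active_bitmap = VCPU i holds a physical CPU;
 * load[i] = current interrupt load of VCPU i; soonest = the id the
 * prediction of step 2 would return when no VCPU is active. */
static int select_target_vcpu(uint64_t active_bitmap, int nr_vcpus,
                              const uint32_t load[], int soonest)
{
    int list[MAX_VCPUS], n = 0;

    for (int i = 0; i < nr_vcpus && i < MAX_VCPUS; i++)  /* step 1: active list */
        if (active_bitmap & (1ULL << i))
            list[n++] = i;

    if (n == 0)                     /* step 2: none active, use prediction */
        return soonest;
    if (n == 1)                     /* the unique active VCPU */
        return list[0];

    int best = list[0];             /* step 3: least-loaded active VCPU */
    for (int i = 1; i < n; i++)
        if (load[list[i]] < load[best])
            best = list[i];
    return best;
}
```

A caller would then update the chosen VCPU's entry in the load table and hand the interrupt to that VCPU's virtual local APIC (step 4).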
According to the current VCPU scheduling state, the dynamic interrupt mapping method based on the current VCPU active state provided by the present invention maps virtual interrupts to active VCPUs, which effectively reduces the interrupt-request processing latency caused by scheduling delay, ensures that virtual interrupts are injected into the target VCPU in time, obtains a lower interrupt-processing latency, and improves interrupt response speed. When several VCPUs are active, the VCPU with the lowest interrupt load is selected as the target VCPU, which further ensures the interrupt-processing load balance among the VCPUs, makes the VCPU loads under the SMP architecture more even, and thus promotes balanced overall performance of all VCPUs under the SMP architecture.
Since the solution of the present invention does not require shortening the scheduling time slice, increasing the VCPU context-switching frequency, or introducing a new preemption mechanism among VCPUs, it effectively reduces interrupt response latency without wasting extra computing capacity; this benefits the operation of compute-intensive workloads and guarantees a relatively high CPU utilization. Because interrupts play a very important role in an operating system, improving interrupt-processing speed also improves the overall responsiveness of the virtual machine to a certain degree.
The preferred embodiments of the present invention have been described in detail above. It should be understood that those of ordinary skill in the art can make many modifications and variations according to the concept of the present invention without creative effort. Therefore, any technical solution that a person skilled in the art can obtain on the basis of the prior art through logical analysis, reasoning, or limited experimentation shall fall within the scope of protection defined by the claims.
Claims (10)
1. A dynamic interrupt balanced mapping method based on the current VCPU scheduling state, characterized in that the method comprises the following steps:
(1) whenever virtual hardware generates a virtual interrupt, a virtual I/O APIC receives the virtual interrupt and sends it toward the virtual local APIC of a target VCPU;
(2) an interrupt balancing distributor intercepts the virtual interrupt before it is delivered to the virtual local APIC of the target VCPU;
(3) the interrupt balancing distributor analyzes the scheduling state information provided by the scheduler to obtain a list of VCPUs in the active state;
(4) the interrupt balancing distributor reselects the target VCPU according to the list of VCPUs in the active state;
(5) the interrupt balancing distributor sends the virtual interrupt to the virtual local APIC of the target VCPU reselected in step (4);
(6) the virtual local APIC of the target VCPU injects the virtual interrupt into the target VCPU.
2. the method for claim 1, it is characterised in that the method bag of selected target VCPU in step (4)
Include step:
(41) it is in the number of VCPU in the VCPU list of active state described in acquisition;
(42) if described number is 0, according to described schedule status information, selected described target VCPU;If institute
Stating number is 1, selected described in be in VCPU in the VCPU list of active state be described target VCPU;If it is described
Number is more than 1, according to the interrupt load of VCPU in the described VCPU list being in active state, and selected described target
VCPU。
3. The method of claim 2, characterized in that the method of selecting the target VCPU according to the scheduling state information in step (42) comprises the steps of:
(421) reading the scheduling state information;
(422) predicting, among all VCPUs, the VCPU that will enter the active state soonest and selecting it as the target VCPU.
4. The method of claim 3, characterized in that in step (422), the criterion for predicting the VCPU that will enter the active state soonest is that the VCPU is in the idle state.
5. The method of claim 3, characterized in that, if none of the VCPUs is in the idle state, the criteria for predicting in step (422) the VCPU that will enter the active state soonest are the VCPU's position in the wait queue and its remaining credit value.
6. The method of claim 2, characterized in that the method of selecting the target VCPU according to the interrupt load of the VCPUs in the list of VCPUs in the active state in step (42) comprises the steps of:
(423) reading the VCPU interrupt load table maintained in the current virtual machine structure;
(424) comparing the interrupt loads of the VCPUs in the list of VCPUs in the active state, and selecting the VCPU with the lowest interrupt load as the target VCPU.
7. the method for claim 1, it is characterised in that described in step (1), virtual hardware includes for void
Propose standby and/or through the physical equipment of VMM interrupt processing.
8. the method for claim 1, it is characterised in that by the event implanter of VT-x in step (6)
Make described virtual interrupt to inject.
9. the method for claim 1, it is characterised in that on the basis of virtual machine SHARED_INFO structure
Increase member variable sched_info, for recording the dispatch state of VCPU.
10. The method of claim 9, characterized in that whenever the scheduler performs a VCPU switch, after the VCPU being switched in and the VCPU being switched out complete the context swap, the switched-in VCPU becomes the active VCPU, and the sched_info member variable of the virtual machine SHARED_INFO structure to which the active VCPU belongs is marked as active; the sched_info member variable of the virtual machine SHARED_INFO structure to which the switched-out VCPU belongs is marked as waiting.
Priority Applications (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201410018108.3A CN103744716B (en) | 2014-01-15 | 2014-01-15 | Dynamic interrupt balanced mapping method based on current VCPU scheduling state |
PCT/CN2014/075253 WO2015106497A1 (en) | 2014-01-15 | 2014-04-14 | Dynamic interrupt balanced mapping method based on current vcpu scheduling state |
US14/412,188 US9697041B2 (en) | 2014-01-15 | 2014-04-14 | Method for dynamic interrupt balanced mapping based on current scheduling states of VCPUs |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201410018108.3A CN103744716B (en) | 2014-01-15 | 2014-01-15 | Dynamic interrupt balanced mapping method based on current VCPU scheduling state |
Publications (2)
Publication Number | Publication Date |
---|---|
CN103744716A CN103744716A (en) | 2014-04-23 |
CN103744716B true CN103744716B (en) | 2016-09-07 |
Family
ID=50501736
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201410018108.3A Active CN103744716B (en) | 2014-01-15 | 2014-01-15 | Dynamic interrupt balanced mapping method based on current VCPU scheduling state |
Country Status (3)
Country | Link |
---|---|
US (1) | US9697041B2 (en) |
CN (1) | CN103744716B (en) |
WO (1) | WO2015106497A1 (en) |
Families Citing this family (13)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10095295B2 (en) * | 2011-12-14 | 2018-10-09 | Advanced Micro Devices, Inc. | Method and apparatus for power management of a graphics processing core in a virtual environment |
WO2017070861A1 (en) * | 2015-10-28 | 2017-05-04 | 华为技术有限公司 | Interrupt response method, apparatus and base station |
CN112347013A (en) * | 2016-04-27 | 2021-02-09 | 华为技术有限公司 | Interrupt processing method and related device |
CN106095578B (en) * | 2016-06-14 | 2019-04-09 | 上海交通大学 | Direct interrupt delivery method based on hardware-assisted technology and virtual CPU running state |
CN108255572A (en) * | 2016-12-29 | 2018-07-06 | 华为技术有限公司 | VCPU switching method and physical host |
US10241944B2 (en) | 2017-02-28 | 2019-03-26 | Vmware, Inc. | Packet processing efficiency based interrupt rate determination |
CN109144679B (en) * | 2017-06-27 | 2022-03-29 | 华为技术有限公司 | Interrupt request processing method and device and virtualization equipment |
CN108123850B (en) * | 2017-12-25 | 2020-04-24 | 上海交通大学 | Comprehensive scheduling method and device for preemption problem of interrupt holders |
US11650851B2 (en) * | 2019-04-01 | 2023-05-16 | Intel Corporation | Edge server CPU with dynamic deterministic scaling |
CN111124608B (en) * | 2019-12-17 | 2023-03-21 | 上海交通大学 | Accurate low-delay interrupt redirection method for multi-core virtual machine |
CN112817690B (en) * | 2021-01-22 | 2022-03-18 | 华东计算技术研究所(中国电子科技集团公司第三十二研究所) | Interrupt virtualization processing method and system for ARM architecture virtualization field |
CN114327814A (en) * | 2021-12-09 | 2022-04-12 | 阿里巴巴(中国)有限公司 | Task scheduling method, virtual machine, physical host and storage medium |
CN114579302A (en) * | 2022-02-23 | 2022-06-03 | 阿里巴巴(中国)有限公司 | Resource scheduling method and device |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101354663A (en) * | 2007-07-25 | 2009-01-28 | 联想(北京)有限公司 | Method and apparatus for scheduling true CPU resource applied to virtual machine system |
CN101382923A (en) * | 2007-09-06 | 2009-03-11 | 联想(北京)有限公司 | Virtual machine system and interrupt handling method for customer operating system of the virtual machine system |
US8312195B2 (en) * | 2010-02-18 | 2012-11-13 | Red Hat, Inc. | Managing interrupts using a preferred binding between a device generating interrupts and a CPU |
Family Cites Families (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7334086B2 (en) * | 2002-10-08 | 2008-02-19 | Rmi Corporation | Advanced processor with system on a chip interconnect technology |
US8261265B2 (en) * | 2007-10-30 | 2012-09-04 | Vmware, Inc. | Transparent VMM-assisted user-mode execution control transfer |
US9081621B2 (en) * | 2009-11-25 | 2015-07-14 | Microsoft Technology Licensing, Llc | Efficient input/output-aware multi-processor virtual machine scheduling |
US9294557B2 (en) * | 2013-04-19 | 2016-03-22 | International Business Machines Corporation | Hardware level generated interrupts indicating load balancing status for a node in a virtualized computing environment |
US9697031B2 (en) * | 2013-10-31 | 2017-07-04 | Huawei Technologies Co., Ltd. | Method for implementing inter-virtual processor interrupt by writing register data in a single write operation to a virtual register |
2014
- 2014-01-15: CN application CN201410018108.3A granted as CN103744716B (Active)
- 2014-04-14: US application US14/412,188 granted as US9697041B2 (Active)
- 2014-04-14: WO application PCT/CN2014/075253 published as WO2015106497A1 (Application Filing)
Also Published As
Publication number | Publication date |
---|---|
US9697041B2 (en) | 2017-07-04 |
CN103744716A (en) | 2014-04-23 |
US20160259664A1 (en) | 2016-09-08 |
WO2015106497A1 (en) | 2015-07-23 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN103744716B (en) | Dynamic interrupt balanced mapping method based on current VCPU scheduling state | |
Phillips et al. | Adapting a message-driven parallel application to GPU-accelerated clusters | |
CN102262557B (en) | Method for constructing virtual machine monitor by bus architecture and performance service framework | |
Suzuki et al. | GPUvm: Why Not Virtualizing GPUs at the Hypervisor? | |
CN102147749B (en) | Mechanism to emulate user-level multithreading on an OS-sequestered sequencer | |
Xu et al. | Adaptive task scheduling strategy based on dynamic workload adjustment for heterogeneous Hadoop clusters | |
CN101183315A (en) | Paralleling multi-processor virtual machine system | |
US20230127141A1 (en) | Microservice scheduling | |
CN101788920A (en) | CPU virtualization method based on processor partitioning technology | |
US20120291027A1 (en) | Apparatus and method for managing hypercalls in a hypervisor and the hypervisor thereof | |
TW201217954A (en) | Power management in a multi-processor computer system | |
CN106844007A (en) | A kind of virtual method and system based on spatial reuse | |
Alvarruiz et al. | An energy manager for high performance computer clusters | |
Lv et al. | Virtualization challenges: a view from server consolidation perspective | |
Hong et al. | FairGV: fair and fast GPU virtualization | |
CN106250217A (en) | Synchronous dispatching method between a kind of many virtual processors and dispatching patcher thereof | |
US11886898B2 (en) | GPU-remoting latency aware virtual machine migration | |
WO2017160427A1 (en) | Wireless component state based power management | |
Chang et al. | On construction and performance evaluation of a virtual desktop infrastructure with GPU accelerated | |
Zhao et al. | Efficient sharing and fine-grained scheduling of virtualized GPU resources | |
CN105677481B (en) | A kind of data processing method, system and electronic equipment | |
López García et al. | Resource provisioning in Science Clouds: Requirements and challenges | |
US9575788B2 (en) | Hypervisor handling of processor hotplug requests | |
Wan | A network virtualization approach in many-core processor based cloud computing environment | |
Qouneh et al. | Optimization of resource allocation and energy efficiency in heterogeneous cloud data centers |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
C14 | Grant of patent or utility model | ||
GR01 | Patent grant |