CN100390740C - Method and system for allocating entitled processor cycles for preempted virtual processors - Google Patents


Info

Publication number
CN100390740C
CN100390740C (application CNB2006100582239A / CN200610058223A)
Authority
CN
China
Prior art keywords
preemption credit
partition
virtual processor
entitled capacity
preemption
Prior art date
Application number
CNB2006100582239A
Other languages
Chinese (zh)
Other versions
CN1841331A (en)
Inventor
威廉姆·约瑟夫·阿姆斯特朗 (William Joseph Armstrong)
内尔施·内阿 (Naresh Nayar)
Original Assignee
国际商业机器公司 (International Business Machines Corporation)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority to US 11/094,712 (granted as US7613897B2)
Application filed by International Business Machines Corporation
Publication of CN1841331A
Application granted
Publication of CN100390740C

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING; COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00 Arrangements for program control, e.g. control units
    • G06F9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46 Multiprogramming arrangements
    • G06F9/50 Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5061 Partitioning or combining of resources
    • G06F9/5077 Logical partitioning of resources; Management or configuration of virtualized resources
    • G PHYSICS
    • G06 COMPUTING; CALCULATING; COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00 Arrangements for program control, e.g. control units
    • G06F9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46 Multiprogramming arrangements
    • G06F9/48 Program initiating; Program switching, e.g. by interrupt
    • G06F9/4806 Task transfer initiation or dispatching
    • G06F9/4843 Task transfer initiation or dispatching by program, e.g. task dispatcher, supervisor, operating system
    • G06F9/4881 Scheduling strategies for dispatcher, e.g. round robin, multi-level priority queues

Abstract

A method, apparatus, system, and signal-bearing medium that, in an embodiment, calculate a preemption credit for a partition if a virtual processor is preempted and the partition is unable to receive an entitled capacity of physical processor cycles during a dispatch window. The preemption credit is the portion of the entitled capacity that the partition is unable to receive. As long as the partition has a remaining preemption credit, in subsequent dispatch windows, a portion of the preemption credit is allocated to the virtual processor, and the preemption credit is reduced. In this way, in an embodiment, shared processor partitions may be ensured of receiving their entitled allocation of processor cycles.

Description

Method and system for allocating entitled processor cycles to preempted virtual processors

Technical field

Embodiments of the present invention relate generally to computers. More particularly, embodiments of the present invention relate to dispatching virtual processors in the shared processor partitions of a computer system.

Background art

The development of the EDVAC computer system of 1948 is often cited as the beginning of the computer era. Since that time, computer systems have evolved into extremely sophisticated devices, and they may be found in many different settings. Computer systems typically include a combination of hardware, such as semiconductors and circuit boards, and software, also known as computer programs. As advances in semiconductor processing and computer architecture push the performance of computer hardware higher, more sophisticated computer software has evolved to take advantage of the higher performance of the hardware, resulting in computer systems today that are much more powerful than just a few years ago. One significant advance in computer technology is the development of parallel processing, i.e., the execution of multiple tasks in parallel.

A number of computer software and hardware technologies have been developed to facilitate increased parallel processing. From a hardware standpoint, computers increasingly rely on multiple microprocessors to provide increased workload capacity. Furthermore, some microprocessors have been developed that support the ability to execute multiple threads in parallel, effectively providing many of the same performance gains attainable through the use of multiple microprocessors. From a software standpoint, multithreaded operating systems and kernels have been developed, which permit computer programs to execute concurrently in multiple threads, so that multiple tasks can essentially be performed at the same time.

In addition, some computers implement the concept of logical partitioning, in which a single physical computer is permitted to operate essentially as multiple independent virtual computers, referred to as logical partitions, with the various resources in the physical computer (e.g., processors, memory, and input/output devices) allocated among the logical partitions. Each logical partition executes a separate operating system and, from the perspective of users and of the software applications executing on the logical partition, operates as a fully independent computer. Each of the multiple operating systems runs in a separate partition, and the partitions operate under the control of a partition manager or hypervisor.

Not only may whole individual resources, such as processors, be allocated to partitions, but portions of resources may be allocated as well. Thus, the concept of a shared processor partition was developed. A shared processor partition is a partition that shares the physical processors in a shared processor pool with other shared processor partitions. One of the configuration parameters of a shared processor partition is the partition's entitled capacity, which defines the partition's share of the physical processors over a period of time. The hypervisor needs to ensure that the entitled capacity of the shared processor partitions does not exceed the capacity of the shared processor pool, which is the group of processors used to run the shared processor partitions. The hypervisor must also ensure that each partition receives its entitled capacity of physical processor cycles over a period of time, so that each partition receives its fair share of the resources and no partition suffers in performance from a lack of resources. This period of time is called the hypervisor dispatch window. Each partition is allocated one or more so-called "virtual processors," each of which represents some number of CPU (central processing unit) cycles of the processors in the shared processor pool (which may vary over time).
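The pool-capacity constraint described above can be sketched as a simple admission check (a minimal illustration under our own naming, not code from the patent; capacities are kept as integer hundredths of a processor to avoid floating-point rounding):

```python
# Sketch (illustrative names, not from the patent): the hypervisor must
# ensure that the sum of the entitled capacities of all shared processor
# partitions does not exceed the capacity of the shared processor pool.
# An entitlement of 80 means 0.80 of one physical processor.

def can_admit(entitlements, new_entitlement, pool_processors):
    """True if a new partition's entitled capacity still fits in the pool."""
    return sum(entitlements) + new_entitlement <= pool_processors * 100

# Example: a 4-processor pool hosting four partitions entitled to 0.8 each
# can still admit a fifth 0.8 partition (400 <= 400), but not a 1.0 one.
print(can_admit([80, 80, 80, 80], 80, 4))   # True
print(can_admit([80, 80, 80, 80], 100, 4))  # False
```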

From a performance standpoint, as long as a virtual processor has work to process, the partition's virtual processor should preferably receive its allocated cycles in as few dispatches as possible. The best possible case is for the virtual processor to receive all of its cycles in a single dispatch of the dispatch window. Fewer dispatches have the performance advantage of less switching overhead, which includes saving and restoring the state of the virtual processor. Fewer dispatches also allow efficient use of the processor caches.

Under some configurations, the performance goal of fewer dispatches conflicts with the functional goal of guaranteeing the virtual processors their entitled cycles; hence, if the hypervisor attempts to provide the entire entitled capacity in a single dispatch of the dispatch window for each partition, some virtual processors may not receive their entire entitled capacity.

To illustrate this point, consider a configuration with four physical processors (P0, P1, P2, and P3) and five virtual processors (V0, V1, V2, V3, and V4), as shown in FIG. 2A, where each of five partitions is allocated one virtual processor. In this example, each dispatch window (dispatch window 0, dispatch window 1, dispatch window 2, dispatch window 3, dispatch window 4, dispatch window 5, dispatch window 6, and dispatch window 7) represents 10 msec (milliseconds), so that each time slot in the table 200 represents the allocation of 2 msec of the CPU cycles of a given physical processor to a particular virtual processor. Further, in this example, each of the five virtual processors (V0, V1, V2, V3, and V4) has an entitled capacity of 0.8 of a physical processor, which means the entitled capacity over the 8 dispatch windows is 8 * (10 msec) * 0.8 = 64 msec. The empty slots in the table 200 represent time during which the associated physical processor is idle, which occurs because each virtual processor can only make use of one physical processor at a time. The pattern in which the hypervisor assigns the virtual processors (V0, V1, V2, V3, and V4) to the time slots in the table 200 represents an attempt to provide the entire entitled capacity in a single dispatch of the dispatch window for each virtual processor.
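The entitlement arithmetic in this example can be checked with a short sketch (variable names are ours, introduced only for illustration):

```python
# The example's entitlement arithmetic: 8 dispatch windows of 10 msec each,
# with each virtual processor entitled to 0.8 of a physical processor.
windows = 8
window_msec = 10
entitlement = 0.8  # fraction of one physical processor

entitled_msec = windows * window_msec * entitlement
print(entitled_msec)  # 64.0 msec of physical CPU cycles over the 8 windows
```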

The result shown in the example of FIG. 2A is that the virtual processors V0, V1, and V2 receive at least their 64 msec entitled capacity (V0 and V1 receive 64 msec of physical CPU cycles, and virtual processor V2 receives 68 msec of physical CPU cycles). Unfortunately, virtual processor V3 receives only 38 msec of physical CPU cycles, and virtual processor V4 receives only 40 msec of physical CPU cycles. Hence, because the hypervisor attempts to provide the entire entitled capacity in a single dispatch of each dispatch window, the virtual processors V3 and V4 do not receive their entitled capacity of 64 msec of physical CPU cycles.

One current technique that attempts to address this problem is the use of a very short dispatch window (e.g., 1 msec). Such a short dispatch window allows the hypervisor to cycle the partitions across the available physical processors. Although this technique guarantees that the partitions will receive their entitled capacity over the dispatch windows, it also creates greater switching overhead and makes it difficult for the hypervisor to maintain processor affinity, which causes performance degradation.

Hence, without a better way to balance the performance goal of fewer dispatches against the functional goal of guaranteeing the virtual processors their entitled CPU cycles, logically partitioned systems will continue to suffer from performance problems.

Summary of the invention

A method, apparatus, system, and signal-bearing medium are provided that, in an embodiment, calculate a preemption credit for a partition if a virtual processor is preempted and the partition is unable to receive its entitled capacity of physical processor cycles during a dispatch window. The preemption credit is the portion of the entitled capacity that the partition is unable to receive. As long as the partition has a remaining preemption credit, in subsequent dispatch windows a portion of the preemption credit is allocated to the virtual processor, and the preemption credit is reduced. In this way, in an embodiment, shared processor partitions may be ensured of receiving their entitled allocation of processor cycles.
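The bookkeeping the summary describes can be sketched as follows. This is our own simplification under assumed names; in particular, the per-window payback amount is an assumption, and the patent's actual mechanism lives in the hypervisor and the logical partition control block:

```python
# Sketch (our simplification, not the patent's code): track a preemption
# credit per partition and pay it back over subsequent dispatch windows.

ENTITLED_MSEC = 64   # entitled capacity per accounting period, in msec
PAYBACK_MSEC = 10    # portion of the credit granted per later window (assumed)

def end_of_window(received_msec, credit):
    """At the end of a dispatch window, grow the credit by the shortfall
    between what the partition received and its entitled capacity."""
    shortfall = max(0, ENTITLED_MSEC - received_msec)
    return credit + shortfall

def next_window_bonus(credit):
    """In a subsequent window, allocate part of the remaining credit to the
    virtual processor and reduce the credit accordingly."""
    bonus = min(PAYBACK_MSEC, credit)
    return bonus, credit - bonus

# Virtual processor V3 from the background example received only 38 of its
# 64 msec entitlement, so its partition accrues a 26 msec preemption credit.
credit = end_of_window(38, 0)
bonus, credit = next_window_bonus(credit)
print(bonus, credit)  # 10 16
```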

Brief description of the drawings

Various embodiments of the present invention are hereinafter described in conjunction with the accompanying drawings:

FIG. 1 depicts a block diagram of an example system for implementing an embodiment of the invention.

FIG. 2A depicts a block diagram of the assignment of virtual processors to physical processors, according to a technique that attempts to provide the entire entitled capacity in a single dispatch of the dispatch window for the virtual processors.

FIG. 2B depicts a block diagram of the assignment of virtual processors to physical processors, according to an embodiment of the invention.

FIG. 3 depicts a block diagram of a logical partition control block, according to an embodiment of the invention.

FIG. 4 depicts a flowchart of example processing for allocating physical processors to virtual processors, according to an embodiment of the invention.

FIG. 5 depicts a flowchart of example processing for allocating physical processors to virtual processors over a period of time, according to an embodiment of the invention.

FIG. 6 depicts a flowchart of example processing for clearing the preemption credits of the partitions, according to an embodiment of the invention.

It is to be noted, however, that the appended drawings illustrate only example embodiments of the invention and are therefore not to be considered limiting of its scope, for the invention may admit to other equally effective embodiments.

Detailed description

Referring to the drawings, wherein like numbers denote like parts throughout the several views, FIG. 1 depicts a high-level block diagram representation of a computer system 100 connected to a network 130, according to an embodiment of the present invention. The major components of the computer system 100 include one or more processors 101, a main memory 102, a terminal interface 111, a storage interface 112, an I/O (input/output) device interface 113, and communications/network interfaces 114, all of which are coupled for inter-component communication via a memory bus 103, an I/O bus 104, and an I/O bus interface unit 105.

The computer system 100 contains one or more general-purpose programmable central processing units (CPUs) 101A, 101B, 101C, and 101D, herein generically referred to as the processor 101. In an embodiment, the computer system 100 contains multiple processors typical of a relatively large system; however, in another embodiment the computer system 100 may alternatively be a single-CPU system. Each processor 101 executes instructions stored in the main memory 102 and may include one or more levels of on-board cache.

The main memory 102 is a random-access semiconductor memory for storing data and programs. The main memory 102 is conceptually a single monolithic entity, but in other embodiments the main memory 102 is a more complex arrangement, such as a hierarchy of caches and other memory devices. For example, memory may exist in multiple levels of caches, and these caches may be further divided by function, so that one cache holds instructions while another holds non-instruction data, which is used by the processor. Memory may further be distributed and associated with different CPUs or sets of CPUs, as is known in any of various so-called non-uniform memory access (NUMA) computer architectures.

The memory 102 is illustrated as containing the primary software components and resources utilized in implementing a logically partitioned computing environment on the computer 100, including a plurality of logical partitions 134 managed by a partition manager or hypervisor 136. Although the partitions 134 and the hypervisor 136 are illustrated as being contained within the memory 102 of the computer system 100, in other embodiments some or all of them may be on different computer systems and may be accessed remotely, e.g., via the network 130. Further, the computer system 100 may use virtual addressing mechanisms that allow the programs of the computer system 100 to behave as if they only have access to a large, single storage entity instead of access to multiple, smaller storage entities. Thus, while the partitions 134 and the hypervisor 136 are illustrated as residing in the memory 102, these elements are not necessarily all completely contained in the same storage device at the same time.

Each of the logical partitions 134 utilizes an operating system 142, which controls the primary operations of the logical partition 134 in the same manner as the operating system of a non-partitioned computer. For example, each operating system 142 may be implemented using the i5/OS operating system available from International Business Machines Corporation, but in other embodiments the operating system 142 may be Linux, AIX, UNIX, Microsoft Windows, or any appropriate operating system. Also, some or all of the operating systems 142 may be the same as or different from one another. Any number of logical partitions 134 may be supported, as is well known in the art, and the number of logical partitions 134 resident at any time in the computer 100 may change dynamically as partitions are added to or removed from the computer 100.

Each of the logical partitions 134 executes in a separate, or independent, memory space, and thus each logical partition acts much the same as an independent, non-partitioned computer from the perspective of each application 144 that executes in each such logical partition. As such, user applications typically do not require any special configuration for use in a partitioned environment. Given the nature of the logical partitions 134 as separate virtual computers, it may be desirable to support inter-partition communication to permit the logical partitions to communicate with one another as if the logical partitions were on separate physical machines. As such, in some implementations it may be desirable to support a virtual local area network (LAN) adapter (not shown) associated with the hypervisor 136 to permit the logical partitions 134 to communicate with one another via a networking protocol such as the Ethernet protocol. In another embodiment, the virtual network adapter may be bridged to a physical adapter, such as the network interface adapter 114. Other manners of supporting communication between partitions may also be supported consistent with various embodiments of the invention.

Although the hypervisor 136 is illustrated as being within the memory 102, in other embodiments all or a portion of the hypervisor 136 may be implemented in firmware or hardware. The hypervisor 136 may perform low-level partition management functions, such as page table management, and may also perform higher-level partition management functions, such as creating and deleting partitions, concurrent I/O maintenance, and allocating processors, memory, and other hardware or software resources to the various partitions 134.

In an embodiment, the hypervisor 136 includes instructions capable of executing on the processor 101, or statements capable of being interpreted by instructions executing on the processor 101, to perform the functions as further described below with reference to FIGS. 4, 5, and 6. In another embodiment, the hypervisor 136 may be implemented in microcode or firmware. In another embodiment, the hypervisor 136 may be implemented in hardware via logic gates and/or other appropriate hardware techniques.

The hypervisor 136 statically and/or dynamically allocates to each logical partition 134 a portion of the available resources in the computer 100. For example, each logical partition 134 may be allocated one or more of the processors 101 and/or one or more hardware threads, as well as a portion of the available memory space. The logical partitions 134 can share specific software and/or hardware resources, such as the processors 101, so that a given resource may be utilized by more than one logical partition. In the alternative, software and hardware resources can be allocated to only one logical partition 134 at a time. Additional resources, e.g., mass storage, backup storage, user input, network connections, and the I/O adapters therefor, are typically allocated to one or more of the logical partitions 134. Resources may be allocated in a number of manners, e.g., on a bus-by-bus basis or on a resource-by-resource basis, with multiple logical partitions sharing resources on the same bus. Some resources may even be allocated to multiple logical partitions at a time. The resources identified herein are examples, and any appropriate resource capable of being allocated may be used.

The hypervisor 136 includes a logical partition control block 146, which the hypervisor uses to allocate resources among the partitions 134. The logical partition control block 146 is further described below with reference to FIG. 3.

The memory bus 103 provides a data communication path for transferring data among the processor 101, the main memory 102, and the I/O bus interface unit 105. The I/O bus interface unit 105 is further coupled to the system I/O bus 104 for transferring data to and from the various I/O units. The I/O bus interface unit 105 communicates with multiple I/O interface units 111, 112, 113, and 114, which are also known as I/O processors (IOPs) or I/O adapters (IOAs), through the system I/O bus 104. The system I/O bus 104 may be, e.g., an industry-standard PCI bus, or any other appropriate bus technology.

The I/O interface units support communication with a variety of storage and I/O devices. For example, the terminal interface unit 111 supports the attachment of one or more user terminals 121, 122, 123, and 124. The storage interface unit 112 supports the attachment of one or more direct access storage devices (DASD) 125, 126, and 127 (which are typically rotating magnetic disk drive storage devices, although they could alternatively be other devices, including arrays of disk drives configured to appear as a single large storage device to a host). The contents of the main memory 102 may be stored to and retrieved from the direct access storage devices 125, 126, and 127.

The I/O and other device interface 113 provides an interface to any of various other input/output devices or devices of other types. Two such devices, the printer 128 and the fax machine 129, are shown in the exemplary embodiment of FIG. 1, but in other embodiments many other such devices may exist, which may be of differing types. The network interface 114 provides one or more communications paths from the computer system 100 to other digital devices and computer systems; such paths may include, e.g., one or more networks 130.

Although the memory bus 103 is shown in FIG. 1 as a relatively simple, single bus structure providing a direct communication path among the processors 101, the main memory 102, and the I/O bus interface 105, in fact the memory bus 103 may comprise multiple different buses or communication paths, which may be arranged in any of various forms, such as point-to-point links in hierarchical, star, or web configurations, multiple hierarchical buses, parallel and redundant paths, etc. Furthermore, while the I/O bus interface 105 and the I/O bus 104 are shown as single respective units, the computer system 100 may in fact contain multiple I/O bus interface units 105 and/or multiple I/O buses 104. While multiple I/O interface units are shown, which separate the system I/O bus 104 from various communications paths running to the various I/O devices, in other embodiments some or all of the I/O devices are connected directly to one or more system I/O buses.

The computer system 100 depicted in FIG. 1 has multiple attached terminals 121, 122, 123, and 124, and may be, e.g., representative of a multi-user "mainframe" computer system. Typically, in such a case the actual number of attached devices is greater than that shown in FIG. 1, although the present invention is not limited to systems of any particular size. The computer system 100 may alternatively be a single-user system, typically containing only a single user display and keyboard input, or it might be a server or similar device that has little or no direct user interface but receives requests from other computer systems (clients). In other embodiments, the computer system 100 may be implemented as a personal computer, portable computer, laptop or notebook computer, PDA (Personal Digital Assistant), tablet computer, pocket computer, telephone, pager, automobile, teleconferencing system, appliance, or any other appropriate type of electronic device.

The network 130 may be any suitable network or combination of networks and may support any appropriate protocol suitable for communication of data and/or code to and from the computer system 100. In various embodiments, the network 130 may represent a storage device or a combination of storage devices, connected either directly or indirectly to the computer system 100. In an embodiment, the network 130 may support Infiniband technology. In another embodiment, the network 130 may support wireless communications. In another embodiment, the network 130 may support hard-wired communications, such as telephone lines or cables. In another embodiment, the network 130 may support the Ethernet IEEE (Institute of Electrical and Electronics Engineers) 802.3x specification. In another embodiment, the network 130 may be the Internet and may support IP (Internet Protocol). In another embodiment, the network 130 may be a local area network (LAN) or a wide area network (WAN). In another embodiment, the network 130 may be a hotspot service provider network. In another embodiment, the network 130 may be an intranet. In another embodiment, the network 130 may be a GPRS (General Packet Radio Service) network. In another embodiment, the network 130 may be an FRS (Family Radio Service) network. In another embodiment, the network 130 may be any appropriate cellular data network or cell-based radio network technology. In another embodiment, the network 130 may be an IEEE 802.11B wireless network. In still another embodiment, the network 130 may be any suitable network or combination of networks. Although one network 130 is shown, in other embodiments any number of networks (of the same or different types) may be present, including zero.

It should be understood that FIG. 1 is intended to depict the representative major components of the computer system 100 at a high level, that individual components may have greater complexity than represented in FIG. 1, that components other than or in addition to those shown in FIG. 1 may be present, and that the number, type, and configuration of such components may vary. Several particular examples of such additional complexity or additional variations are disclosed herein; it should be understood that these are by way of example only and are not necessarily the only such variations.

The various software components illustrated in FIG. 1 and implementing various embodiments of the invention may be implemented in a number of manners, including using various computer software applications, routines, components, programs, objects, modules, data structures, etc., referred to hereinafter as "computer programs," or simply "programs." The computer programs typically comprise one or more instructions that are resident at various times in various memory and storage devices in the computer system 100, and that, when read and executed by one or more processors 101 in the computer system 100, cause the computer system 100 to perform the steps necessary to execute steps or elements comprising the various aspects of an embodiment of the invention.

Moreover, while embodiments of the invention have been, and hereinafter will be, described in the context of fully functioning computer systems, the various embodiments of the invention are capable of being distributed as a program product in a variety of forms, and the invention applies equally regardless of the particular type of signal-bearing medium actually used to carry out the distribution. The programs defining the functions of an embodiment may be delivered to the computer system 100 via a variety of signal-bearing media, which include, but are not limited to:

(1) information permanently stored on a non-rewriteable storage medium, e.g., a read-only memory device attached to or within a computer system, such as a CD-ROM, DVD-R, or DVD+R;

(2) alterable information stored on a rewriteable storage medium, e.g., a hard disk drive (e.g., DASD 125, 126, or 127), CD-RW, DVD-RW, DVD+RW, DVD-RAM, or diskette; or

(3) information conveyed by a communications medium, such as through a computer or telephone network, e.g., the network 130, including wireless communications.

Such signal-bearing media, when carrying machine-readable instructions that direct the functions of the present invention, represent embodiments of the present invention.

Embodiments of the present invention may also be delivered as part of a service engagement with a client corporation, nonprofit organization, government entity, internal organizational structure, or the like. Aspects of these embodiments may include configuring a computer system to perform some or all of the methods described herein, and deploying software systems and web services that implement some or all of the methods described herein. Aspects of these embodiments may also include analyzing the client company, creating recommendations responsive to the analysis, generating software to implement portions of the recommendations, integrating the software into existing processes and infrastructure, metering use of the methods and systems described herein, allocating expenses to users, and billing users for their use of these methods and systems. In addition, various programs described hereinafter may be identified based upon the application for which they are implemented in a specific embodiment of the invention. But, any particular program nomenclature that follows is used merely for convenience, and thus embodiments of the invention should not be limited to use solely in any specific application identified and/or implied by such nomenclature.

The exemplary environments illustrated in FIG. 1 are not intended to limit the present invention. Indeed, other alternative hardware and/or software environments may be used without departing from the scope of the invention.

FIG. 2A depicts a block diagram of the assignment of virtual processors to physical processors, according to a technique that attempts to provide the entire entitled capacity in a single dispatch of the dispatch window for the virtual processors. From a performance standpoint, as long as a virtual processor has work to process, the partition's virtual processor should preferably receive its allocated cycles in as few dispatches as possible. The best possible case is for the virtual processor to receive all of its cycles in a single dispatch of the dispatch window. Fewer dispatches have the performance advantage of less switching overhead, which includes saving and restoring the state of the virtual processor. Fewer dispatches also allow efficient use of the processor caches.

Under some configurations, the performance goal of fewer dispatches conflicts with the functional goal of guaranteeing the virtual processors their entitled cycles; consequently, if the hypervisor attempts to provide the entire entitled capacity within the dispatch window of a single dispatch of a partition, some virtual processors may not receive their full entitled capacity.

To illustrate this point, consider the configuration shown in table 200, with four physical processors (P0, P1, P2, and P3) and five virtual processors (V0, V1, V2, V3, and V4), where one virtual processor is allocated to each of five partitions. In the example of Fig. 2A, each dispatch window (dispatch window 0, dispatch window 1, dispatch window 2, dispatch window 3, dispatch window 4, dispatch window 5, dispatch window 6, and dispatch window 7) represents 10 msec (milliseconds), so each time slot in table 200 represents the allocation of 2 msec of a given physical processor's CPU cycles to a particular virtual processor. Further, each of the five example virtual processors (V0, V1, V2, V3, and V4) has an entitled capacity of 0.8 of a physical processor, meaning that the entitled capacity over the eight dispatch windows is 8 × (10 msec) × 0.8 = 64 msec. An empty slot in table 200 represents time during which the associated physical processor is idle, because each virtual processor can utilize only one physical processor at a time. The pattern in which the virtual processors (V0, V1, V2, V3, and V4) are assigned to time slots in table 200 represents a hypervisor attempting to provide the entire entitled capacity within the dispatch window of a single dispatch of each virtual processor.

The result shown in the example of Fig. 2A is that virtual processors V0, V1, and V2 receive at least their entitled capacity of 64 msec. Virtual processor V0 receives 8 msec from physical processor P0 and 56 msec from physical processor P1. Virtual processor V1 receives 8 msec from physical processor P1 and 56 msec from physical processor P2. Virtual processor V2 receives 4 msec from physical processor P1, 8 msec from physical processor P2, and 56 msec from physical processor P3.

Unfortunately, virtual processor V3 receives only 38 msec of physical CPU cycles (30 msec from physical processor P0 and 8 msec from physical processor P3), and virtual processor V4 receives only 40 msec of physical CPU cycles (40 msec from physical processor P0). Thus, because the hypervisor attempts to provide the entire entitled capacity within the dispatch window of a single dispatch of each virtual processor, virtual processors V3 and V4 do not receive their entitled capacity of 64 msec of physical CPU cycles.
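The arithmetic behind this example can be checked with a few lines of Python. This sketch is an illustration added for clarity only; the variable names and the script itself are not part of the patent:

```python
# Each of the 5 virtual processors is entitled to 0.8 of a physical
# processor, and each dispatch window is 10 msec, so over 8 windows the
# entitlement is 8 * 10 * 0.8 = 64 msec of physical CPU time.
WINDOWS = 8
WINDOW_MSEC = 10
ENTITLEMENT = 0.8

entitled_msec = WINDOWS * WINDOW_MSEC * ENTITLEMENT  # 64.0 msec

# Cycles actually received in the Fig. 2A example, summed per physical
# processor (msec).
received_msec = {
    "V0": 8 + 56,      # P0 + P1
    "V1": 8 + 56,      # P1 + P2
    "V2": 4 + 8 + 56,  # P1 + P2 + P3
    "V3": 30 + 8,      # P0 + P3
    "V4": 40,          # P0
}

shortfall_msec = {vp: max(0.0, entitled_msec - got)
                  for vp, got in received_msec.items()}
print(shortfall_msec)  # V3 is short 26.0 msec and V4 is short 24.0 msec
```

Running the sketch confirms the text: only V3 and V4 fall short of their 64 msec entitlement.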

Fig. 2B depicts a block diagram 250 of the assignment of virtual processors to physical processors, according to an embodiment of the invention. The configuration shown in Fig. 2B has four physical processors (P0, P1, P2, and P3), corresponding to the processors 101, but in other embodiments any number of physical processors may be present. Fig. 2B also includes five virtual processors (V0, V1, V2, V3, and V4), where one virtual processor is allocated to each of five example partitions, e.g., any of the partitions 134, but in other embodiments any number of partitions may be present, and a partition may have any number of associated virtual processors.

Each dispatch window (dispatch window 0, dispatch window 1, dispatch window 2, dispatch window 3, dispatch window 4, dispatch window 5, dispatch window 6, and dispatch window 7) represents 10 msec (milliseconds), so each time slot in table 250 represents the allocation of 2 msec of a given physical processor's CPU cycles to a particular virtual processor. In other embodiments, any appropriate dispatch window may be used.

In this example, each of the five virtual processors (V0, V1, V2, V3, and V4) has an entitled capacity of 0.8 of a physical processor, meaning that the entitled capacity over the eight dispatch windows is 8 × (10 msec) × 0.8 = 64 msec. In other embodiments, any entitled capacity may be used, and the entitled capacity may be expressed in units of CPU time, CPU cycles, a percentage or fraction of a physical processor, or any other appropriate units. An empty slot in table 250 represents time during which the associated physical processor 101 is idle, because each virtual processor can utilize only one physical processor at a time. The example pattern in which the virtual processors (V0, V1, V2, V3, and V4) are assigned to time slots in table 250 represents the hypervisor 136 using preemption techniques, as further described below with reference to Figs. 4, 5, and 6.

Using the techniques of an embodiment of the invention, the hypervisor 136 switches in a preempted virtual processor and detects that its partition 134 cannot use all of its cycles in the current dispatch window, so the hypervisor 136 gives the partition 134 a preemption credit. The amount of the preemption credit is the number of processor cycles that the partition 134 was unable to use in the current dispatch window. The partition 134 retains its preemption credit cycles as dispatch windows change, unlike the example of Fig. 2A, in which a partition's processor cycles may be lost when the dispatch window changes. The hypervisor 136 then allows the partition 134 to use the preemption credit cycles in a subsequent set of dispatch windows, to make up for the opportunity lost in the current dispatch window. When one partition 134 uses its preemption credit cycles, another partition 134, or another set of partitions 134, may not receive its entitled cycles in that dispatch window. In turn, those partitions 134 are given preemption credit cycles, which they may use in subsequent serial dispatch windows. Because the entitled capacity of the partitions in a shared pool cannot exceed the capacity of that pool, the accumulated preemption credits cannot grow without bound.
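A minimal sketch of this bookkeeping follows. The class and method names (`PartitionCredit`, `grant`, `spend`) are illustrative choices and do not appear in the patent:

```python
class PartitionCredit:
    """Per-partition preemption-credit bookkeeping (illustrative only)."""

    def __init__(self, entitlement_msec):
        self.entitlement_msec = entitlement_msec
        self.preemption_credit = 0.0

    def grant(self, received_msec):
        """At the end of a dispatch window, credit the entitled cycles that
        were lost to preemption; the credit carries into later windows."""
        missed = max(0.0, self.entitlement_msec - received_msec)
        self.preemption_credit += missed
        return missed

    def spend(self, spare_msec):
        """In a later dispatch window, convert up to spare_msec of otherwise
        idle physical processor time into extra allocation."""
        spent = min(self.preemption_credit, spare_msec)
        self.preemption_credit -= spent
        return spent

# Virtual processor V4 of Fig. 2B: entitled to 8 msec per window, it
# receives only 2 msec in dispatch window 0, then makes up the missing
# 6 msec in 2 msec slots over windows 1-3.
v4 = PartitionCredit(entitlement_msec=8.0)
v4.grant(received_msec=2.0)                # credit becomes 6.0
extra = [v4.spend(2.0) for _ in range(3)]  # one extra 2 msec slot per window
print(extra, v4.preemption_credit)         # [2.0, 2.0, 2.0] 0.0
```

The final credit of zero matches the bound stated above: credits are drawn down as they are repaid, so they cannot accumulate indefinitely.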

In a steady-state embodiment, with all partitions 134 running CPU-bound workloads, the preemption credits cycle through the set of partitions 134, the capacity of the shared processor pool is fully utilized, and the virtual processors receive long dispatches, which yields improved performance. The cost of an embodiment of the invention is that a partition 134 does not necessarily receive its entitled cycles in every dispatch window. A partition 134 may receive less than its entitled cycles in a particular dispatch window and then use the accumulated preemption credit cycles in subsequent dispatch windows, to make up for the processor cycles lost in the earlier window. The hypervisor 136 need not guarantee that a partition 134 is allocated its entitled capacity during every dispatch window, but as long as the dispatch windows are sufficiently short, the workloads of the partitions 134 are not adversely affected.

The preemption credit technique of an embodiment of the invention is represented in table 250 of Fig. 2B. Time slots marked with an asterisk ("*") in table 250 represent the use of preemption credit cycles. For example, during dispatch window 0, virtual processor V4 does not receive its full entitled allocation of physical processor cycles (8 msec in the example). Instead, virtual processor V4 receives only 2 msec of physical processor cycles in dispatch window 0. Hence, in response to being preempted at the end of dispatch window 0, virtual processor V4 receives a preemption credit of 6 msec (the portion of its entitlement not received), and this preemption credit is used in dispatch window 1, dispatch window 2, and dispatch window 3 to receive additional allocations of physical processor cycles, as indicated by the asterisks.

In a similar manner, virtual processor V3 is preempted at the end of dispatch window 1, so virtual processor V3 receives a preemption credit, which is used in dispatch window 2, dispatch window 3, and dispatch window 4 to receive additional allocations of physical processor cycles, as indicated by the asterisks.

In a similar manner, virtual processor V2 is preempted at the end of dispatch window 2, so virtual processor V2 receives a preemption credit, which is used in dispatch window 3, dispatch window 4, and dispatch window 5 to receive additional allocations of physical processor cycles, as indicated by the asterisks.

In a similar manner, virtual processor V1 is preempted at the end of dispatch window 3, so virtual processor V1 receives a preemption credit, which is used in dispatch window 4, dispatch window 5, and dispatch window 6 to receive additional allocations of physical processor cycles, as indicated by the asterisks. Likewise, virtual processor V0 is preempted at the end of dispatch window 4, so virtual processor V0 also receives a preemption credit, which is used in dispatch window 5, dispatch window 6, and dispatch window 7. Preemption credits are further described below with reference to Fig. 3.

Fig. 3 depicts a block diagram of the logical partition control block 146, according to an embodiment of the invention. The logical partition control block 146 includes a virtual processor control block 302 and a preemption credit 325. Although only one virtual processor control block 302 is shown, in other embodiments any number of virtual processor control blocks may be present, e.g., one virtual processor control block representing each virtual processor, or one virtual processor control block for each partition 134. The virtual processor control block 302 includes records 305, 310, and 315, but in other embodiments any number of records with any appropriate data may be present. Each of the records 305, 310, and 315 includes a partition identifier field 320 and a virtual processor identifier field 330, but in other embodiments more or fewer fields may be present.

The partition identifier field 320 identifies the partition of the partitions 134 associated with the record. The virtual processor identifier field 330 identifies the virtual processor associated with the record. Thus, each of the records 305, 310, and 315 represents the allocation of a virtual processor 330 to the partition 134 identified by the partition identifier 320; the hypervisor 136 may allocate a physical processor 101 to the virtual processor 330 for a period of time within a dispatch window, in order to execute tasks associated with the partition 134. Hence, as used herein, a virtual processor is a period of time, or a number of cycles, of a physical processor allocated to one of the partitions 134, and a virtual processor is represented by a data structure such as a record in the virtual processor control block 302. The virtual processor control block 302 is used by the hypervisor 136, as further described below with reference to Figs. 4, 5, and 6. The preemption credit 325 identifies the amount of preemption credit accrued by the partition associated with the logical partition control block 146, produced by preemption of the partition's associated virtual processor in previous dispatch windows while entitled processor cycles remained unused.
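As an illustration, the fields of Fig. 3 could be modeled with a pair of data structures. The class and attribute names below are hypothetical, chosen only to mirror elements 146, 302, 305-315, 320, 325, and 330:

```python
from dataclasses import dataclass, field

@dataclass
class VirtualProcessorRecord:
    """One of the records 305, 310, 315 in virtual processor control
    block 302 (names are illustrative, not from the patent)."""
    partition_id: int          # field 320: identifies one of the partitions 134
    virtual_processor_id: int  # field 330: identifies the associated virtual processor

@dataclass
class LogicalPartitionControlBlock:
    """Sketch of control block 146: per-virtual-processor records plus the
    accumulated preemption credit 325, here kept in msec of CPU time."""
    records: list = field(default_factory=list)
    preemption_credit_msec: float = 0.0
```

A hypervisor implementation would keep one such control block per partition and consult `preemption_credit_msec` when dispatching, as described with reference to Figs. 4 and 5.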

Fig. 4 depicts a flowchart of example processing for allocating a physical processor to a virtual processor, according to an embodiment of the invention. Control begins at block 400. Control then continues to block 405, where the hypervisor 136 determines a need to allocate a physical processor to a virtual processor. For example, the hypervisor 136 may determine this need based on a partition 134 requesting virtual processor cycles from the hypervisor in order to perform functions of its operating system 142 or applications 144.

Control then continues to block 410, where the hypervisor 136 determines whether the execution of the virtual processor was preempted by another virtual processor. If the determination at block 410 is true, then the virtual processor was preempted by another virtual processor, so control continues to block 415, where the hypervisor 136 determines whether the partition can still receive the full allocation of its entitled processor cycles in the current dispatch window. If the determination at block 415 is true, then control continues to block 420, where the hypervisor 136 allocates the entire time period to the virtual processor, as further described below with reference to Fig. 5. Control then continues to block 499, where the logic of Fig. 4 returns.

If the determination at block 415 is false, then the partition cannot receive the full entitled allocation of its processor cycles in the current dispatch window, so control continues to block 425, where the hypervisor 136 calculates a preemption credit 325 for the partition as the portion of the entitled allocation of processor cycles that the partition is unable to use in the current dispatch window. In an embodiment, the hypervisor 136 calculates the preemption credit 325 as the partition's full entitled allocation minus (the end time of the dispatch window minus the current time), i.e., the entitlement minus the time remaining in the window.

Control then continues to block 430, where the hypervisor 136 saves the calculated preemption credit 325 in the logical partition control block 146 associated with the partition. Control then continues to block 499, where the logic of Fig. 4 returns.

If the determination at block 410 is false, then the virtual processor was not preempted, so control continues to block 435, where the hypervisor 136 allocates all of the entitled processor cycles in the dispatch window to the virtual processor. Control then continues to block 499, where the logic of Fig. 4 returns.
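The Fig. 4 flow can be condensed into a single function. Two points are interpretations rather than statements from the text: the block 415 test "the partition can still receive its full entitled allocation" is read as "the time remaining in the window is at least the entitlement," and the block 425 formula is read as entitlement minus time remaining:

```python
def dispatch_decision(preempted, entitlement_msec, window_end_msec, now_msec):
    """Sketch of the Fig. 4 flow (blocks 410-435). Returns
    ("allocate", msec) when the full entitlement can be granted, or
    ("credit", msec) when a preemption credit must be recorded instead."""
    remaining_msec = window_end_msec - now_msec
    # Blocks 410/415: not preempted, or enough of the window remains.
    if not preempted or remaining_msec >= entitlement_msec:
        return ("allocate", entitlement_msec)
    # Block 425: credit = full entitlement minus the time left in the window.
    return ("credit", entitlement_msec - remaining_msec)
```

For example, with an 8 msec entitlement and a window ending at t = 10 msec, a virtual processor preempted until t = 6 msec has only 4 msec left and is credited the missing 4 msec.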

Fig. 5 depicts a flowchart of example processing for allocating a physical processor to a virtual processor, according to an embodiment of the invention. Control begins at block 500. Control then continues to block 505, where the hypervisor 136 allocates the entire dispatch-window time period of CPU cycles of the physical processor 101 to the virtual processor. Control then continues to block 510, where the hypervisor 136 determines whether the partition has any remaining preemption credit 325. If the determination at block 510 is true, then the partition has remaining preemption credit 325, so control continues to block 515, where the hypervisor calculates an allocation for the partition as the preemption credit 325 minus (the end time of the dispatch window minus the current time), where the allocation is not greater than the preemption credit. Control then continues to block 520, where the hypervisor 136 allocates the calculated time period of physical processor CPU cycles to the virtual processor. Control then continues to block 525, where the hypervisor 136 sets the preemption credit 325 of the partition to the current preemption credit 325 minus the calculated allocation time period. Control then continues to block 599, where the logic of Fig. 5 returns.

If the determination at block 510 is false, then the partition does not have any remaining preemption credit 325, so control continues to block 530, where the hypervisor 136 allocates the physical processor to the virtual processor for the full dispatch-window time period. Control then continues to block 599, where the logic of Fig. 5 returns.
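The Fig. 5 flow reduces to a short function. The `spare_msec` parameter is an illustrative addition, and capping the extra allocation at both the remaining credit and the spare capacity is one reading of block 515, whose literal formula does not survive translation cleanly:

```python
def dispatch_with_credit(window_msec, preemption_credit_msec, spare_msec):
    """Sketch of Fig. 5 (blocks 505-530): grant the full window allocation,
    then, if a preemption credit remains (block 510), an extra allocation
    drawn from the credit (blocks 515-520). Returns (total allocation,
    remaining credit); the second value reflects the block 525 update,
    which subtracts the spent portion from the credit."""
    extra = min(preemption_credit_msec, spare_msec)
    return window_msec + extra, preemption_credit_msec - extra
```

With the Fig. 2B numbers, a partition holding a 6 msec credit that finds one spare 2 msec slot receives 8 + 2 = 10 msec and carries a 4 msec credit into the next window.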

Fig. 6 depicts a flowchart of example processing for clearing the preemption credit 325 of a partition, according to an embodiment of the invention. Control begins at block 600. Control then continues to block 605, where the virtual processor notifies the hypervisor 136 that the virtual processor no longer has remaining tasks for its partition. Control then continues to block 610, where the hypervisor clears the preemption credit 325 associated with the partition. Control then continues to block 699, where the logic of Fig. 6 returns.

In the foregoing detailed description of exemplary embodiments of the invention, reference was made to the accompanying drawings (where like numbers represent like elements), which form a part hereof, and in which were shown by way of illustration specific exemplary embodiments in which the invention may be practiced. These embodiments were described in sufficient detail to enable those skilled in the art to practice the invention, but other embodiments may be utilized, and logical, mechanical, electrical, and other changes may be made without departing from the scope of the invention. Different instances of the word "embodiment" as used within this specification do not necessarily refer to the same embodiment, but they may. The foregoing detailed description is, therefore, not to be taken in a limiting sense, and the scope of the present invention is defined only by the appended claims.

In the previous description, numerous specific details were set forth to provide a thorough understanding of the invention. The invention, however, may be practiced without these specific details. In other instances, well-known circuits, structures, and techniques have not been shown in detail in order not to obscure the invention.

Claims (17)

1. A method for allocating entitled processor cycles for a preempted virtual processor, comprising:
determining whether it is true that a virtual processor was preempted and could not receive an entitled capacity of physical processor cycles during a dispatch window of a partition;
if the determination is true, allocating the portion of the entitled capacity that the virtual processor could not receive to the virtual processor during at least one subsequent dispatch window; and
if the determination is false, allocating the entitled capacity to the virtual processor during a subsequent dispatch window.
2. The method according to claim 1, wherein the allocating further comprises:
calculating a preemption credit for the partition based on the entitled capacity of physical processor cycles that the partition could not receive during the dispatch window.
3. The method according to claim 2, wherein the allocating further comprises:
deciding whether the partition still has any remaining preemption credit; and
if the decision is true, calculating a portion of the preemption credit, allocating the portion to the virtual processor for a subsequent dispatch window, and subtracting the portion from the preemption credit.
4. The method according to claim 3, wherein the calculating of the portion of the preemption credit comprises:
calculating a difference between the preemption credit and a time remaining until the dispatch window ends.
5. The method according to claim 3, further comprising:
if the decision is false, allocating the entitled capacity of physical processor cycles to the virtual processor for a subsequent dispatch window.
6. The method according to claim 2, further comprising:
clearing the preemption credit in response to the virtual processor having no remaining tasks associated with the partition.
7. A computer system comprising:
a physical processor; and
a storage device encoded with instructions, wherein the instructions, when executed on the physical processor, comprise:
determining whether it is true that a virtual processor was preempted and could not receive an entitled capacity of physical processor cycles during a dispatch window of a partition;
if the determination is true, allocating the portion of the entitled capacity that the virtual processor could not receive to the virtual processor during at least one subsequent dispatch window, and
if the determination is false, allocating the entitled capacity to the virtual processor during a subsequent dispatch window.
8. The computer system according to claim 7, wherein the allocating further comprises:
calculating a preemption credit for the partition based on the portion of the entitled capacity of physical processor cycles that the partition could not receive during the dispatch window.
9. The computer system according to claim 8, wherein the allocating further comprises:
deciding whether the partition still has any remaining preemption credit; and
if the decision is true, calculating a portion of the preemption credit, allocating the portion of the preemption credit to the virtual processor for a subsequent dispatch window, and subtracting the portion from the preemption credit.
10. The computer system according to claim 9, wherein the calculating of the portion of the preemption credit comprises:
calculating a difference between the preemption credit and a time remaining until the dispatch window ends.
11. The computer system according to claim 9, wherein the allocating further comprises:
if the decision is false, allocating the entitled capacity of physical processor cycles to the virtual processor for a subsequent dispatch window.
12. The computer system according to claim 8, wherein the instructions further comprise:
clearing the preemption credit in response to the virtual processor having no remaining tasks associated with the partition.
13. A method for configuring a computer, comprising:
configuring the computer to determine whether it is true that a virtual processor was preempted and could not receive an entitled capacity of physical processor cycles during a dispatch window of a partition;
configuring the computer to, if the determination is true, allocate the portion of the entitled capacity that the virtual processor could not receive to the virtual processor during at least one subsequent dispatch window; and
configuring the computer to, if the determination is false, allocate the entitled capacity to the virtual processor during a subsequent dispatch window.
14. The method according to claim 13, wherein the configuring the computer to allocate further comprises:
configuring the computer to calculate a preemption credit for the partition based on the portion of the entitled capacity of physical processor cycles that the partition could not receive during the dispatch window.
15. The method according to claim 14, wherein the configuring the computer to allocate further comprises:
configuring the computer to decide whether the partition still has any remaining preemption credit; and
configuring the computer to, if the partition still has any remaining preemption credit, calculate a portion of the preemption credit, allocate the portion of the preemption credit to the virtual processor for a subsequent dispatch window, and subtract the portion from the preemption credit.
16. The method according to claim 15, wherein the configuring the computer to calculate the portion of the preemption credit comprises:
configuring the computer to calculate a difference between the preemption credit and a time remaining until the dispatch window ends.
17. The method according to claim 15, further comprising:
configuring the computer to, if the partition does not have any remaining preemption credit, allocate the entitled capacity of physical processor cycles to the virtual processor for a subsequent dispatch window.
CNB2006100582239A 2005-03-30 2006-02-24 Method and system for allocating entitled processor cycles for preempted virtual processors CN100390740C (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
US11/094,712 US7613897B2 (en) 2005-03-30 2005-03-30 Allocating entitled processor cycles for preempted virtual processors
US11/094,712 2005-03-30

Publications (2)

Publication Number Publication Date
CN1841331A CN1841331A (en) 2006-10-04
CN100390740C true CN100390740C (en) 2008-05-28

Family

ID=37030370

Family Applications (1)

Application Number Title Priority Date Filing Date
CNB2006100582239A CN100390740C (en) 2005-03-30 2006-02-24 Method and system for allocating entitled processor cycles for preempted virtual processors

Country Status (2)

Country Link
US (1) US7613897B2 (en)
CN (1) CN100390740C (en)

Families Citing this family (23)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8281308B1 (en) * 2007-07-23 2012-10-02 Oracle America, Inc. Virtual core remapping based on temperature
US7844970B2 (en) * 2006-08-22 2010-11-30 International Business Machines Corporation Method and apparatus to control priority preemption of tasks
US7698530B2 (en) * 2007-03-28 2010-04-13 International Business Machines Corporation Workload management in virtualized data processing environment
US8219995B2 (en) * 2007-03-28 2012-07-10 International Business Machines Corporation Capturing hardware statistics for partitions to enable dispatching and scheduling efficiency
US7698531B2 (en) * 2007-03-28 2010-04-13 International Business Machines Corporation Workload management in virtualized data processing environment
US7617375B2 (en) * 2007-03-28 2009-11-10 International Business Machines Corporation Workload management in virtualized data processing environment
JP4523965B2 (en) * 2007-11-30 2010-08-11 株式会社日立製作所 Resource allocation method, resource allocation program, and operation management apparatus
US8312456B2 (en) * 2008-05-30 2012-11-13 International Business Machines Corporation System and method for optimizing interrupt processing in virtualized environments
US8677372B2 (en) * 2009-12-17 2014-03-18 International Business Machines Corporation Method, data processing program, and computer program product to compensate for coupling overhead in a distributed computing system, and corresponding overhead calculator for a distributed computing system and corresponding computer system
JP5388909B2 (en) * 2010-03-09 2014-01-15 株式会社日立製作所 Hypervisor, computer system, and virtual processor scheduling method
JP5376058B2 (en) * 2010-06-30 2013-12-25 富士通株式会社 System control device, information processing system, and data saving and restoring method for information processing system
US8122167B1 (en) 2010-08-06 2012-02-21 International Business Machines Corporation Polling in a virtualized information handling system
US8918784B1 (en) * 2010-12-21 2014-12-23 Amazon Technologies, Inc. Providing service quality levels through CPU scheduling
US9817700B2 (en) * 2011-04-26 2017-11-14 International Business Machines Corporation Dynamic data partitioning for optimal resource utilization in a parallel data processing system
US9183030B2 (en) 2011-04-27 2015-11-10 Microsoft Technology Licensing, Llc Virtual processor allocation techniques
US9052932B2 (en) * 2012-12-17 2015-06-09 International Business Machines Corporation Hybrid virtual machine configuration management
US9043575B2 (en) * 2013-03-15 2015-05-26 International Business Machines Corporation Managing CPU resources for high availability micro-partitions
US9244826B2 (en) 2013-03-15 2016-01-26 International Business Machines Corporation Managing CPU resources for high availability micro-partitions
US9189381B2 (en) * 2013-03-15 2015-11-17 International Business Machines Corporation Managing CPU resources for high availability micro-partitions
US10649796B2 (en) * 2014-06-27 2020-05-12 Amazon Technologies, Inc. Rolling resource credits for scheduling of virtual computer resources
US9710039B2 (en) * 2014-07-17 2017-07-18 International Business Machines Corporation Calculating expected maximum CPU power available for use
US9886306B2 (en) * 2014-11-21 2018-02-06 International Business Machines Corporation Cross-platform scheduling with long-term fairness and platform-specific optimization
US9934287B1 (en) * 2017-07-25 2018-04-03 Capital One Services, Llc Systems and methods for expedited large file processing

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5210872A (en) * 1991-06-28 1993-05-11 Texas Instruments Inc. Critical task scheduling for real-time systems
US5386561A (en) * 1992-03-31 1995-01-31 International Business Machines Corporation Method of integrated system load control through dynamic time-slicing in a virtual storage environment
JPH0954699A (en) * 1995-08-11 1997-02-25 Fujitsu Ltd Process scheduler of computer
JPH10301793A (en) * 1997-04-30 1998-11-13 Toshiba Corp Information processor and scheduling method
US7448036B2 (en) * 2002-05-02 2008-11-04 International Business Machines Corporation System and method for thread scheduling with weak preemption policy
US20050192937A1 (en) * 2004-02-26 2005-09-01 International Business Machines Corporation Dynamic query optimization

Also Published As

Publication number Publication date
US7613897B2 (en) 2009-11-03
CN1841331A (en) 2006-10-04
US20060230400A1 (en) 2006-10-12


Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20080528

Termination date: 20190224