EP1405171A4 - Method and apparatus to use task priority to scale processor performance - Google Patents
- Publication number
- EP1405171A4
- Authority
- EP
- European Patent Office
- Prior art keywords
- performance
- task
- priority
- processor
- level
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Withdrawn
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
- G06F9/48—Program initiating; Program switching, e.g. by interrupt
- G06F9/4806—Task transfer initiation or dispatching
- G06F9/4843—Task transfer initiation or dispatching by program, e.g. task dispatcher, supervisor, operating system
- G06F9/4881—Scheduling strategies for dispatcher, e.g. round robin, multi-level priority queues
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F1/00—Details not covered by groups G06F3/00 - G06F13/00 and G06F21/00
- G06F1/26—Power supply means, e.g. regulation thereof
- G06F1/32—Means for saving power
- G06F1/3203—Power management, i.e. event-based initiation of a power-saving mode
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F1/00—Details not covered by groups G06F3/00 - G06F13/00 and G06F21/00
- G06F1/26—Power supply means, e.g. regulation thereof
- G06F1/32—Means for saving power
- G06F1/3203—Power management, i.e. event-based initiation of a power-saving mode
- G06F1/3234—Power saving characterised by the action undertaken
- G06F1/324—Power saving characterised by the action undertaken by lowering clock frequency
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F1/00—Details not covered by groups G06F3/00 - G06F13/00 and G06F21/00
- G06F1/26—Power supply means, e.g. regulation thereof
- G06F1/32—Means for saving power
- G06F1/3203—Power management, i.e. event-based initiation of a power-saving mode
- G06F1/3234—Power saving characterised by the action undertaken
- G06F1/329—Power saving characterised by the action undertaken by task scheduling
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F1/00—Details not covered by groups G06F3/00 - G06F13/00 and G06F21/00
- G06F1/26—Power supply means, e.g. regulation thereof
- G06F1/32—Means for saving power
- G06F1/3203—Power management, i.e. event-based initiation of a power-saving mode
- G06F1/3234—Power saving characterised by the action undertaken
- G06F1/3296—Power saving characterised by the action undertaken by lowering the supply or operating voltage
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/30—Arrangements for executing machine instructions, e.g. instruction decode
- G06F9/38—Concurrent instruction execution, e.g. pipeline, look ahead
- G06F9/3867—Concurrent instruction execution, e.g. pipeline, look ahead using instruction pipelines
- G06F9/3869—Implementation aspects, e.g. pipeline latches; pipeline synchronisation and clocking
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
- G06F9/50—Allocation of resources, e.g. of the central processing unit [CPU]
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02D—CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
- Y02D10/00—Energy efficient computing, e.g. low power processors, power management or thermal management
Definitions
- the present invention relates to a method for using task priority to scale processor performance.
- a method and apparatus that reads a task's priority level, associates that level with a processor performance level, and then sets the processor to that performance level.
- variable performance products include Intel's XScale architecture and processors made by Transmeta.
- systems will vary processor performance in order to manage processor heat build-up. By slowing processor speed the system can dissipate excess heat. In this manner the system manages the trade-off between processor performance and power consumption to avoid overheating.
- in systems that utilize batteries for operation, like laptop computers and the like, similar trade-offs exist.
- the system adjusts the performance of the processor. Higher performance correlates to greater power consumption, and therefore, less operating time on a single battery.
- Computer systems utilize a variety of techniques to alter or adjust processor performance.
- One such means consists of adjusting the processor's internal clock speed. This directly affects performance in that processors generally execute instructions based on the clock rate.
- Other systems increase the power savings by regulating voltage with the clock rate. At lower clock rates the processor can operate correctly at a lower voltage.
- a previously unrelated feature of modern computer systems is the concept of task priority.
- a task is an operating system concept whereby a concurrent thread of control is recognized and controlled by the operating system.
- the operating system associates certain information with the task to coordinate the system to accomplish the task.
- the operating system maintains certain information related to the task for bookkeeping purposes.
- the process of swapping between tasks is often called context switching.
- One of the pieces of information that the operating system tracks for a particular task is the task priority.
- tasks receive priorities that help the operating system determine the relative importance of the tasks that compete for processor time.
- An object of the present invention comprises using task priority to scale processor performance in a computer device.
- the present invention intends to overcome the difficulties encountered heretofore.
- the invention uses a general-purpose computer with a processor capable of operating at a plurality of performance levels, and has an operating system with the capability to set a plurality of task priority levels for tasks performed on the computer.
- the method reads a task's priority level, associates the task's priority level with a performance level, and sets the processor to operate at the performance level.
- Figure 1 depicts a flow for the operation of an operating system API.
- Figure 2 depicts a flow chart for setting processor performance based on a map value.
- Figure 3 depicts a flow chart for context switching.
- Figure 4 depicts a flow chart for a driver call for setting processor performance level.
- Figure 5 depicts a flow chart for a driver call for converting unsigned integer map values into hardware specific performance values.
- the present invention consists of four interrelated components: (1) an operating system application program interface (API) that initially maps priority to performance; (2) code for setting a task's processor performance level based on the map; (3) altered context switching code to account for task performance level; and (4) a performance control driver to convert the map into hardware specific settings and to set performance levels.
- API: operating system application program interface
- Figure 1 shows a flow chart depicting the operation of the operating system API.
- the API would map task priority into performance settings based on an algorithm, either pre-defined or, in the alternative, dynamic.
- the API might contain an initial level and an increment. All task priorities below the initial level would receive the lowest possible performance level. Each progressively higher task priority would receive an incrementally higher performance level, up to the maximum possible performance level, at which point all higher priority tasks would receive the maximum performance level. Alternatively, the highest task priority could receive the highest performance level, and all lower task priorities would receive a correspondingly decremented performance level until the lowest possible performance level was reached.
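The ascending mapping described above can be sketched in C; the function name, parameter names, and level ranges are illustrative assumptions, not taken from the patent:

```c
/* Map a task priority to a generic performance level: priorities below
 * `initial` get the minimum level, and each progressively higher
 * priority adds `increment`, clamped at the maximum level. */
static unsigned int map_priority(unsigned int priority,
                                 unsigned int initial,
                                 unsigned int increment,
                                 unsigned int min_level,
                                 unsigned int max_level)
{
    if (priority < initial)
        return min_level;                       /* below threshold: lowest level */
    unsigned int level = min_level + (priority - initial) * increment;
    return (level > max_level) ? max_level : level;  /* cap at maximum */
}
```

The descending variant mentioned in the alternative would simply anchor the highest priority at `max_level` and subtract the increment instead.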
- the API would first check to determine if a map is specified. In other words, the routine inquires into whether the system has the information necessary to make a map. If not, the API returns an error. Next, the API would conduct an integrity check to ensure that a specified map contains valid values. In other words, the API needs to determine if the existing task priority values and the performance levels are the correct values and levels for the particular system. If not, the API returns an error. At this point the API applies one of the foregoing algorithms to map the task priority settings to performance levels. Preferably, the API creates a map by calculating an unsigned integer value representing a performance level corresponding to each task priority. The API creates a map independent of the specific hardware components of the system.
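The API flow above can be sketched as follows; the error codes, table sizes, and parameter layout are hypothetical stand-ins for whatever the operating system actually supplies:

```c
#include <stddef.h>

enum api_error { API_OK = 0, API_ERR_NO_MAP, API_ERR_INVALID };

#define NUM_PRIORITIES 8   /* assumed number of task priority levels */
#define MAX_PERF_LEVEL 3   /* assumed highest generic performance level */

/* Build the generic (hardware-independent) priority-to-performance map:
 * fail if no map parameters were supplied, fail the integrity check if
 * they are invalid for this system, otherwise fill one unsigned-integer
 * performance level per task priority. params[0] is the initial level,
 * params[1] the increment, per the algorithm described earlier. */
enum api_error build_priority_map(const unsigned int *params,
                                  unsigned int out_map[NUM_PRIORITIES])
{
    if (params == NULL)
        return API_ERR_NO_MAP;                  /* no map specified */
    unsigned int initial = params[0], increment = params[1];
    if (initial >= NUM_PRIORITIES || increment == 0)
        return API_ERR_INVALID;                 /* integrity check failed */
    for (unsigned int p = 0; p < NUM_PRIORITIES; p++) {
        unsigned int level = (p < initial) ? 0 : (p - initial) * increment;
        out_map[p] = (level > MAX_PERF_LEVEL) ? MAX_PERF_LEVEL : level;
    }
    return API_OK;
}
```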
- the performance values while meaningful in relation to each other, lack relevance to the specific system hardware.
- the API therefore, needs to call a performance control driver to convert the performance map into specific system settings corresponding to actual performance values. For example, if processor performance varies according to voltage the performance control driver would convert the unsigned integers into a voltage, or if the performance varies according to clock speed the performance control driver would convert each unsigned integer into a clock speed. In other words, the performance control driver converts the generic unsigned integers into specific performance values with meaning to the hardware components of the particular system.
- the step is merely added to the pre-existing routine used to calculate the specific performance control setting, since the system already can adjust performance. Next, the newly created map is stored to memory for later recall.
- the system could store the map with the relative unsigned integer performance values, passing the generic performance values to the performance control driver each time the system performs a context switch.
- the performance control driver would then need to convert the generic number into a device specific value for each context switch. If the performance control driver completes the device specific conversion initially, for example at boot time, then the performance control driver could be supplied with the device specific setting at each context switch, thereby saving time and processor resources.
- FIG. 2 depicts a flow chart for this portion of the invention.
- Each task has a task descriptor to store such things as the task's priority level. It is necessary to associate the task's priority level with a performance level and store that performance level in the task's descriptor. This routine must be performed initially, to update the task descriptor to reflect the performance level associated with the task's priority, and thereafter each time a task's priority is changed, to update the descriptor as necessary.
- a task's priority/performance level can change based on normal processing or in association with a dynamic algorithm for associating task priority with performance.
- task priority is not usually a fixed attribute of a task (discussed in greater detail hereinbelow).
- a change in task priority will require a change to the task descriptor performance level.
- the routine begins by determining whether the new task priority is equal to the old priority. If they are the same then nothing needs to change, otherwise the routine continues. To continue, the routine recovers the stored map and identifies the performance value associated with the new task priority. Then the task descriptor is updated with the new performance value. The routine must then determine if the actual processor performance level matches the new performance value. If not then the system forces a context switch in the manner described hereinbelow in reference to the third component of the invention.
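The priority-change routine above can be sketched as follows; the descriptor layout, the global state, and the forced-switch flag are illustrative assumptions:

```c
#include <stdbool.h>

/* Minimal stand-in for the task descriptor described in the text. */
struct task_descriptor {
    unsigned int priority;
    unsigned int perf_level;
};

static unsigned int current_perf_level;   /* level the processor is running at */
static bool context_switch_forced;        /* set when a switch must be forced */

/* Update a task's priority: do nothing if unchanged, otherwise look up
 * the new performance value in the stored map, write it into the
 * descriptor, and force a context switch if the processor is not
 * already running at that level. */
void set_task_priority(struct task_descriptor *task,
                       unsigned int new_priority,
                       const unsigned int *map)  /* priority -> perf level */
{
    if (new_priority == task->priority)
        return;                                  /* nothing needs to change */
    task->priority = new_priority;
    task->perf_level = map[new_priority];        /* recover value from map */
    if (task->perf_level != current_perf_level)
        context_switch_forced = true;            /* apply level via a switch */
}
```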
- Figure 3 depicts a portion of the context switch flow chart.
- the system must call the performance control driver to change the current processor performance according to the value specified for the task about to become current.
- the preferred embodiment changes the processor performance as early as possible in the context switch.
- the first step involves retrieving the performance values from the task descriptor, preferably the task descriptor contains a device specific value.
- (Alternatively, the context switching code could use the stored map to find the hardware specific performance level corresponding to the task's priority. This step would be accomplished by associating the task's priority level with the performance level stored in the map.)
- the performance control driver is called and passed the performance values retrieved from the previous step.
- the performance control driver accomplishes the adjustment to the processor.
- the context switch code determines if the performance control driver succeeded in changing the processor performance level. If not, then an error is logged. After the performance management step is complete, the context switch is completed as normal.
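The performance step run early in the context switch can be sketched as follows; the stub driver, the four supported levels, and the error counter are assumptions made for illustration (a real driver would touch clock or voltage hardware):

```c
typedef int error_code;              /* 0 = success, per the prototypes below */
typedef unsigned int performance_val;

static int errors_logged;            /* stand-in for the system error log */

/* Stub standing in for the performance control driver call; assume the
 * hardware supports generic levels 0..3 and rejects anything else. */
static error_code Set_Performance_stub(const performance_val *new_value)
{
    return (*new_value <= 3) ? 0 : -1;
}

/* Context-switch performance step: pass the incoming task's stored
 * value to the driver; on failure, log the error and carry on with the
 * switch as normal. */
void apply_task_performance(performance_val level)
{
    if (Set_Performance_stub(&level) != 0)
        errors_logged++;             /* error is logged, switch continues */
}
```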
- Figures 4-5 depict flow charts for the two driver calls.
- Exemplary function prototypes would be of the following form:
error_code Set_Performance (const performance_val * const new_value)
error_code Convert_Mapping_Table (unsigned int in_table[], performance_val * out_table[])
- the Set_Performance function is used during context switching to actually change the processor performance level based on the specific hardware configuration of the relevant computer system. This begins by first determining if the new performance level is the same as the current performance level. If they are the same then the routine ends, and no change is necessary. Otherwise, the routine proceeds by checking the new value to determine if it is a valid setting. If the new value is not a valid supported setting the routine returns an error.
- the routine continues and actually performs the hardware manipulation to adjust the performance level to the new value. Again, the actual particulars of this adjustment will vary based on the power management provisions of the system. In some systems processor performance may vary through changes to the clock speed, on others through changes to voltage. Regardless of the exact form of the performance manipulation, the means for changing processor performance should already be known due to the fact that the processor/system is already designed for such manipulation.
- the routine verifies that the adjustment in processor performance succeeded. If not an error is returned. Otherwise, the new processor performance value is saved for subsequent use and the routine successfully terminates.
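A sketch of Set_Performance following the steps above: skip if unchanged, validate, apply, verify, save. The hardware access is mocked with a variable, and the error codes and four-level limit are assumptions:

```c
typedef int error_code;
typedef unsigned int performance_val;

#define ERR_INVALID (-1)
#define ERR_HW_FAIL (-2)

static performance_val current_value;   /* last successfully applied level */
static performance_val hw_register;     /* stand-in for the real hardware */

error_code Set_Performance(const performance_val *new_value)
{
    if (*new_value == current_value)
        return 0;                        /* already at the requested level */
    if (*new_value > 3)
        return ERR_INVALID;              /* not a valid supported setting */
    hw_register = *new_value;            /* perform the hardware manipulation */
    if (hw_register != *new_value)
        return ERR_HW_FAIL;              /* verify the adjustment succeeded */
    current_value = *new_value;          /* save for subsequent use */
    return 0;
}
```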
- the Convert_Mapping_Table function is used to initially convert the generic unsigned integer API map values into hardware specific performance values.
- the routine begins by executing a loop that checks the validity of all of the entries in the in_table to ensure that there are no unsupported values in the table.
- the in_table contains the unsigned integer values created during the initial mapping step.
- the routine returns an error if in_table contains any invalid entries. Otherwise, the routine allocates memory for each instance of the performance_val structure (one instance per entry in the in_table).
- the routine loops through the in_table, converts each unsigned integer into a hardware dependent value, and stores in each corresponding entry in out_table a pointer to a performance_val structure holding the specific performance value that corresponds to the generic performance value in the matching entry of in_table.
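A sketch of Convert_Mapping_Table following the prototype given earlier: validate every generic entry, allocate one performance_val per entry, and fill out_table with pointers to hardware-specific values. The fixed table size and the clock-speed conversion are made-up examples of what a particular driver might do:

```c
#include <stdlib.h>

typedef int error_code;
typedef unsigned int performance_val;   /* assumed here: clock speed in MHz */

#define TABLE_SIZE  4   /* assumed number of map entries */
#define MAX_GENERIC 3   /* assumed highest supported generic level */

error_code Convert_Mapping_Table(unsigned int in_table[],
                                 performance_val *out_table[])
{
    /* First pass: reject any unsupported generic value in the table. */
    for (int i = 0; i < TABLE_SIZE; i++)
        if (in_table[i] > MAX_GENERIC)
            return -1;

    /* Second pass: allocate one performance_val per entry and convert,
     * e.g. generic levels 0..3 become 100..400 MHz clock settings. */
    for (int i = 0; i < TABLE_SIZE; i++) {
        out_table[i] = malloc(sizeof(performance_val));
        *out_table[i] = (in_table[i] + 1) * 100;
    }
    return 0;
}
```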
- the present invention is primarily applicable to any multi-tasking software running on a system that supports a processor with variable performance capability. While the most suitable software matrix for the present invention is an extensible operating system, the invention can be applied to any software system that has a notion of multiple priorities. It should be understood that while the present invention is described in terms of systems that assign task priority in a manner wherein the highest numerical priority equates with the highest level of importance, the invention applies equally to systems that assign the highest level of importance to the lowest numerical priority. Additionally, while execution and control of the routines disclosed herein is discussed in reference to computer systems with an operating system, the invention is not so limited; any computer system with some form of a runtime environment can utilize the routines disclosed herein.
- the most suitable system hardware is one that provides a reliable and efficient means for varying processor performance.
- the best example comprises a processor with architecturally defined mechanism for changing performance, like changing clock speed, which acts quickly and consistently.
- the present invention can also work with systems with performance-control hardware that, for example, alter processor performance through board-level manipulation of the processor or system voltage to affect processor performance. Performance control in these systems typically is less reliable and slower. To some degree this may limit the effectiveness of the present invention. For example, the longer it takes to adjust and stabilize the processor performance level, the less frequently processor performance should be adjusted. This drawback would not, however, render the invention useless.
- the changes to processor performance would not be strictly limited to changes in task priority.
- the present invention could be used to change processor performance in response to exception processing or in response to system interrupts.
- in the case of interrupts, instead of just letting the system run at the current performance level, the mechanism of the present invention could easily be applied to raise performance levels in response to a system interrupt, or just in response to certain system interrupts.
- in the case of exceptions like invalid memory access, protection violation, integer overflow, or division by zero, the mechanism of the present invention could be easily adapted to raise performance levels to assist in responding to exceptions more quickly.
- Systems use various protocols to temporarily boost the priority of the lower priority task to minimize the wait time of the higher priority task. With the mechanism of the present invention these protocols would need to also determine if processor performance should change along with task priority.
- Another aspect of changing priority involves systems that implement aging mechanisms by which tasks that are ready to run are assigned a scheduling priority that gradually increases over time so that even low-priority tasks eventually get some processor time. In this case, the task will be scheduled according to the aged priority, but will run at its original lower priority.
- the system would need to choose between letting the performance level increase according to the increasing task priority, or locking the performance level while allowing the task priority to temporarily elevate based on age. Finally, some consideration should be given to the impact of the present invention on scheduling algorithms.
- RMA (rate-monotonic analysis)
- one of the primary motivations for varying processor performance relates to managing the performance versus power consumption trade-off, and the corresponding performance versus heat dissipation trade-off. While the present invention is applicable to the management of this problem, it is not so limited. The invention is applicable without significant alteration to manage any form of performance trade-off that is operative at run time, like, performance versus system lifetime, performance versus emission level, and the like.
- the present invention is particularly well suited for application to embedded real-time operations, although it is not limited thereto.
- embedded systems there is a premium on efficient use of system resources.
- Embedded computer systems usually operate in environments that limit the available memory and processor power, and that limit the size of the systems used.
- real-time systems must often perform a task in a specified period of time.
- the wide area of emerging consumer electronics products and Internet appliances often operate remotely and under battery power.
Abstract
Description
Claims
Applications Claiming Priority (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US87359101A | 2001-06-04 | 2001-06-04 | |
US873591 | 2001-06-04 | ||
PCT/US2002/017309 WO2002099639A1 (en) | 2001-06-04 | 2002-06-03 | Method and apparatus to use task priority to scale processor performance |
Publications (2)
Publication Number | Publication Date |
---|---|
EP1405171A1 EP1405171A1 (en) | 2004-04-07 |
EP1405171A4 true EP1405171A4 (en) | 2005-08-24 |
Family
ID=25361946
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
EP02732004A Withdrawn EP1405171A4 (en) | 2001-06-04 | 2002-06-03 | Method and apparatus to use task priority to scale processor performance |
Country Status (3)
Country | Link |
---|---|
EP (1) | EP1405171A4 (en) |
CA (1) | CA2469451A1 (en) |
WO (1) | WO2002099639A1 (en) |
Families Citing this family (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7146511B2 (en) * | 2003-10-07 | 2006-12-05 | Hewlett-Packard Development Company, L.P. | Rack equipment application performance modification system and method |
CN1327349C (en) * | 2005-06-13 | 2007-07-18 | 浙江大学 | Task level resource administration method for micro-kernel embedded real-time operation systems |
CN100383742C (en) * | 2006-04-07 | 2008-04-23 | 浙江大学 | Implementation method for real-time task establishment in Java operating system |
US8102552B2 (en) | 2008-04-03 | 2012-01-24 | Sharp Laboratories Of America, Inc. | Performance monitoring and control of a multifunction printer |
US8392924B2 (en) | 2008-04-03 | 2013-03-05 | Sharp Laboratories Of America, Inc. | Custom scheduling and control of a multifunction printer |
US9547540B1 (en) | 2015-12-21 | 2017-01-17 | International Business Machines Corporation | Distributed operating system functions for nodes in a rack |
CN105573763B (en) * | 2015-12-23 | 2018-07-27 | 电子科技大学 | A kind of Embedded System Modeling method for supporting RTOS |
Family Cites Families (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5117360A (en) * | 1990-03-28 | 1992-05-26 | Grumman Aerospace Corporation | Joint surveillance target attack radar system (JSTARS) |
US5542088A (en) * | 1994-04-29 | 1996-07-30 | Intergraph Corporation | Method and apparatus for enabling control of task execution |
US5623647A (en) * | 1995-03-07 | 1997-04-22 | Intel Corporation | Application specific clock throttling |
IL116708A (en) * | 1996-01-08 | 2000-12-06 | Smart Link Ltd | Real-time task manager for a personal computer |
US6272544B1 (en) * | 1998-09-08 | 2001-08-07 | Avaya Technology Corp | Dynamically assigning priorities for the allocation of server resources to completing classes of work based upon achievement of server level goals |
-
2002
- 2002-06-03 WO PCT/US2002/017309 patent/WO2002099639A1/en not_active Application Discontinuation
- 2002-06-03 CA CA002469451A patent/CA2469451A1/en not_active Abandoned
- 2002-06-03 EP EP02732004A patent/EP1405171A4/en not_active Withdrawn
Non-Patent Citations (4)
Title |
---|
GOVIL K ET AL: "COMPARING ALGORITHMS FOR DYNAMIC SPEED-SETTING OF A LOW-POWER CPU", PROCEEDINGS OF THE FIRST ANNUAL INT. CONF ON MOBILE COMPUTING AND NETWORKING, 27 June 1997 (1997-06-27), pages 1 - 13, XP002306321 * |
See also references of WO02099639A1 * |
SHIN YOUNGSOO ET AL: "Power conscious fixed priority scheduling for hard real-time systems", PROC DES AUTOM CONF; PROCEEDINGS - DESIGN AUTOMATION CONFERENCE 1999 IEEE, PISCATAWAY, NJ, USA, 1999, pages 134 - 139, XP002333591 * |
YAO F ET AL: "A scheduling model for reduced CPU energy", FOUNDATIONS OF COMPUTER SCIENCE, 1995. PROCEEDINGS., 36TH ANNUAL SYMPOSIUM ON MILWAUKEE, WI, USA 23-25 OCT. 1995, LOS ALAMITOS, CA, USA,IEEE COMPUT. SOC, US, 23 October 1995 (1995-10-23), pages 374 - 382, XP010166714, ISBN: 0-8186-7183-1 * |
Also Published As
Publication number | Publication date |
---|---|
CA2469451A1 (en) | 2002-12-12 |
EP1405171A1 (en) | 2004-04-07 |
WO2002099639A1 (en) | 2002-12-12 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US6360243B1 (en) | Method, device and article of manufacture for implementing a real-time task scheduling accelerator | |
US6430593B1 (en) | Method, device and article of manufacture for efficient task scheduling in a multi-tasking preemptive priority-based real-time operating system | |
US7730340B2 (en) | Method and apparatus for dynamic voltage and frequency scaling | |
KR100864964B1 (en) | Arithmetic Processing System and Arithmetic Processing Control Method, Task Management System and Task Management Method, and Storage Medium | |
US7051219B2 (en) | System and apparatus for adjusting a clock speed based on a comparison between a time required for a scheduler function to be completed and a time required for an execution condition to be satisfied | |
US5517643A (en) | Method of allocating memory among a plurality of processes of a computer system | |
US6128672A (en) | Data transfer using software interrupt service routine between host processor and external device with queue of host processor and hardware queue pointers on external device | |
US6298448B1 (en) | Apparatus and method for automatic CPU speed control based on application-specific criteria | |
US9632822B2 (en) | Multi-core device and multi-thread scheduling method thereof | |
US7137115B2 (en) | Method for controlling multithreading | |
JP5311234B2 (en) | Computer system and its operation method | |
US7412590B2 (en) | Information processing apparatus and context switching method | |
JP4490298B2 (en) | Processor power control apparatus and processor power control method | |
US20050262365A1 (en) | P-state feedback to operating system with hardware coordination | |
Li et al. | Enhanced parallel application scheduling algorithm with energy consumption constraint in heterogeneous distributed systems | |
EP1426861A2 (en) | Resource management system in user-space | |
US20030177163A1 (en) | Microprocessor comprising load monitoring function | |
JPH07168726A (en) | Scheduling method for electronic computer and multiprocess operating system | |
US8997106B2 (en) | Method of using tickets and use cost values to permit usage of a device by a process | |
WO2002099639A1 (en) | Method and apparatus to use task priority to scale processor performance | |
JPWO2019215795A1 (en) | Information processing equipment, tuning method and tuning program | |
Zuberi et al. | EMERALDS-OSEK: a small real-time operating system for automotive control and monitoring | |
KR20090070071A (en) | Small low-power embedded system and preemption avoidance method thereof | |
Shin et al. | Embedded system design framework for minimizing code size and guaranteeing real-time requirements | |
KR20070092559A (en) | Apparatus and method for executing thread scheduling in virtual machine |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PUAI | Public reference made under article 153(3) epc to a published international application that has entered the european phase |
Free format text: ORIGINAL CODE: 0009012 |
|
17P | Request for examination filed |
Effective date: 20031222 |
|
AK | Designated contracting states |
Kind code of ref document: A1 |
Designated state(s): AT BE CH CY DE DK ES FI FR GB GR IE IT LI LU MC NL PT SE TR |
|
AX | Request for extension of the european patent |
Extension state: AL LT LV MK RO SI |
|
RIC1 | Information provided on ipc code assigned before grant |
Ipc: 7G 06F 9/46 B |
Ipc: 7G 06F 9/00 A |
|
A4 | Supplementary search report drawn up and despatched |
Effective date: 20050712 |
|
GRAP | Despatch of communication of intention to grant a patent |
Free format text: ORIGINAL CODE: EPIDOSNIGR1 |
|
RIN1 | Information on inventor provided before grant (corrected) |
Inventor name: DIBBLE, PETER Inventor name: KAPLAN, KENNETH |
|
STAA | Information on the status of an ep patent application or granted ep patent |
Free format text: STATUS: THE APPLICATION IS DEEMED TO BE WITHDRAWN |
|
18D | Application deemed to be withdrawn |
Effective date: 20100202 |