WO2006016283A2 - Task scheduling by means of a context switch overhead time table - Google Patents
Task scheduling by means of a context switch overhead time table
- Publication number
- WO2006016283A2 (PCT/IB2005/052320)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- task
- time
- estimated
- context switch
- processing entity
- Prior art date
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/30—Arrangements for executing machine instructions, e.g. instruction decode
- G06F9/38—Concurrent instruction execution, e.g. pipeline or look ahead
- G06F9/3836—Instruction issuing, e.g. dynamic instruction scheduling or out of order instruction execution
- G06F9/3851—Instruction issuing, e.g. dynamic instruction scheduling or out of order instruction execution from multiple instruction streams, e.g. multistreaming
Definitions
- the present invention relates to scheduling of tasks on processor entities and context switching.
- the first can be called the "trial and error" view.
- the second task can read data from the first task only if data has reached the second task from the first task.
- the budget cycle is, for instance, 100 000 cycles long. If the second task is about to read data from the first task and the data has not yet reached it, the second task waits until the data arrives or until the budget cycle has expired. The second task tries to read, succeeding if the data has arrived and failing otherwise. A considerable amount of the budget cycle may thus be spent waiting, either because the data never arrives or because the scheduled task has a duration considerably shorter than 100 000 cycles, in which case the remaining time until the 100 000 cycles have expired is spent waiting.
- a context switch between a first task and a second task is made as soon as the first task is blocked on an input queue, because no data is available, or on an output queue, because the data queue is full.
- This second view performs a context switch to another task irrespective of when data may become available to the task and when data space may be made available for the output of the executing task.
- this object is achieved by a method for scheduling on a processing entity a first task and a second task, where the first task is executing on the processing entity but is blocked from making further progress, where the second task is ready to execute on the processing entity and intended to replace the first task, the method comprising the steps of: obtaining an estimated context switch overhead time related to performing a context switch from the first task to the second task, determining an estimated blocking time for the first task, comparing the estimated blocking time and the estimated context switch overhead time, and performing a context switch from the first task to the second task or continuing executing the first task on the processing entity, in dependence of the comparison between the estimated blocking time and the estimated context switch overhead time.
- a task scheduler for scheduling on a processing entity a first task and a second task, where the first task is executing on the processing entity but is blocked from making further progress, where the second task is ready to execute on the processing entity and intended to replace the first task, comprising a task data reading unit, arranged to receive task data, a blocking time unit, arranged to determine an estimated blocking time for the first task, a comparing unit, arranged to compare the estimated blocking time for the first task and an estimated context switch time from the first task to the second task, and a control unit connected to the task data reading unit, the blocking time unit and the comparing unit, said control unit being arranged to control the task scheduling and which control unit continues executing the first task on the processing entity or performs the context switch from the first task to the second task in dependence of the comparing unit.
- a computer program product comprising a computer readable medium, having thereon: computer program code means, to make a computer execute, when said computer program code means is loaded in the computer: obtaining an estimated context switch overhead time related to performing a context switch from a first task to a second task, within scheduling on a processing entity the first task and the second task, where the first task is executing on the processing entity but is blocked from making further progress, and where the second task is ready to execute on the processing entity and intended to replace the first task, determining an estimated blocking time for the first task, comparing the estimated blocking time and the estimated context switch overhead time, enabling performing the context switch from the first task to the second task or continuing executing the first task on the processing entity, in dependence of the comparison between the estimated blocking time and the estimated context switch overhead time.
- this object is achieved by a computer program element comprising computer program code means to make a computer execute: obtaining an estimated context switch overhead time related to performing a context switch from a first task to a second task, within scheduling on a processing entity the first task and the second task, where the first task is executing on the processing entity but is blocked from making further progress, and where the second task is ready to execute on the processing entity and intended to replace the first task, determining an estimated blocking time for the first task, comparing the estimated blocking time and the estimated context switch overhead time, enabling performing the context switch from the first task to the second task or continuing executing the first task on the processing entity, in dependence of the comparison between the estimated blocking time and the estimated context switch overhead time.
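- As a concrete illustration of the comparison described in the preceding paragraphs, the following is a minimal sketch in C of the decision rule. The type, function names and example cycle counts are assumptions for illustration only; in the described system the overhead would come from the context switch overhead table and the blocking time from the EPT/ECT counters introduced further below.

```c
#include <stdio.h>

/* Illustrative only: EBT would come from the EPT/ECT counters, ECSOT from the
 * context switch overhead table described in this application. */
typedef struct {
    const char *name;
    unsigned    ebt;    /* estimated blocking time, in CPU cycles */
} task_t;

/* Decide whether to switch from a blocked first task to a ready second task. */
static void schedule(const task_t *first, const task_t *second, unsigned ecsot)
{
    if (ecsot <= first->ebt)
        printf("switch %s -> %s (ECSOT %u <= EBT %u)\n",
               first->name, second->name, ecsot, first->ebt);
    else
        printf("keep executing %s (ECSOT %u > EBT %u)\n",
               first->name, ecsot, first->ebt);
}

int main(void)
{
    task_t fourth = { "task 208", 150 };  /* blocked, expected to unblock in 150 cycles */
    task_t first  = { "task 202", 0 };    /* ready to run on the same processor */

    schedule(&fourth, &first, 200);       /* overhead exceeds blocking time: keep task 208 */
    schedule(&fourth, &first, 100);       /* overhead is smaller: switch to task 202 */
    return 0;
}
```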
- the gist of this invention is to consider the time required to perform a context switch by using context switch overhead time in the scheduling of a first task and a second task on a processing entity.
- the present invention has the following overall advantages:
- One advantage of the present invention as compared to prior art is that the performance of processing entities is increased, because fewer processing cycles are required to perform said processing tasks.
- Another advantage of the present invention is that the computation speed when computing processing tasks is increased due to a decreased CPU-load.
- Claims 2 and 12 are directed toward performing a context switch if the estimated context switch overhead time is shorter than or equal to the estimated blocking time.
- Claims 3 and 11 are directed toward continuing executing the first task if the estimated context switch overhead time is longer than the estimated blocking time.
- Claims 5 and 13 are directed toward determining an expected data production time for the third task.
- An advantage of these claims is that the expected production time may also serve as a performance metric for carrying out a system performance analysis.
- Claims 6 and 16 are directed toward determining an expected data consumption time for the fourth task.
- Fig. 1 shows a schematic representation of a task scheduler according to one embodiment of the present invention
- Fig. 2 shows one example of mapping of a task network on processing entities according to the present invention
- Fig. 3 presents a flow-chart of the method for task scheduling of a first and a second task on a processing entity according to one embodiment of the present invention
- Fig. 4 presents one example of a context switch overhead table according to the present invention.
- Fig. 5 shows a computer program product according to one embodiment of the present invention.
- the invention relates in general to a method and a device for task scheduling on processing entities, and in particular to a method and a device for scheduling on processing entities a first and a second task and context switching between the first and the second task on a processing entity.
- Scheduling of input/output queue based communication processes or tasks is of importance, especially on single task processing entities.
- Input/output queue based communication tasks are independent relative to one another and may be blocked on either their input or their output. Since the processing steps contained in the tasks may need different amounts of time to finish, an intelligent scheduling of these tasks increases the processing performance of the processing entities.
- Fig. 1 is a schematic representation of a task scheduler 100 according to one embodiment of the present invention.
- This task scheduler 100 can schedule one task for another task by using a method according to one embodiment of the present invention so as to enable improvement of the performance of a processing entity or a processor.
- This task scheduler 100 as presented in Fig. 1 comprises a task data read/write unit 102, a memory unit 104, a blocking time unit 106, a comparing unit 108, a control unit 110, a time counting unit 112 and a context switch overhead determining unit 114.
- the control unit 110 is connected to all other units, that is, connected to the task data read/write unit 102, the memory unit 104, the blocking time unit 106, the comparing unit 108, the time counting unit 112 and the context switch overhead determining unit 114.
- the task data read/write unit 102 is also connected to the context switch overhead determining unit 114 as well as to the memory unit 104.
- the memory unit 104 is further connected to the comparing unit 108, the blocking time unit 106 and the context switch overhead determining unit 114.
- the blocking time unit 106 is yet further connected to the comparing unit 108.
- the task data read/write unit 102 is arranged to read and write task data, to and from, respectively, lists, tables, data sets etc. According to a preferred embodiment of the present invention the task data read/write unit reads task data from task data lists and writes task data to task data lists. The task data read/write unit is also arranged to store task data in the memory unit 104.
- the blocking time unit 106 is provided to determine an estimate of how long a specific task is expected to be blocked. Details regarding how this is done are described below.
- the context switch overhead determining unit 114 is arranged to calculate the estimated time that is required to perform a context switch from one task to another task.
- the comparing unit 108 is arranged to compare the estimated blocking time and the estimated context switch overhead time.
- the time counting unit 112 is provided to count time duration. According to one embodiment of the present invention a time duration value is decremented until the time has expired, meaning the time value has become zero. Details on how the scheduling of tasks on a processing entity is performed are also described below.
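- Purely as an illustration of how these units relate, the composition of the scheduler can be pictured as a single structure holding one member per unit; the type names in the sketch below are invented for this illustration and do not appear in the application.

```c
/* Illustrative composition of the task scheduler 100 of Fig. 1. */
typedef struct {
    struct task_data_rw_unit  *task_data_rw;   /* 102: reads/writes task data lists                */
    struct memory_unit        *memory;         /* 104: stores task data and overhead tables        */
    struct blocking_time_unit *blocking_time;  /* 106: estimates how long a task stays blocked     */
    struct comparing_unit     *comparing;      /* 108: compares blocking time with switch overhead */
    struct control_unit       *control;        /* 110: decides to switch or to keep executing      */
    struct time_counting_unit *time_counting;  /* 112: provides the EPT/ECT counters               */
    struct cs_overhead_unit   *cs_overhead;    /* 114: determines context switch overhead times    */
} task_scheduler_t;
```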
- One example of a task network is presented in Fig. 2, showing the task network in which the tasks are mapped on their respective processors.
- the ovals 202, 204, 206, 208 and 210 represent the first, second, third, fourth and fifth tasks or processes, respectively, which are connected to each other by first-in- first-out (FIFO) registers forming the task network.
- the FIFO register 212, or data transport channel, connects the second task 204 and the fourth task 208, and the FIFO register 214 connects the fourth task 208 and the fifth task 210.
- Fig. 2 also shows on which processors the tasks may be scheduled, that is, on which processor the tasks are mapped, as mentioned above.
- the first task 202 and the fourth task 208 are mapped on the first processor 216, whereas the second task 204 and the third task 206 are mapped on the second processor 218.
- the fifth task 210 is moreover mapped on the third processor 220.
- the task network in Fig. 2 contains five tasks 202, 204, 206, 208 and 210 but in practice a greater number of tasks may be running on multiple processors.
- both the first task 202 and the fourth task 208 are mapped on the first processor 216.
- the processor is of a single task processor type which means that only one task may be executing on the processor at a time. It is therefore important to determine which task to choose and when to choose this specific task for processing. There are several ways in which the tasks can be scheduled.
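- For reference, the Fig. 2 example can be written down as plain data, covering the two FIFO registers named in the description and the task-to-processor mapping; the types below are invented for this sketch.

```c
/* Fig. 2 example: FIFO connections and task-to-processor mapping. */
typedef struct { int producer, consumer; } fifo_t;
typedef struct { int task, processor; }   mapping_t;

static const fifo_t fifos[] = {
    { 204, 208 },   /* FIFO 212: second task -> fourth task */
    { 208, 210 },   /* FIFO 214: fourth task -> fifth task  */
};

static const mapping_t mappings[] = {
    { 202, 216 }, { 208, 216 },   /* first and fourth task share the first processor */
    { 204, 218 }, { 206, 218 },   /* second and third task on the second processor   */
    { 210, 220 },                 /* fifth task on the third processor               */
};
```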
- the present invention relates to scheduling of a first and a second task on a processing entity.
- With reference to Fig. 3, showing a flow-chart of the method according to one embodiment of the present invention, the method for scheduling a first task and a second task on a processing entity is now explained.
- the method applies to a situation in which a first task is executing on a processor, but is blocked from making further progress and a second task is ready to execute on the same processor and intended to replace the first task.
- this situation corresponds, for instance, to the fourth task 208 being blocked from making further progress on the first processor 216 while the first task 202 is ready to execute on the same first processor.
- the method, following the flowchart in Fig. 3, starts with step 302, determining the estimated context switch overhead time.
- This step implies determining the estimated overhead time for switching context or task for at least two tasks that are mapped on the processor that is executing the blocked task.
- the estimated overhead time for switching context or task is determined for all tasks that are mapped on the pertinent processor.
- determining the estimated context switch overhead time means determining the estimated overhead time that is required to switch from executing the fourth task 208 to executing the first task 202, since the first task 202 is the only task besides the blocked fourth task 208 that is mapped on the same first processor 216.
- the context switch overhead time is in principle dependent on how far the tasks have progressed, that is, the processing state reached by each task, and on the number of said states to be stored for the task that is to be stopped executing and the number of states to be restored for the task that is to be started executing.
- the communication rate is also dependent on the status of a cache memory that may be used and on the bandwidth of the data channel through which the data is communicated. An increase in the probability of satisfying a request to read from the cache memory without the need to use a main memory (that is, the cache hit probability) and an increase in the communication bandwidth hence decrease the context switch overhead time.
- As was described in connection with Fig. 1 when presenting the task scheduler according to one embodiment of the present invention, the context switch overhead determining unit 114 is arranged to determine the estimated context switch overhead times. According to one embodiment of the present invention the context switch overhead determining unit creates a context switch overhead table containing the estimated context switch overhead times that are required to switch from one task to another task, among tasks that are mapped on the same processor.
- a context switch overhead table may schematically look like the table presented in Fig. 4, for a context switch from task A to task B.
- the horizontal numbers are for instance the identities of the A tasks and the vertical numbers are for instance the identities of the B tasks.
- the table is symmetric. For this reason only one half of the AxB matrix is shown in Fig. 4. Also, switches between A and A, and B and B, are nonsense and are therefore not represented in the table.
- In some cases, however, the required context switch overhead table will not be symmetric, requiring the complete AxB matrix to include all possible context switch overhead times.
- Such a non-symmetric matrix may be the case for binary compatible processors in a multiprocessor system. Assuming a symmetric AxB matrix, for a network containing n processes mapped on one processor, (n²−n)/2 context switch overhead time entries are thus required to cover all possible context switches which make sense.
- the context switch overhead table is computed beforehand in off-line simulations and obtained in the form of pertinent estimated context switch overhead times. Time values calculated and tables created beforehand may be provided, for example, in a memory unit.
- the estimated context switch overhead times may be estimated off-line.
- the estimated context switch overhead times may be estimated on-line.
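- The following sketch illustrates one way such a symmetric table could be stored and queried; the packed upper-triangular layout, the helper names and the example values are assumptions for illustration, not details given in the application.

```c
#include <stdio.h>

#define NUM_TASKS 2   /* e.g. the two tasks mapped on the first processor 216 */

/* A symmetric table needs only (n*n - n)/2 entries for n tasks on one processor. */
static unsigned overhead_table[(NUM_TASKS * NUM_TASKS - NUM_TASKS) / 2];

/* Packed index of the unordered pair (a, b), a != b, in upper-triangular storage. */
static unsigned pair_index(unsigned a, unsigned b)
{
    if (a > b) { unsigned t = a; a = b; b = t; }   /* symmetric: direction is ignored */
    return a * NUM_TASKS - a * (a + 1) / 2 + (b - a - 1);
}

static void set_overhead(unsigned a, unsigned b, unsigned cycles)
{
    overhead_table[pair_index(a, b)] = cycles;
}

static unsigned get_overhead(unsigned a, unsigned b)
{
    return overhead_table[pair_index(a, b)];
}

int main(void)
{
    set_overhead(0, 1, 120);   /* estimated off-line: switching between task 0 and task 1 costs 120 cycles */
    printf("overhead(0 -> 1) = %u cycles\n", get_overhead(0, 1));
    printf("overhead(1 -> 0) = %u cycles\n", get_overhead(1, 0));
    return 0;
}
```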
- the step of determining the estimated context switch overhead time, step 302, is followed in the method by the step of determining the estimated blocking time, step 304. With reference to Fig. 2, the determination of the estimated blocking time will now be explained.
- the input/output queue based tasks may be blocked on either their input or their output.
- the fourth task 208 may be blocked on its input by the FIFO register 212 being empty on its output. If there is no data on the output of the FIFO register 212, the fourth task 208 cannot progress in the execution. As data is not available on the output of the FIFO register 212, new data has to be produced on the input of the FIFO register 212. The issue is now how much time is required until new data is produced by the second task 204, in order to supply the FIFO register 212 with data on its input and to make data available to the fourth task 208 on the output of the FIFO register 212.
- each FIFO register is provided with one input counting unit and one output counting unit. These input and output counting units are provided by the time counting unit 112, of the task scheduler.
- the input counting unit is an estimated production time (EPT) counter for data to be communicated in the FIFO register and the output counting unit is an estimated consumption (ECT) time counter for data to be communicated in the FIFO register.
- the EPT counter at the input of the FIFO register 214 is set to an appropriate time value based on the current status of the fourth task 208.
- the EPT counter of the FIFO register 214 will be set to its maximum binary value, all 1's.
- the 'appropriate time value' of the EPT counter of the FIFO register 214 is an appropriate number of CPU cycles needed until the fourth task 208 starts to produce data on the FIFO register 214.
- This number is loaded 1) whenever a task is very close to producing the necessary data, say 10-20 cycles away, or 2) at the beginning of the processing step, or 3) at any other suitable instant which ensures timing correctness.
- This loading of the number also enables the counters. This means that the EPT counter starts decrementing automatically with every tick of the CPU clock.
- the ECT counter of the FIFO register 214 will be loaded appropriately, in line with the above description, based on the status of the fifth task 210.
- the ECT counter of the FIFO register 214 is disabled, whenever the fifth task 210 is not executing on processing element 220.
- the EPT counter of the FIFO register 214 is disabled, whenever the fourth task 208 is not executing on processing element 216.
- the ECT and EPT counters can be either enabled or disabled. When enabled, they auto-decrement on each clock cycle, whereas when disabled, the contents of the counter are irrelevant and not decremented. When enabled, the counter is automatically decremented until it reaches zero, after which it stops decrementing but remains enabled.
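- A minimal software model of one such counter is sketched below; in the described embodiments the counters are provided by the time counting unit 112 and may be implemented in hardware, so the type and function names here are purely illustrative.

```c
#include <stdbool.h>
#include <stdint.h>

#define COUNTER_MAX UINT32_MAX   /* "all 1's": no estimate available yet */

/* One EPT or ECT counter attached to a FIFO register. */
typedef struct {
    uint32_t value;    /* remaining cycles until production/consumption is expected   */
    bool     enabled;  /* disabled while the producing/consuming task is not executing */
} fifo_time_counter_t;

static void counter_load(fifo_time_counter_t *c, uint32_t cycles)
{
    c->value = cycles;       /* loading a value also enables the counter */
    c->enabled = true;
}

static void counter_disable(fifo_time_counter_t *c)
{
    c->enabled = false;      /* contents are irrelevant while disabled */
}

/* Called once per tick of the CPU clock. */
static void counter_tick(fifo_time_counter_t *c)
{
    if (c->enabled && c->value > 0)
        c->value--;          /* stops at zero but remains enabled */
}

static bool counter_expired(const fifo_time_counter_t *c)
{
    return c->enabled && c->value == 0;
}

int main(void)
{
    fifo_time_counter_t ept = { COUNTER_MAX, false };

    counter_load(&ept, 3);                  /* producer is 3 cycles from producing data */
    for (int i = 0; i < 5; i++)
        counter_tick(&ept);

    int expired = counter_expired(&ept);    /* 1: data is now expected to be available */
    counter_disable(&ept);                  /* e.g. when the producing task is switched out */
    return expired ? 0 : 1;
}
```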
- When a FIFO register is empty on its output, thus preventing progress of a task, it is estimated when data can be produced by the adjacent task that is connected to the blocked task by the blocking FIFO register, that is, the EPT counter value as described above is estimated.
- the adjacent task which is to be investigated is the second task 204. Once this second task 204 starts producing data on its output, data becomes available on the output of the FIFO register 212. As soon as data is made available on the output of a FIFO, the task to which the output of the FIFO is connected starts progressing if the task is executing and if the task is not blocked on its output.
- the time that is required until the second task 204 can start producing data thus determines the blocking time of the fourth task 208.
- the time until the second task 204 can start producing data is therefore loaded into the EPT counter of the FIFO register 212.
- This time is hence also the estimated blocking time for the fourth task 208, since this fourth task 208 can start progressing after this time because data will be available on the output of the FIFO register 212.
- In case the fourth task 208 is instead blocked from progressing due to the input of the FIFO register 214 being full, the fifth task 210 is investigated and analyzed to determine when this task may start consuming data.
- the ECT counter of the FIFO register 214 is set to the expected consumption time driven by the status of task 210, that is the time until the fifth task starts consuming data.
- it is this ECT time estimate that is of relevance to the blocking of the fourth task 208 in this case, where the input of the FIFO register 214 is blocked, since the FIFO register 214 will be unblocked when the ECT time has expired.
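- In other words, the estimated blocking time of the blocked task is simply the current value of the relevant counter of the blocking FIFO register; a small illustrative sketch, with assumed type and field names, is:

```c
#include <stdint.h>

typedef enum { BLOCKED_ON_INPUT, BLOCKED_ON_OUTPUT } block_reason_t;

typedef struct {
    uint32_t ept_cycles;   /* EPT counter: cycles until the producing task supplies data    */
    uint32_t ect_cycles;   /* ECT counter: cycles until the consuming task frees data space */
} fifo_register_t;

/* Estimated blocking time (EBT) of the task blocked by this FIFO register. */
uint32_t estimated_blocking_time(const fifo_register_t *fifo, block_reason_t reason)
{
    return (reason == BLOCKED_ON_INPUT) ? fifo->ept_cycles   /* FIFO empty: wait for the producer */
                                        : fifo->ect_cycles;  /* FIFO full:  wait for the consumer */
}
```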
- in step 306, a comparison is made between the estimated context switch overhead time and the estimated blocking time that was determined in step 304.
- This comparison is performed by the comparing unit 108 that is connected to the blocking time unit 106, having access to the estimated blocking time, and the memory unit 104, that has access to the estimated context switch overhead time via the context switch overhead determining unit 114.
- if the control unit 110 obtains, from the comparison performed by the comparing unit 108, data indicating that the Estimated Context Switch Overhead Time, ECSOT, is shorter than or equal to the Estimated Blocking Time, EBT, the control unit 110 decides to perform a context switch to the second task, step 308.
- this scenario corresponds to a situation in which the estimated context switch overhead time from the fourth task 208 to the first task 202 is shorter than or equal to the estimated blocking time, EBT of the fourth task 208.
- the current or intermediate states that are reached by the fourth task 208 are stored in the memory unit 104 and the state values of the first task 202 are restored from the memory unit 104, so that this first task 202 can be restarted with relevant data.
- a task may thus be restarted from an intermediate state or states after restoring said state data. There is thus no need to restart the task from the beginning of said task.
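- The store/restore step can be pictured as in the following sketch; the size and layout of the stored state are not specified in the application and are assumed here purely for illustration.

```c
#include <string.h>
#include <stdint.h>

#define NUM_REGS 16   /* assumed size of the stored intermediate state */

typedef struct {
    uint32_t regs[NUM_REGS];   /* intermediate processing state reached by a task */
    uint32_t pc;               /* point from which the task will resume */
} task_state_t;

/* Store the state of the outgoing task in memory and restore the stored state of
 * the incoming task, so that it resumes from its intermediate state rather than
 * from the beginning. */
void switch_context(task_state_t *cpu, task_state_t *outgoing_save, const task_state_t *incoming_save)
{
    memcpy(outgoing_save, cpu, sizeof *outgoing_save);   /* e.g. save task 208 to the memory unit 104 */
    memcpy(cpu, incoming_save, sizeof *cpu);             /* e.g. restore task 202 from the memory unit 104 */
}
```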
- If instead the estimated context switch overhead time is longer than the estimated blocking time, the control unit 110 determines to continue executing the blocked fourth task 208, despite the fourth task 208 being blocked from progressing by the blocked FIFO register 212.
- This scenario corresponds to step 310 in the method according to an embodiment of the present invention. As performing a context switch would take more time, or be more costly, than the time expected to be required until data is made available on the output of the FIFO register 212, after which the fourth task 208 can continue progressing, the blocked task continues executing.
- the fourth task 208 is executing and therefore active, but not enabled since it cannot progress.
- the estimated blocking time for the fourth task 208 is hence set to the estimated production time of the second task 204, as described above.
- the current value of EPT counter of the FIFO register 212 indicates the expectancy time until data is available.
- the method according to the present invention comprises the step of counting expectancy time, step 312. This implies continuing counting or decrementing the current EPT counter.
- the EPT counter is decrementing its value as time elapses. In an alternative, the EPT counter increments a corresponding value until this value reaches the set value.
- the second task 204 starts producing data and data becomes available on the output of the FIFO register 212.
- As the fourth task 208 is in its active state awaiting data, the task starts progressing, that is, it is enabled, when data becomes available on the output of the FIFO register 212. In the method according to one embodiment of the present invention, this is step 314, starting progressing the first task when the expectancy time has expired. The fourth task 208 is then obviously unblocked.
- In the case where the fourth task 208 is blocked on the input of the FIFO register 214, the blocking time is dependent on the time until data is consumed by the fifth task 210, that is, the consumption time of the fifth task 210. If the control unit 110 decides to continue executing the blocked task, because it is estimated that it takes more time or is more costly to perform a context switch, step 310 in the method, the current value of the ECT counter of the FIFO register 214 indicates the time that is required until the fifth task 210 starts consuming data. When this time has expired in the ECT counter, data space is made available on the input of the FIFO register 214, and the fourth task 208 is unblocked and enabled and therefore starts progressing.
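- The "continue executing" branch can be summarized by the small sketch below, which simply decrements the relevant counter each clock tick and enables the task when the counter expires; the names are illustrative only.

```c
#include <stdbool.h>
#include <stdint.h>

typedef struct {
    uint32_t expectancy_cycles;   /* current EPT (or ECT) counter value of the blocking FIFO */
    bool     enabled;             /* true once the blocked task can progress again */
} blocked_task_t;

/* Called once per CPU clock tick while the blocked task is kept on the processor. */
void count_expectancy_time(blocked_task_t *t)
{
    if (t->expectancy_cycles > 0)
        t->expectancy_cycles--;   /* step 312: counting expectancy time */
    else
        t->enabled = true;        /* step 314: data (or data space) available, task unblocked */
}
```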
- the described method of the present invention may also be performed by loading a computer program product, as shown in Fig. 5, having thereon computer program code means to make a computer execute the method when said computer program means is loaded in a computer.
- This computer typically comprises a control unit, a memory unit and an input/output unit.
- the computer program product as shown in Fig. 5 may be a CD-ROM, a DVD-disc, an MD-disc or any other kind of computer program product.
- the computer program product may be a portable memory, such as a flash-based memory.
- time is progressed, incremented or decremented, in terms of cycle units or cycles. Since each cycle refers to a small time duration, control cycles or other confirmation cycles may be incorporated between various specified moments or events, to certify the function of the method and the task scheduler according to the present invention, without deviating from the invention.
- the task scheduler of the present invention may moreover be implemented in hardware (HW) and/or software (SW).
- the task scheduler is comprised in a multiprocessor system, such as a digital signal processing (DSP) system.
- At least the EPT- and ECT- counters of the task scheduler are implemented in hardware.
- the method for scheduling a first and a second task on a processing entity and the task scheduler according to the present invention have the following advantages:
- the method and the task scheduler according to the present invention have the advantage, as compared to the prior art, of providing an increased performance of the processing entities. This is due to the fact that fewer processing cycles are required to perform the processing tasks to be computed.
- the method and the task scheduler according to the present invention provide an increased computation speed when computing processing tasks. This is due to the decreased CPU load of the processing entities when processing these tasks, as compared to methods and devices of the known prior art. It shall be noted that:
- “a” or “an” does not exclude a plurality of the respective items.
- a single processor or other processing unit may fulfill the functions of several units recited in the claims.
- the task scheduler as presented in Fig. 1 may moreover be designed in many different ways.
- One example being that the memory unit 104 is incorporated in the control unit 110.
- Another example is that the blocking time unit 106 is incorporated in the control unit 110.
- Various other designs of the task scheduler, for instance designing the task scheduler as comprising a different number of units, may be envisaged without deviating from the scope of this invention.
- the number of connections and the way the different units are connected to each other may also be designed differently, still without departing from the scope of protection of this invention.
- the step of determining an estimated context switch overhead time, step 302, comprises determining an estimated context switch overhead time including a user-specified delay time.
- the user-specified time is a delay upon which a user-specified criterion is based when the estimated context switch overhead time and the estimated blocking time are compared by the comparing unit 108.
- One motivation for including a user-specified time delay in the estimated context switch overhead time is that reduced power consumption is achieved at the expense of a small, acceptable delay. This delay is typically less than a few tens of CPU cycles or clock ticks.
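- Under one reading of this embodiment, the criterion then simply becomes the comparison sketched below, where the user-specified delay is added to the estimated overhead so that the scheduler switches less eagerly; the function and parameter names are illustrative only.

```c
/* Returns non-zero if a context switch is still worthwhile when the estimated
 * context switch overhead time (ecsot) is padded with a user-specified delay. */
int should_switch(unsigned ecsot, unsigned ebt, unsigned user_delay_cycles)
{
    return (ecsot + user_delay_cycles) <= ebt;   /* fewer switches -> reduced power consumption */
}
```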
- the relative order of various steps within the method as presented in Fig. 3 may be changed without deviating from the scope of the present invention.
- the relative order of the step of determining the estimated context switch overhead time, step 302, and the step of determining the estimated blocking time, step 304 can be reversed with the consequence that step 304 precedes step 302 in Fig. 3.
Landscapes
- Engineering & Computer Science (AREA)
- Software Systems (AREA)
- Theoretical Computer Science (AREA)
- Multimedia (AREA)
- Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Management, Administration, Business Operations System, And Electronic Commerce (AREA)
- Debugging And Monitoring (AREA)
- Data Exchanges In Wide-Area Networks (AREA)
Abstract
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
EP04103793 | 2004-08-06 | ||
EP04103793.8 | 2004-08-06 |
Publications (2)
Publication Number | Publication Date |
---|---|
WO2006016283A2 true WO2006016283A2 (fr) | 2006-02-16 |
WO2006016283A3 WO2006016283A3 (fr) | 2006-10-19 |
Family
ID=35034672
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/IB2005/052320 WO2006016283A2 (fr) | 2004-08-06 | 2005-07-13 | Task scheduling by means of a context switch overhead time table |
Country Status (1)
Country | Link |
---|---|
WO (1) | WO2006016283A2 (fr) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR101353065B1 (ko) | 2011-11-11 | 2014-01-20 | 재단법인대구경북과학기술원 | Task management apparatus and management method using a task scheduling table of a real-time operating system |
WO2023200636A1 (fr) * | 2022-04-11 | 2023-10-19 | Snap Inc. | Intelligent preemption system |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2002071218A2 (fr) * | 2001-03-05 | 2002-09-12 | Koninklijke Philips Electronics N.V. | Method and system for withdrawing budget from a blocking task |
WO2003052597A2 (fr) * | 2001-12-14 | 2003-06-26 | Koninklijke Philips Electronics N.V. | Multiprocessor data processing system, resource allocator for a multiprocessor data processing system and corresponding task scheduling method |
US6687770B1 (en) * | 1999-03-08 | 2004-02-03 | Sigma Designs, Inc. | Controlling consumption of time-stamped information by a buffered system |
-
2005
- 2005-07-13 WO PCT/IB2005/052320 patent/WO2006016283A2/fr active Application Filing
Patent Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6687770B1 (en) * | 1999-03-08 | 2004-02-03 | Sigma Designs, Inc. | Controlling consumption of time-stamped information by a buffered system |
WO2002071218A2 (fr) * | 2001-03-05 | 2002-09-12 | Koninklijke Philips Electronics N.V. | Method and system for withdrawing budget from a blocking task |
WO2003052597A2 (fr) * | 2001-12-14 | 2003-06-26 | Koninklijke Philips Electronics N.V. | Multiprocessor data processing system, resource allocator for a multiprocessor data processing system and corresponding task scheduling method |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR101353065B1 (ko) | 2011-11-11 | 2014-01-20 | 재단법인대구경북과학기술원 | Task management apparatus and management method using a task scheduling table of a real-time operating system |
WO2023200636A1 (fr) * | 2022-04-11 | 2023-10-19 | Snap Inc. | Intelligent preemption system |
Also Published As
Publication number | Publication date |
---|---|
WO2006016283A3 (fr) | 2006-10-19 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US8959515B2 (en) | Task scheduling policy for limited memory systems | |
CN100517215C (zh) | Method and apparatus for timing and event processing in a wireless system | |
US8181047B2 (en) | Apparatus and method for controlling power management by comparing tick idle time data to power management state resume time data | |
JP4693326B2 (ja) | System and method for instruction-level multithreading in embedded processors using zero-time context switching | |
US8856791B2 (en) | Method and system for operating in hard real time | |
US20150347186A1 (en) | Method and system for scheduling repetitive tasks in o(1) | |
JP2004532444A (ja) | Control of priority and instruction rate on a multithreaded processor | |
Perkovic et al. | Randomization, speculation, and adaptation in batch schedulers | |
JP2002215599A (ja) | Multiprocessor system and control method thereof | |
CN101366012A (zh) | Method and system for interrupt distribution in a multiprocessor system | |
JP2008165798A (ja) | Processor performance management in a data processing apparatus | |
US7360216B2 (en) | Method and system for real-time multitasking | |
US20100050184A1 (en) | Multitasking processor and task switching method thereof | |
US20110004883A1 (en) | Method and System for Job Scheduling | |
JP2004110795A (ja) | Method and apparatus for performing thread replacement for optimum performance in a two-level multithreading structure | |
US8640133B2 (en) | Equal duration and equal fetch operations sub-context switch interval based fetch operation scheduling utilizing fetch error rate based logic for switching between plurality of sorting algorithms | |
JPWO2009150815A1 (ja) | Multiprocessor system | |
US20060037021A1 (en) | System, apparatus and method of adaptively queueing processes for execution scheduling | |
WO2012113232A1 (fr) | Method and device for adjusting the clock interrupt cycle | |
JP4170364B2 (ja) | Processor | |
US20060146864A1 (en) | Flexible use of compute allocation in a multi-threaded compute engines | |
Kohútka et al. | ASIC architecture and implementation of RED scheduler for mixed-criticality real-time systems | |
JP4482275B2 (ja) | Hardware architecture of a multi-mode power management system using a constant time reference for operating system support | |
US7562364B2 (en) | Adaptive queue scheduling | |
WO2006016283A2 (fr) | Task scheduling by means of a context switch overhead time table |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AK | Designated states |
Kind code of ref document: A2 Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BW BY BZ CA CH CN CO CR CU CZ DE DK DM DZ EC EE EG ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KM KP KR KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX MZ NA NG NI NO NZ OM PG PH PL PT RO RU SC SD SE SG SK SL SM SY TJ TM TN TR TT TZ UA UG US UZ VC VN YU ZA ZM ZW |
|
AL | Designated countries for regional patents |
Kind code of ref document: A2 Designated state(s): BW GH GM KE LS MW MZ NA SD SL SZ TZ UG ZM ZW AM AZ BY KG KZ MD RU TJ TM AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IS IT LT LU LV MC NL PL PT RO SE SI SK TR BF BJ CF CG CI CM GA GN GQ GW ML MR NE SN TD TG |
|
121 | Ep: the epo has been informed by wipo that ep was designated in this application | ||
NENP | Non-entry into the national phase |
Ref country code: DE |
|
122 | Ep: pct application non-entry in european phase |