WO2012069830A1 - A method and system for identifying the end of a task and for notifying a hardware scheduler thereof - Google Patents


Info

Publication number
WO2012069830A1
Authority
WO
WIPO (PCT)
Prior art keywords: task, processor, instruction, scheduler, return
Application number
PCT/GB2011/052302
Other languages
French (fr)
Inventor
Zemian Hughes
Original Assignee
Tte Systems Ltd
Priority claimed from GBGB1019890.1A external-priority patent/GB201019890D0/en
Priority claimed from GBGB1019895.0A external-priority patent/GB201019895D0/en
Application filed by Tte Systems Ltd filed Critical Tte Systems Ltd
Publication of WO2012069830A1 publication Critical patent/WO2012069830A1/en


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/4881: Scheduling strategies for dispatcher, e.g. round robin, multi-level priority queues
    • G06F9/3009: Thread control instructions
    • G06F9/542: Event management; Broadcasting; Multicasting; Notifications
    • G06F2209/543: Local (indexing scheme relating to G06F9/54)

Definitions

  • the present invention relates to a method and system for effectively identifying the end of a task, operating on a processor or the like.

Background of the invention
  • Processors are used in all industry sectors to run processes, systems, operations, etc. Processors carry out a plurality of successive and/or simultaneous tasks.
  • processors are often formed on the basis of an event triggered architecture.
  • a conventional microprocessor based system has a processor that includes a number of interrupt lines. These interrupt lines allow a process which is currently being executed on the processor to be interrupted by an event. As a result, a given process can be interrupted at any point in time.
  • To comprehensively model a conventional "event triggered" system the effect of every interrupt occurring at every point during every process must be tested. This is a huge number of possible variables even for simple systems. For complex systems, the modeling becomes very difficult and often is impractical due to the amount of processing required. As a result, the stability of such an event triggered system cannot be guaranteed in all situations.
  • An event-triggered system generates event-triggered interrupts, which may occur at any time.
  • a “time-triggered” system (a TT system) is generally composed of a single periodic interrupt which is often driven by a timer. As the name suggests TT interrupts occur at known (predetermined) points in time. A “tick interval” is used to describe the duration between the predetermined points in time where one or more tasks may execute.
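The tick-interval idea can be illustrated with a small sketch: tasks are released only at predetermined tick boundaries, never in response to asynchronous events. The function names and microsecond units below are illustrative, not from the application.

```python
# Illustrative sketch of time-triggered (TT) scheduling: a single periodic
# timer fires at predetermined points in time, and tasks execute only in
# the tick intervals between those points.

def tick_times(tick_interval_us, n_ticks):
    """The predetermined points in time (microseconds) at which the
    single periodic timer interrupt fires."""
    return [i * tick_interval_us for i in range(n_ticks)]

def release_times(tasks, tick_interval_us):
    """Assign each task to the start of its own tick interval."""
    return {task: i * tick_interval_us for i, task in enumerate(tasks)}
```

Because every release time is known in advance, the behaviour of the schedule can be enumerated exhaustively, which is what makes a TT system so much easier to model than an event-triggered one.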
  • a TT system offers a higher degree of predictability compared to an event triggered system as it is possible to model the behavior of a TT system in virtually all relevant situations.
  • the model of the TT system is a much easier process than that for an event triggered system, even for highly complex systems.
  • the role of the scheduler becomes very important as this is the entity that determines the order in which tasks run on the processor. Without a scheduler there would be no control over the progression and sequence of the tasks.
  • Figure 1 shows a software scheduler or a real-time operating system (RTOS) solution that incurs an inter-task overhead.
  • the scheduler overhead periods are identified as 100 in figure 1.
  • Another proposed method relates to an end of task instruction.
  • the end of task instruction may be manually inserted or brought about by modifying a compiler.
  • the end of task instruction is used to indicate that a task has completed and can help to achieve a reduction of inter-task overhead.
  • a hardware or software scheduler may be used to achieve this.
  • This unique instruction can also be known as an 'endtask' instruction and must be added to the processor's instruction set architecture (ISA).
  • an assembly wrapper may be required if a compiler is unable to automatically insert an endtask instruction.
  • an assembly wrapper may be used to insert an endtask instruction at the end of the task code.
  • FIG. 2 illustrates a known example for using the endtask instruction 200 with a hardware scheduler 202.
  • an implementation may begin a task execution by passing a task vector address along with a type of interrupt signal. The task then executes on the processor and, when the unique endtask enters the processor, the first instruction of the next task is immediately retrieved. It is not until the endtask instruction reaches a point within the processor where it cannot generate any further exceptions that the hardware scheduler is finally notified of the task change. In a pipelined processor as shown in figure 3, this may be at the end of the execution (EX) stage 300. The endtask instruction has essentially been manually inserted and is recognized at the EX stage. This behavior is a key requirement for systems that need to maintain precise exceptions, as a task does not finish until it has made all its state changes.
  • a pre-end-of-task is identified in the instruction fetch (IF) part 302 of the processor in order to begin loading the first instructions of the next task.
  • the new task begins execution once the previous task's instructions have finished making state changes in the processor; only then can the new task be considered to have begun execution. This is an example of maintaining precise exceptions in a pipelined processor.
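The timing of this known scheme can be sketched as follows, under the assumption of a five-stage pipeline with one instruction issued per cycle and no stalls; the stage list and cycle arithmetic are our simplification, not the application's.

```python
# Sketch of the known endtask scheme: a pre-end-of-task is recognised at
# instruction fetch (IF) so the next task can begin loading, but the
# hardware scheduler is only notified once endtask reaches the execute
# (EX) stage, after which no further exceptions can arise.

STAGES = ["IF", "ID", "EX", "MEM", "WB"]

def endtask_cycles(instructions):
    """Return (cycle at which 'endtask' is seen at IF, cycle at which the
    scheduler is notified at EX), assuming one issue per cycle, no stalls."""
    idx = instructions.index("endtask")
    seen_at_if = idx                           # endtask enters the pipeline
    notified_at_ex = idx + STAGES.index("EX")  # two cycles later, at EX
    return seen_at_if, notified_at_ex
```

The gap between the two cycle counts is the window in which the next task's first instructions can be fetched while the old task drains.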
  • An example of the endtask solution when using a processor that has three flushed pipeline stages is illustrated in Figure 4, showing overheads (endtasks) 400.
  • a software application is made up of a series of tasks or functions that are scheduled to run at specific points in time. In a co-operative scheduler, tasks are run to completion before the next task is executed. A problem arises when there are several tasks that need to run in a specific interval, and some or all of them have a variable execution time. This uncertainty in how long a task will take can lead to jitter.
  • One known mitigation is the "sandwich delay", in which timing code is inserted around each task so that it always appears to take a fixed (worst-case) time. This technique is inaccurate and, at best, may reduce some jitter, but does not completely eliminate it. Moreover, the sandwich delay has a high power consumption and will not be suitable for low-power devices. The technique also relies on code instrumentation that not only increases the code and data sizes, but can potentially introduce software bugs into the system.
  • the present invention attempts to address at least some of the problems identified above by means of a novel and inventive scheme for operating and scheduling a TT system.
  • the present invention provides a method and system as set out in the accompanying claims.
  • a system for identifying the end of a task running thereon, wherein the system includes a scheduler for use with a time-triggered pipeline processor to control the passage of one or more tasks through one or more stages of the processor, the processor further being adapted to generate, overload and send a return-to-caller instruction to the scheduler when a first task reaches conclusion, wherein the scheduler receives the overloaded return-to-caller instruction to conclude that the first task in progress has ended and activates a second task to be read into the processor.
  • the system further comprises the processor wherein the processor comprises a plurality of stages for carrying out different functions, the processor being adapted to pass each task sequentially from one stage to the next under the control of the scheduler;
  • the processor comprises a fetch stage where the first instruction of a task is read; a decode stage for decoding the first instruction; an execute stage which processes the decoded first instruction; a memory stage for accessing a memory; and a write stage where the processed first instruction is written to a register file.
  • the return-to-caller instruction is created in a first stage of the processor and sent to the scheduler from a second, later stage of the processor.
  • the return-to-caller instruction is created in a first stage of the processor and sent to the scheduler from the same first stage of the processor.
  • the return-to-caller instruction is created and sent before the first task has passed through all stages of the processor, the second task commences before the first task has passed through all stages of the processor, and the two tasks run back to back.
  • the scheduler further comprises a guardian mechanism for measuring the time taken for a task to be completed.
  • the time taken for a task to complete is compared with a predetermined value.
  • the guardian mechanism includes a plurality of memory modules and a timer module, and wherein at least one of the plurality of memory modules generates said predetermined time for a certain task to be completed and communicates this to the timer to enable the timer to compare the time taken for a task to complete with the predetermined time.
  • At least one of the memory stages is adapted to generate a backup task and if required, the timer selects the backup task to replace the currently running task.
  • a post execution idle is introduced at the end of a completed task to ensure that all tasks are completed within the same time period.
  • a method for identifying the end of a task running on a system, wherein the system includes a scheduler for use with a time-triggered cooperative pipeline processor to control the passage of one or more tasks through the processor, wherein the processor comprises a plurality of stages for carrying out different functions, the processor being adapted to pass each task sequentially from one stage to the next under the control of the scheduler, the processor further being adapted to generate, overload and send a return-to-caller instruction to the scheduler when a first task reaches conclusion, wherein the method comprises receiving via the scheduler the overloaded return-to-caller instruction to conclude that the first task in progress has ended and activating a second task to be read into the processor.
  • the method comprises carrying out steps on the processor.
  • the method further comprises, via a processor: fetching the first instruction of a task; decoding the first instruction; processing the decoded first instruction; accessing a memory; and writing the processed first instruction to a register file.
  • the method further comprises creating the return-to-caller instruction in a first stage of the processor and sending the return-to-caller instruction to the scheduler from a second, later stage of the processor.
  • the method further comprises creating the return-to-caller instruction in a first stage of the processor and sending the return-to-caller instruction to the scheduler from the same first stage of the processor.
  • creating and sending the return-to-caller instruction before the first task has passed through all stages of the processor causes the second task to commence before the first task has passed through all stages of the processor thereby running the two tasks back to back.
  • the method further comprises measuring the time taken for a task to be completed via a guardian mechanism.
  • the method further comprises comparing the time taken for a task to complete with a predetermined value.
  • the guardian mechanism includes a plurality of memory stages and a timer module, and wherein the method comprises generating said predetermined time for a certain task to be completed and communicating this to the timer, and wherein the method further comprises comparing the time taken for a task to complete with the predetermined time.
  • the method further comprises generating a backup task via at least one of the memory stages and if required, selecting the backup task to replace the currently running task.
  • the method further comprises introducing a post execution idle at the end of a completed task to ensure that all tasks are completed within the same time period.
  • Figure 1 is a diagram showing a scheduler overhead, in accordance with the prior art.
  • Figure 2 is a diagram showing an example of using the end task instruction with a hardware scheduler of the prior art.
  • Figure 3 is a block diagram of a pipeline system for maintaining precise exceptions, in accordance with the prior art.
  • Figure 4 is a diagram of a pipeline scheme with three flushed pipeline stages, showing the resulting overheads, in accordance with the prior art.
  • Figure 5 is a diagram of a pipeline scheme, in accordance with an embodiment of the present invention.
  • Figure 6 is a diagram of the pipeline scheme showing the effect of overloading, in accordance with an embodiment of the present invention.
  • Figure 7 is a diagram illustrating the effect of an end task instruction, in accordance with an embodiment of the present invention.
  • Figure 8 is diagram showing back-to-back task execution, in accordance with an embodiment of the present invention.
  • Figure 9 is a diagram showing the execution of a backup task used in a task guardian, in accordance with an embodiment of the present invention.
  • Figure 10 is a block diagram of a task guardian, in accordance with an embodiment of the present invention.
  • Figure 11 is a first timing graph, in accordance with an embodiment of the present invention.
  • Figure 12 is a second timing graph, in accordance with an embodiment of the invention.
  • Figure 13 is a third timing graph, in accordance with an embodiment of the present invention.
  • Figure 14 is a fourth timing graph, in accordance with an embodiment of the present invention.
  • Figure 15 is a fifth timing graph, in accordance with an embodiment of the present invention.
  • Figure 16 is a sixth timing graph, in accordance with an embodiment of the present invention.
  • Figure 17 is a flow chart of a task guardian mechanism, in accordance with an embodiment of the present invention.
  • Figure 18 is a timing diagram showing the effects of jitter, in accordance with an embodiment of the present invention.
  • Figure 19 is a timing diagram showing the execution of a post-execution idle slot, in accordance with an embodiment of the present invention.
  • Figure 20 is a seventh timing graph, in accordance with an embodiment of the present invention.
  • Figure 21 is an eighth timing graph, in accordance with an embodiment of the present invention.
  • Figure 22 is a ninth timing graph showing task shutdown, in accordance with an embodiment of the present invention.
  • Figure 23 is a tenth timing graph, in accordance with an embodiment of the present invention.
  • the present invention relates to a design of a time-triggered hardware scheduler specifically for use with processors employing time-triggered software.
  • One aim of the invention is to implement a means for a processor to reliably identify the end of a task. This is intended to improve the predictability of how the system will behave when executing a sequence of tasks.
  • the invention also serves to reduce the central processing unit (CPU) overheads, eliminate task jitter and increase task reliability, particularly in the event of an error. Identifying the end of a task in this way makes it possible to implement advanced techniques such as back-to-back task execution, dealing with overrunning tasks and predictable post-execution idle slots for low task jitter.
  • the invention makes use of the recognition that a "return-to-caller" instruction encountered during the execution of a task, in a TT system at least, means that the task has been completed and control can be handed back to the scheduler for further action.
  • Such a message ensures that the scheduler does not assume that the task is going to take a fixed (worst case) time to execute and can thus schedule the "time-line" of tasks in a more flexible manner. This ensures that the scheduler optimizes the operation of the process and avoids unnecessary delays and/or long term idleness of the processor.
  • the invention in fact "overloads" the return-to-caller instruction so that the end of a task is recognized without the need for manual modification of the software.
  • Identifying the end of a task in this way makes it easier to implement back-to-back scheduling with reduced task overhead (i.e. reduced redundant time between tasks).
  • the present invention is also useful for identifying errors. For example, if the return-to-caller instruction is not detected within a certain time period, an error must have occurred. This is described in greater detail below.
  • the ability to identify the end of a task in this way means the processor can be put into an idle mode thereby improving overall efficiency of the system.
  • a number of tasks to be carried out by a processor are controlled by a scheduler.
  • a key feature of run-to-completion tasks is that the compiler can be guaranteed to always use some form of return-to-caller instruction at all exit points of the task.
  • a run-to-completion task is one which must be complete before another instance of that task can start.
  • the return-to-caller instruction is always inserted by the compiler into the code relating to the task.
  • the present invention makes use of the return-to-caller instruction in a new way.
  • the overhead arising at the end of tasks in a hardware scheduler can be eliminated by overloading the return-to-caller instruction to mark the end of a task. This also eliminates the need for an end of task instruction and a low-level assembly wrapper.
  • the following methodology explains the implementation of the overloading of the return-to-caller instruction in order to indicate an end to a task.
  • TTC stands for time-triggered co-operative.
  • the processor register which is used to store the return address from a function call is loaded with a special value. This register is also referred to as the "Return Address Register" or RAR.
  • the RAR is given a code value which typically cannot represent a valid function address. This value is referred to herein as the "Task Return Value" (TRV).
  • On return of any function call, the processor will read the value stored in the RAR. If this value is not equal to the TRV, the return from the function will be processed as usual. However, if the value stored in the RAR is equal to the TRV, then the hardware scheduler is notified that a task has terminated. Operating in this way gives rise to a reduction in overhead, which can be particularly significant when the system is required to execute large numbers of short tasks, such as may take place in aerospace systems, for example.
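The check just described can be sketched in a few lines. The sentinel value 0xFFFFFFFC is the one given later in the text for a MIPS-style processor; the callback is an illustrative stand-in for the hardware notification, not part of the application.

```python
# Overloaded return-to-caller check: an ordinary return restores the
# program counter from the Return Address Register (RAR); the sentinel
# Task Return Value (TRV) instead notifies the hardware scheduler.

TRV = 0xFFFFFFFC  # a code value that cannot represent a valid function address

def on_return(rar_value, notify_scheduler):
    """Handle a return-to-caller. Returns the new program counter, or
    None when the RAR holds the TRV (i.e. a task has terminated)."""
    if rar_value != TRV:
        return rar_value        # normal function return: PC <- RAR
    notify_scheduler()          # end of task: hand control to the scheduler
    return None
```

Note that ordinary function returns inside the task pay no extra cost: only the task's own final return carries the sentinel.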
  • the present invention makes it possible to eliminate inter-task overheads for run-to-completion tasks and provides an effective means for identifying the end of a task.
  • the automatic identification of the end of a task as described above can be used to execute back-to-back tasks when a hardware scheduler is operating.
  • the invention ensures that it is possible to ascertain if a task has completed on time; and further to place the processor in idle mode as part of a jitter reduction mechanism.
  • the unique method proceeds as follows.
  • the processor will read the value stored in the RAR on return of any function call. If the RAR value is not equal to the TRV, the return from the function will be processed as usual, as before. However, if [i] the value stored in the RAR is equal to the TRV, and [ii] there is another task due to run immediately, then the program counter will not be set to the value stored in the RAR. Instead, the program counter is set so that the processor begins execution from the starting address of the next task which is due to run.
  • if no further task is due, the processor may in certain implementations start to read from an undefined next-task address. This will cause the hardware scheduler to insert "no operation" (NOP) instructions into the processor pipeline until the last instruction of the task has passed through the last pipeline stage. At this point the processor is typically returned to an idle mode.
  • the processor will be awoken by an "interrupt"-type signal when a further task becomes ready for execution. This will be invoked by the scheduler. In a pipelined processor as shown in figure 5, this may involve clearing the first few pipeline stages. In other implementations, such as non-pipelined processors, the processor may start execution immediately.
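As a rough sketch of the NOP-insertion behaviour, assuming the five-stage pipeline of figure 5; the stage indexing and the counting convention here are our simplification.

```python
# When no next task is due, the scheduler feeds NOPs into the pipeline
# until the completed task's last instruction has passed through the
# final stage, after which the processor is put into idle mode.

PIPELINE_DEPTH = 5  # IF, ID, EX, MEM, WB

def nops_to_drain(last_instruction_stage):
    """NOPs needed behind the last real instruction, currently at the
    given stage index (0 = IF, 4 = WB), before it clears the pipeline."""
    return (PIPELINE_DEPTH - 1) - last_instruction_stage
```

For example, an instruction that has only just been fetched needs four NOPs behind it before the processor can safely idle, while one already at write-back needs none.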
  • the processor comprises a number of different stages; these are referred to as IF, ID, EX, MEM, WB.
  • the IF stage is an Instruction fetch, where the processor fetches the instruction to be executed.
  • the ID stage is an instruction decode, where the instruction is decoded.
  • the EX stage is an instruction execute, where the instruction is executed.
  • the MEM stage is a memory access for accessing memory.
  • the WB stage is a write back, where the result of the instruction is written to the register file.
  • a hardware scheduler is shown as 500 in figure 5.
  • When the processor undergoes the wake-up procedure, it starts executing instructions from a vector address provided, in this case, by the hardware scheduler 500. This vector address is the location of the first instruction of a task that is ready for execution. As the processor reaches the end of the currently executing task at the EX stage, it begins to execute the return-to-caller routine.
  • the return-to-caller instruction has the mnemonic "jr", takes a register number as the operand and causes an unconditional jump to the address in the register with that number.
  • MIPS stands for "microprocessor without interlocked pipeline stages"; an ISA is an instruction set architecture.
  • in the MIPS ISA, the return address for a function call is stored in register 31, and compilers generate the return-to-caller instruction as "jr $31".
  • the register number is fixed and the contents of this register can be used to distinguish between different return-to-caller requests; that is, whether the function making the request is a task or not, and so the return-to-caller instruction can be used to indicate the end of a task to hardware.
  • the general-purpose register 31 is reset to the value 0xFFFFFFFC when the processor is interrupted.
  • the processor sets the program counter to whatever value a "jr" instruction has read from its register as normal, unless that value is 0xFFFFFFFC (a "task-jr"), in which case the program counter is instead set to the address of the next task.
  • the "task-jr" raises the "end task" signal to the dispatch component in the register-modification stage.
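Putting these pieces together, a toy model of the register-31 mechanism might look like the following; the class and method names are ours, and only the 0xFFFFFFFC sentinel and the "jr" semantics come from the text.

```python
# Toy model: register 31 is reset to 0xFFFFFFFC when the processor is
# interrupted to start a task; a "jr" jumps to the register value unless
# it sees the sentinel (a "task-jr"), in which case it signals end-task
# and jumps to the next task's vector address instead.

TASK_JR = 0xFFFFFFFC

class ToyCpu:
    def __init__(self, next_task_vector):
        self.regs = [0] * 32
        self.next_task_vector = next_task_vector
        self.end_task_signalled = False

    def interrupt(self):
        """Scheduler starts a task: reset the return-address register."""
        self.regs[31] = TASK_JR

    def jr(self, reg):
        """Execute "jr $reg" and return the new program counter."""
        target = self.regs[reg]
        if target == TASK_JR:              # a "task-jr": end of task
            self.end_task_signalled = True
            return self.next_task_vector
        return target                      # ordinary function return
```

Ordinary calls within the task overwrite register 31 with a genuine return address, so only the task's own final "jr $31" ever sees the sentinel.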
  • Figure 6 illustrates in schematic form an example of the effect of overloading "jr" with the function of 'endtask'.
  • Figure 6 shows the steps carried out in the processor along with some machine code representations of tasks and actions that occur.
  • the steps include read instruction; decode; calculate; read data; and write data. These are equivalent to the stages in figure 5.
  • a task may include one or more parts, each of which are represented by a machine code instruction.
  • a machine code instruction "sw $4,8($1)" 602 undergoes each of the processes within the processor pipeline. As time goes by, the machine code instruction moves sequentially from one process to the next. At the calculate process (equivalent to the EX stage) the return-to-caller function "jr" 604 is generated and forced to overload.
  • the "jr" instruction can generate an endtask only at the fifth stage (write data), since the instruction in its delay slot may generate an exception in the fourth stage ("sw $4,8($1)" in this case).
  • Figure 7 is substantially equivalent to figure 6 in respect of the stages of the process and the time axis and these features will not be described again here.
  • Figures 6 and 7 both proceed to carry out the necessary functions on the machine code instruction 700 for task 1.
  • the task progresses through each stage of the pipeline.
  • the return-to-caller function is generated and overloaded.
  • the endtask function caused by the overloaded return-to-caller function is activated immediately and fed back to the read instruction stage 702. This means that task 2 can now start.
  • task 1 continues to process the machine code instructions to the read data and write data process stages.
  • Figure 7 illustrates the effect of the endtask instruction on the run queues and instruction execution, and shows an example of such an end-of-task instruction.
  • in the known example, the end-of-task instruction has the mnemonic endtask and was inserted manually by way of a boilerplate low-level wrapper, resulting in overhead extraneous to that of the endtask instruction. Avoiding the wrapper would require the entire task to be written at a low level.
  • the hardware scheduler starts task executions at the beginning of the tick, with a wake-up time used to set the end-of-task identifier.
  • the wake up time is 3 CPU cycles.
  • the tasks within the tick period continue to execute back-to-back (with no overhead) until there are no more tasks pending and the processor is put into idle mode.
  • Figure 8 shows this.
  • Task 1 800 and task 2 802 complete within the tick period 804 and once completed the processor is in the idle mode 806.
  • WCET stands for worst-case execution time.
  • the task guardian mechanism uses the end-of-task identifier to provide a means to instantly detect a timing error for specific tasks within a resolution of one or several CPU cycles. Whilst a watchdog timer, as described in the prior art, is intended to measure the responsiveness of the system as a whole, the task guardian mechanism measures task execution times against the WCET of each task.
  • the task guardian mechanism incorporates task-specific information which corresponds more closely to the WCET figures used in system safety-case documents. The task guardian will now be described in greater detail below.
  • the task guardian unit includes four separate task-specific parameters, as shown in table 1 below.

    Table 1
    Parameter                                   Description
    Guaranteed processor execution time (GPT)   Time a task may execute before a task overrun error is signaled; intended to be set to the task's WCET, in CPU cycles
    Allowed overrun time (AOT)                  WCET for the backup task, or additional execution time when no backup task is provided; in CPU cycles
    Backup task vector address                  Address of the backup task executed when a task overrun is detected
    Task overrun counter                        Count of recorded overruns, used by a recovery mechanism to select recovery strategies
  • the guaranteed processor execution time (GPT) is the time allowed for a task to execute before a task overrun error is signaled, as described above with reference to figures 6 and 7 or potentially in any other way. It is intended that the GPT be set to the WCET of the specified task and be measured in CPU cycles.
  • the allowed overrun time (AOT) is also measured in CPU cycles and has two functions depending on whether a backup task is provided or not.
  • if a backup task is provided, the AOT becomes the WCET for the backup task. Should the backup task not complete before the end of the AOT, then the backup task itself is shut down. If a backup task has not been provided, then the AOT can become the additional execution time during which a task can run, as long as there are no pending tasks in the system. This is to prevent the extended execution time having an impact on the sequencing and progress of subsequent tasks.
  • if the AOT contains a value of zero, this will indicate that the original task is only allowed to execute to the end of the GPT period and that a backup task will not be run, even if one is provided.
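The AOT rules above amount to a small decision procedure. This sketch encodes them with illustrative names; the application does not prescribe this interface.

```python
# Task guardian decision when a task reaches its GPT without completing:
#   AOT == 0                     -> shut the task down; never run a backup
#   backup task provided         -> run the backup, itself limited to AOT
#   no backup, no tasks pending  -> let the task overrun by up to AOT
#   no backup, tasks pending     -> shut down, to protect task sequencing

def on_gpt_expiry(aot, backup_task, tasks_pending):
    if aot == 0:
        return "shutdown"
    if backup_task is not None:
        return "run-backup"
    if not tasks_pending:
        return "extend-by-aot"
    return "shutdown"
```

The last branch captures the stated rationale: an extended execution time must never delay the sequencing and progress of subsequent tasks.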
  • the task guardian can also hold the backup task vector address to be executed when a task overrun is detected.
  • a task overrun counter is provided so that a recovery mechanism can use the count value to action different recovery strategies after any errors have been recorded.
  • a counter can be loaded with the GPT or AOT value for the task. As the task executes, this counter is decremented in line with the CPU frequency until either the task completes or the counter reaches zero.
  • the end-of-task identifier will signal the processor to immediately start execution of the next task.
  • the end-of-task identifier signal can also be used to load the counter with next GPT or AOT value pertaining to the next task.
  • the processor can be instructed to immediately start loading the first instructions of the next task. In this situation, the task change is instructed by the zero count value rather than the end-of-task identifier. In some preferred implementations it may be beneficial to clear out any in-flight instructions remaining in the pipeline.
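The counter behaviour can be sketched as a cycle-by-cycle simulation; this is a simplification, since real hardware decrements in line with the CPU clock and reloads the counter on the end-of-task identifier.

```python
# Guardian countdown: load the counter with the task's GPT, decrement it
# once per CPU cycle, and force a task change if it reaches zero before
# the end-of-task identifier is seen.

def guard_task(gpt_cycles, task_cycles):
    """Return 'completed' if the task finishes within its GPT, or
    'overrun' if the zero count forces the task change first."""
    counter = gpt_cycles
    for _ in range(task_cycles):
        if counter == 0:
            return "overrun"       # zero count instructs the task change
        counter -= 1
    return "completed"             # end-of-task identifier arrived in time
```

A task that needs exactly its GPT still completes; only the first cycle beyond the GPT triggers the overrun, which gives the one-cycle detection resolution described above.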
  • the stack pointer may need to be restored to a safe base value.
  • the stack pointer can be set to the safe base value during the time when the end-of-task identifier is being signaled by the processor. If a backup task is included the vector address to the backup task may be provided to the first stage rather than the vector to the next task. An example of this is shown in Figure 9, with the bVector address 900 loading into the pipeline with a backup task to be performed.
  • the task guardian offers a convenient method and system of detecting the overrun of tasks in real-time. Having detected an error or failure in a task, the task can then be shut down quickly.
  • the task guardian includes various recovery techniques and enables backup tasks to be inserted into the flow when required.
  • the task guardian can also play an important role in maintaining operation of the TTC system should there be any failure, whatever the cause.
  • the task guardian can be used to identify errors.
  • the task guardian 1000 includes the GPT memory 1002, an AOT memory 1004, and a bvector memory 1006.
  • the task guardian includes a task guardian timer 1008 and a logic block 1010.
  • the task guardian is an optional module which may be included in the scheduler 500 of figure 5 or may even be used just with the processor to detect and identify errors.
  • a task ID 1012 is entered into the three memory blocks: GPT, AOT and bVector.
  • the task to execute 1014 enters the task guardian timer and proceeds to be processed by the processor.
  • the GPT and AOT provide their values for the time this task should take to the task guardian timer. If the task finishes within the allotted time the task is ended by stop 1 1016. If the task does not complete within the required time limits, a task overrun signal 1018 is generated. If the task overruns then the task that is pending is stopped by stop 2 1020 and the next task is started. In certain situations, instead of the next task commencing, the bVector memory will determine whether there are any backup tasks and either process those or shut down the backup and return the processor to the following task or idle mode based on logic module 1010.
  • a task is shut down in the scenario where it runs up to its GPT and has not already completed.
  • An example of this is shown in Figure 11, where task 1 1100 has not completed before the end of the GPT time; the task is therefore shut down and the next task is executed.
  • the shutdown procedure may incur some overhead whilst other implementations can maintain instantaneous task switching by preloading the instructions for the next task.
  • the hardware scheduler will execute the backup task only in the case where task 1 does not complete before the end of the GPT time. Normal back-to-back task execution continues after the backup task completes. This is shown in figure 12. Again, in some implementations there may be some overhead to shut down task 1.
  • Figure 13 is an illustration of the shutdown overhead incurred when switching in a backup task 1300.
  • the backup task itself may have problems. In order to protect the system from such an occurrence, the backup task is only allowed to execute up to the provided allowed overrun time (AOT). In the situation where the backup task does not complete before the end of the AOT time, the backup task itself is shut down with a 3 CPU cycle overhead, after which normal task execution can continue. This is shown in figure 14.
  • AOT allowed overrun time
  • the allowed overrun time (AOT) value has another function.
  • a task may be allowed to exceed its GPT time and execute for the further time defined by the AOT value as long as there are no tasks pending.
  • a task will shut down if it does not complete before the end of the AOT, as shown in figure 16.
  • An example flowchart of the task guardian mechanism and method steps is shown in Figure 17.
  • the process starts at step 1700.
  • the task guardian timer is started and the first task is executed at step 1702. If the GPT overflows at step 1704 the process proceeds in the direction of the arrow 1706. If the GPT does not overflow the process proceeds in the direction of arrow 1708.
  • a determination is made as to whether or not a task has ended at step 1710. If the task has not ended, the process returns to step 1704. If the task has ended a determination is made as to whether or not the input FIFO is empty at step 1712. If the FIFO is empty, the process is stopped at step 1714. If the input FIFO is not empty, the task ID for the next task will be selected at step 1716 and the process will return to the start and proceed with step 1702.
  • step 1718 determines if a backup task is available. If a backup task is available the process proceeds in direction 1720 and if the backup task is not available, the process continues in direction 1722. In the direction 1722, a determination is made as to whether an AOT exists at step 1724. If there is no AOT, the process proceeds directly to step 1712 as described above. If an AOT exists, determination as to whether the AOT has overflowed is made at step 1726. If this is the case, the flow proceeds directly to step 1712 as described above. If this is not the case, a determination is made as to whether or not an end task has been generated at step 1728.
  • If an endtask has been generated at step 1728, the process proceeds to step 1712 as described above. If there is no endtask, a determination is made as to whether the FIFO is empty at step 1730. If not, the process proceeds to step 1712 as described above. If the FIFO is empty, the process returns to step 1726.
  • In direction 1720, a determination is made as to the task ID of the backup task at step 1732. Then steps 1724, 1726 and 1728 are repeated as described above. Once all the tasks in the input FIFO have been treated in this manner, the process stops and the processor becomes idle.
  • the end-of-task identifier may be used to implement a post execution idle slot. This can be done in several ways.
  • some implementations involve encapsulating tasks into fixed timed durations by placing the processor into idle mode. Another implementation involves reducing the jitter of subsequent tasks, and reducing jitter further when combined with a task guardian.
  • Figure 18 shows that the periods of Task B (p1, p2, p3) are constantly changing because the execution time of Task A is changing. This can have a detrimental effect in hard real-time systems. In some systems it would be helpful to reduce task start time jitter as much as possible.
  • a post execution idle slot helps to reduce jitter by maintaining a fixed duration between tasks.
  • This fixed duration can be provided in the form of the WCET of a task as measured in CPU cycles.
  • a further benefit of this feature is the ability to see how the system would react in the situation where each task executed at its WCET.
  • One preferred implementation might be to use the existing WCET task times held within the GPT and AOT values for each task.
  • Figure 19 illustrates in schematic form a normal execution with a post execution idle slot 1900.
  • a counter may be loaded with the GPT or AOT value for the task. These will be the length of the task plus the post execution idle slot. As the task executes, the counter is decremented in line with the CPU frequency. If an end-of-task identifier is signaled, the processor will be placed into idle mode until the counter reaches zero. For implementations such as superscalar or pipelined processors, the first few instructions of the next task may be preloaded into the processor before entering idle mode.
  • Two preferred implementations may include support for a task guardian strategy and use of a backup task.
  • a task does not complete before the end of its GPT time.
  • using the task guardian mechanism, the task can be shut down. When the counter reaches zero the instructions for the next task can be loaded into the processor. Thus, the next task begins execution without any variation to its start time. This is shown in figure 20.
  • the backup task is inserted and includes a post execution idle slot defined by its AOT value. After the backup task finishes, normal execution continues, as shown in figure 21.
  • a benefit to a hardware scheduler is that any overhead can be done without interruption to the processor. Therefore, it is possible to have a very high tick rate in the order of microseconds in which tasks are placed into a ready queue. The benefit of this is that the time frames in which tasks can be scheduled can be very accurate. This can help to provide spacing between tasks in order to reduce task start time jitter.
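The counter-based guardian behaviour described above (a GPT/AOT countdown, overrun detection, and fallback to a backup task bounded by the AOT) can be sketched in C. This is a minimal illustrative model, not the disclosed hardware: the type and function names (task_info_t, on_gpt_overrun, and so on) are assumptions introduced here.

```c
#include <stdbool.h>
#include <stdint.h>

/* Illustrative per-task record; field names are assumptions. */
typedef struct {
    uint32_t gpt;        /* guard period time, in CPU cycles     */
    uint32_t aot;        /* allowed overrun time, in CPU cycles  */
    bool     has_backup; /* whether a bVector/backup task exists */
} task_info_t;

typedef enum { TASK_COMPLETED, TASK_SHUTDOWN, BACKUP_RUN } guard_result_t;

/* One tick of the guardian counter: decrement in line with the CPU
 * frequency until the counter reaches zero (overrun). */
static bool counter_expired(uint32_t *counter) {
    if (*counter > 0) {
        (*counter)--;
    }
    return *counter == 0;
}

/* Sketch of the decision taken when a task overruns its GPT:
 * run the backup task (bounded by the AOT) if one exists,
 * otherwise shut the task down and move on to the next task. */
guard_result_t on_gpt_overrun(const task_info_t *t,
                              bool (*run_backup_one_cycle)(void)) {
    if (!t->has_backup) {
        return TASK_SHUTDOWN;          /* stop 2: kill the task   */
    }
    uint32_t counter = t->aot;         /* backup bounded by AOT   */
    while (!counter_expired(&counter)) {
        if (run_backup_one_cycle()) {  /* backup finished in time */
            return BACKUP_RUN;
        }
    }
    return TASK_SHUTDOWN;              /* backup itself overran   */
}
```

As in figure 14, a backup task that fails to complete within its AOT is itself shut down, after which normal task execution continues.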

Abstract

A system for identifying the end of a task running thereon, wherein the system includes a scheduler for use with a time-triggered pipeline processor to control the passage of one or more tasks through one or more stages of the processor, the processor further being adapted to generate, overload and send a return-to-caller instruction to the scheduler when a first task reaches conclusion, wherein the scheduler receives the overloaded return-to-caller instruction to conclude that the first task in progress has ended and activates a second task to be read into the processor.

Description

A METHOD AND SYSTEM FOR IDENTIFYING THE END OF A TASK AND FOR NOTIFYING A HARDWARE SCHEDULER THEREOF
Description
Field of the invention
The present invention relates to a method and system for effectively identifying the end of a task operating on a processor or the like.
Background of the invention
Processors are used in all industry sectors to run processes, systems, operations, etc. Processors carry out a plurality of successive and/or simultaneous tasks.
Conventionally, processors are often formed on the basis of an event triggered architecture. A conventional microprocessor based system has a processor that includes a number of interrupt lines. These interrupt lines allow a process which is currently being executed on the processor to be interrupted by an event. As a result, a given process can be interrupted at any point in time. To comprehensively model a conventional "event triggered" system, the effect of every interrupt occurring at every point during every process must be tested. This is a huge number of possible variables even for simple systems. For complex systems, the modeling becomes very difficult and often is impractical due to the amount of processing required. As a result, the stability of such an event triggered system cannot be guaranteed in all situations. An event trigger system creates event trigger interrupts.
An alternative to "event triggered systems" is "time triggered systems". A "time-triggered" system (a TT system) is generally composed of a single periodic interrupt which is often driven by a timer. As the name suggests TT interrupts occur at known (predetermined) points in time. A "tick interval" is used to describe the duration between the predetermined points in time where one or more tasks may execute.
A TT system offers a higher degree of predictability compared to an event triggered system as it is possible to model the behavior of a TT system in virtually all relevant situations. Modeling a TT system is a much easier process than modeling an event triggered system, even for highly complex systems.
Due to the predictability of time triggered systems in terms of allowing each task to run-to-completion, these types of system are particularly useful in safety critical applications such as avionics. For example, with an event triggered avionics system, it would be hard to say with complete certainty that every set of inputs to the system would not result in a task being left uncompleted, which could then give rise to an accident. On the other hand, a time triggered system can be modeled with closer or even complete certainty as it is known that there is generally enough computational time for every task to complete and very little chance of an unforeseen condition creating an unexpected effect.
In a TT system, the role of the scheduler becomes very important as this is the entity that determines the order in which tasks run on the processor. Without a scheduler there would be no control over the progression and sequence of the tasks.
Many different ways have been proposed to implement an appropriate schedule in a TT system. Figure 1 shows a software scheduler or a real-time operating system (RTOS) solution that incorporates an inter-task overhead. The scheduler overhead periods are identified as 100 in figure 1. Another proposed method relates to an end of task instruction. In order to identify the end of a run-to-completion task, it is necessary to place an instruction at the end of each task, known as the "end of task" instruction. The end of task instruction may be manually inserted or brought about by modifying a compiler. The end of task instruction is used to indicate that a task has completed and can help to achieve a reduction of inter-task overhead. A hardware or software scheduler may be used to achieve this. This unique instruction can also be known as an 'endtask' instruction and must be added to the processor's instruction set architecture (ISA).
In another previous proposal an assembly wrapper may be required if a compiler is unable to automatically insert an endtask instruction. For example, an assembly wrapper for inserting an endtask instruction may be:
Task_Wrapper1:
    Call Task1
    Endtask
The wrapper is necessary if the endtask instruction is to be inserted manually and/or if the task is to be written in a high-level language. Figure 2 illustrates a known example for using the endtask instruction 200 with a hardware scheduler 202.
In a further proposal utilizing a hardware scheduler, an implementation may begin a task execution by passing a task vector address along with a type of interrupt signal. The task then executes on the processor and when the unique endtask instruction enters the processor the first instruction of the next task is immediately retrieved. It is not until the endtask instruction reaches a point within the processor where it cannot generate any further exceptions that the hardware scheduler is finally notified of the task change. In a pipeline processor as shown in figure 3, this may be at the end of the execution (EX) stage 300. The endtask instruction has essentially been manually inserted and is recognized at the EX stage. This method is a key requirement for systems that need to maintain precise exceptions as a task does not finish until it has made all its state changes. In summary, a pre-end-of-task is identified in the instruction fetch (IF) part 302 of the processor in order to begin loading the first instructions of the next task. The new task begins execution once the previous task's instructions have finished making state changes in the processor; only then can the new task be considered as beginning execution. This is an example of maintaining precise exceptions in a pipelined processor.
An example of the endtask solution when using a processor that has three flushed pipeline stages is illustrated in Figure 4 showing overheads (endtasks) 400.
In many computing systems tasks may be given Worst Case Execution Times (WCET). This is important to both verify that the system is operating correctly and that processors along with the RTOS have enough available computational time to meet the demands for all the tasks in the system. For hard real-time systems, a task that exceeds the WCET is considered to be an error.
As a result, in a hard real-time system it is common to utilize a watchdog timer which will reset the processor if the system becomes unresponsive. The resolution of the watchdog timer is generally quite small and it can be hard to identify the reason why the system has become unresponsive. A software application is made up of a series of tasks or functions that are scheduled to run at specific points in time. In a co-operative scheduler, tasks are run to completion before the next task is executed. A problem arises when there are several tasks that need to run in a specific interval, and some or all of them have a variable execution time. This uncertainty in how long a task will take can lead to jitter.
In an earlier proposal, a software "sandwich delay" was proposed in which a timer is used to "compensate" if a task finished earlier than its WCET. An example implementation, which reduces jitter in a control system using sandwich delays, may be as follows:
void ISR(void) {
    Start_Sandwich_Delay(A);   // Set timer to match fn. duration
    FunctionA();
    Wait_For_Sandwich_Delay_To_Complete();
    Start_Sandwich_Delay(B);
    FunctionB();
    Wait_For_Sandwich_Delay_To_Complete();
    Start_Sandwich_Delay(C);
    FunctionC();
    Wait_For_Sandwich_Delay_To_Complete();
}
This technique is inaccurate and, at best, it may reduce some jitter, but does not completely eliminate it. Moreover, the sandwich delay has a high power consumption, and will not be suitable for low-power devices. This technique also relies on code instrumentation that not only increases the code and data sizes, but can potentially introduce software bugs into the system.
From the above, there are a number of fairly fundamental problems associated with operating a TT system and an associated scheduler. The present invention attempts to address at least some of the problems identified above by means of a novel and inventive scheme for operating and scheduling a TT system.
Objects of the invention
It is an object of the present invention to overcome at least some of the problems associated with the prior art.
Summary of the invention
The present invention provides a method and system as set out in the accompanying claims.
According to one aspect of the present invention there is provided a system for identifying the end of a task running thereon, wherein the system includes a scheduler for use with a time-triggered pipeline processor to control the passage of one or more tasks through one or more stages of the processor, the processor further being adapted to generate, overload and send a return-to-caller instruction to the scheduler when a first task reaches conclusion, wherein the scheduler receives the overloaded return-to-caller instruction to conclude that the first task in progress has ended and activates a second task to be read into the processor.
Optionally, the system further comprises the processor, wherein the processor comprises a plurality of stages for carrying out different functions, the processor being adapted to pass each task sequentially from one stage to the next under the control of the scheduler.
Optionally, the processor comprises a fetch stage where the first instruction of a task is read; a decode stage for decoding the first instruction; an execute stage which processes the decoded first instruction; a memory stage for accessing a memory; and a write stage where the processed first instruction is written to a register file.
Optionally, the return-to-caller instruction is created in a first stage of the processor and sent to the scheduler from a second, later stage of the processor.
Optionally, the return-to-caller instruction is created in a first stage of the processor and sent to the scheduler from the same first stage of the processor.
Optionally, the return-to-caller instruction is created and sent before the first task has passed through all stages of the processor, the second task commences before the first task has passed through all stages of the processor, and the two tasks run back to back.
Optionally, the scheduler further comprises a guardian mechanism for measuring the time taken for a task to be completed.
Optionally, the time taken for a task to complete is compared with a predetermined value.
Optionally, the guardian mechanism includes a plurality of memory stages and a timer module, and wherein at least one of the plurality of memory stages generates said predetermined time for a certain task to be completed and communicates this to the timer to enable the timer to compare the time taken for a task to complete with the predetermined time.
Optionally, at least one of the memory stages is adapted to generate a backup task and if required, the timer selects the backup task to replace the currently running task.
Optionally, a post execution idle is introduced at the end of a completed task to ensure that all tasks are completed within the same time period.
According to a second aspect of the present invention there is provided a method for identifying the end of a task running on a system, wherein the system includes a scheduler for use with a time-triggered cooperative pipeline processor to control the passage of one or more tasks through the processor, wherein the processor comprises a plurality of stages for carrying out different functions, the processor being adapted to pass each task sequentially from one stage to the next under the control of the scheduler; the processor further being adapted to generate, overload and send a return-to-caller instruction to the scheduler when a first task reaches conclusion, wherein the method comprises receiving via the scheduler the overloaded return-to-caller instruction to conclude that the first task in progress has ended and activating a second task to be read into the processor.
Optionally, the method comprises carrying out steps on the processor.
Optionally, the method further comprises, via a processor: fetching the first instruction of a task; decoding the first instruction; processing the decoded first instruction; accessing a memory; and writing the processed first instruction to a register file.
Optionally, the method further comprises creating the return-to-caller instruction in a first stage of the processor and sending the return-to-caller instruction to the scheduler from a second, later stage of the processor.
Optionally, the method further comprises creating the return-to-caller instruction in a first stage of the processor and sending the return-to-caller instruction to the scheduler from the same first stage of the processor.
Optionally, wherein creating and sending the return-to-caller instruction before the first task has passed through all stages of the processor, causes the second task to commence before the first task has passed through all stages of the processor thereby running the two tasks back to back.
Optionally, the method further comprises measuring the time taken for a task to be completed via a guardian mechanism.
Optionally, the method further comprises comparing the time taken for a task to complete with a predetermined value.
Optionally, the guardian mechanism includes a plurality of memory stages and a timer module, and wherein the method comprises generating said predetermined time for a certain task to be completed and communicating this to the timer, and wherein the method further comprises comparing the time taken for a task to complete with the predetermined time.
Optionally, the method further comprises generating a backup task via at least one of the memory stages and if required, selecting the backup task to replace the currently running task.
Optionally, the method further comprises introducing a post execution idle at the end of a completed task to ensure that all tasks are completed within the same time period.
According to a further aspect of the invention, there is provided a scheduler for operating in the system of the accompanying claims.
According to a still further aspect of the invention, there is provided a processor for operating in the system of the accompanying claims.
According to a further aspect of the invention, there is provided a guardian mechanism for operating in the system of the accompanying claims.
Brief description of the drawings
Reference will now be made, by way of example, to the accompanying drawings, in which:
Figure 1 is a diagram showing a scheduler overhead, in accordance with the prior art;
Figure 2 is a diagram showing an example of using the end task instruction with a hardware scheduler of the prior art;
Figure 3 is a block diagram of a pipeline system for maintaining precise exceptions, in accordance with the prior art;
Figure 4 is a diagram of a three stage pipeline scheme showing overheads, in accordance with the prior art;
Figure 5 is a diagram of a pipeline scheme, in accordance with an embodiment of the present invention;
Figure 6 is a diagram of the pipeline scheme showing the effect of overloading, in accordance with an embodiment of the present invention;
Figure 7 is a diagram illustrating the effect of an end task instruction, in accordance with an embodiment of the present invention;
Figure 8 is a diagram showing back-to-back task execution, in accordance with an embodiment of the present invention;
Figure 9 is a diagram showing the execution of a backup task used in a task guardian, in accordance with an embodiment of the present invention;
Figure 10 is a block diagram of a task guardian, in accordance with an embodiment of the present invention;
Figure 11 is a first timing graph, in accordance with an embodiment of the present invention;
Figure 12 is a second timing graph, in accordance with an embodiment of the invention;
Figure 13 is a third timing graph, in accordance with an embodiment of the present invention;
Figure 14 is a fourth timing graph, in accordance with an embodiment of the present invention;
Figure 15 is a fifth timing graph, in accordance with an embodiment of the present invention;
Figure 16 is a sixth timing graph, in accordance with an embodiment of the present invention;
Figure 17 is a flow chart of a task guardian mechanism, in accordance with an embodiment of the present invention;
Figure 18 is a timing diagram showing the effects of jitter, in accordance with an embodiment of the present invention;
Figure 19 is a timing diagram showing the execution of a post-execution idle slot, in accordance with an embodiment of the present invention;
Figure 20 is a seventh timing graph, in accordance with an embodiment of the present invention;
Figure 21 is an eighth timing graph, in accordance with an embodiment of the present invention;
Figure 22 is a ninth timing graph showing task shutdown, in accordance with an embodiment of the present invention; and
Figure 23 is a tenth timing graph, in accordance with an embodiment of the present invention.
Detailed description of the preferred embodiments
The present invention relates to a design of a time-triggered hardware scheduler specifically for use with processors employing time-triggered software. One aim of the invention is to implement a means for a processor to reliably identify the end of a task. This is intended to improve the predictability of how the system will behave when executing a sequence of tasks.
The invention also serves to reduce the central processing unit (CPU) overheads, eliminate task jitter and increase task reliability, particularly in the event of an error. Identifying the end of a task in this way makes it possible to implement advanced techniques such as back-to-back task execution, dealing with overrunning tasks and predictable post execution idle slots for low task jitter.
The present invention has the effect of eliminating task jitter and increasing the reliability, predictability and performance for a time-triggered system.
The invention makes use of the recognition that a "return-to-caller" instruction encountered during the execution of a task in a TT system means, at the least, that the task has been completed and control can be handed back to the scheduler for further action.
Such a message ensures that the scheduler does not assume that the task is going to take a fixed (worst case) time to execute and can thus schedule the "time-line" of tasks in a more flexible manner. This ensures that the scheduler optimizes the operation of the process and avoids unnecessary delays and/or long term idleness of the processor.
The invention in fact "overloads" the return-to-caller instruction so that the end of a task is recognized without the need for manual modification of the software.
Identifying the end of a task in this way makes it easier to implement back-to-back scheduling with reduced task overhead (i.e. reduced redundant time between tasks).
The present invention is also useful for identifying errors. For example, if the return-to-caller instruction is not detected within a certain time period, an error must have occurred. This is described in greater detail below.
The ability to identify the end of a task in this way means the processor can be put into an idle mode, thereby improving the overall efficiency of the system. In a TT system, a number of tasks to be carried out by a processor are controlled by a scheduler. A key feature of run-to-completion tasks is that the compiler can be guaranteed to always use some form of return-to-caller instruction at all exit points of the task. A run-to-completion task is one which must be complete before another instance of that task can start. The return-to-caller instruction is always inserted by the compiler into the code relating to the task. The present invention makes use of the return-to-caller instruction in a new way. The overhead arising at the end of tasks in a hardware scheduler can be eliminated by overloading the return-to-caller instruction to mark the end of a task. This also eliminates the need for an end of task instruction and a low-level assembly wrapper.
The following methodology explains the implementation of the overloading of the return-to-caller instruction in order to indicate an end to a task.
In a time triggered cooperative (TTC) architecture, the processor is interrupted only in response to a scheduler "tick". At the point of the tick, the processor would normally be in an "idle" or sleep mode at the time of interruption. This is not essential.
When a task is started, the processor register which is used to store the return address from a function call is loaded. This register is also referred to as the "Return Address Register" or RAR. The RAR is given a code value which typically cannot represent a valid function address. The value is referred to herein as the "Task Return Value" (TRV). When the processor wakes from idle mode the RAR is set to be equal to the TRV.
On return of any function call, the processor will read the value stored in the RAR. If this value is not equal to the TRV, the return from the function will be processed as usual. However, if the value stored in the RAR is equal to the TRV, then the hardware scheduler is notified that a task has terminated. Operating in this way gives rise to a reduction in overhead, which can be particularly significant when the system is required to execute large numbers of short tasks, such as may take place in aerospace systems, for example.
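The RAR/TRV mechanism just described can be sketched as follows. This is a hedged software model of hardware behaviour: the particular TRV constant and all function and variable names are illustrative assumptions, not part of the disclosure.

```c
#include <stdbool.h>
#include <stdint.h>

/* An address that typically cannot be a valid function return
 * address; the actual Task Return Value is implementation-defined. */
#define TRV 0xFFFFFFFDu

static uint32_t rar;        /* model of the Return Address Register */
static bool     task_ended; /* signal latched by the scheduler      */

/* Called when the processor wakes from idle mode to start a task:
 * the RAR is set equal to the TRV. */
void start_task(void) {
    rar = TRV;
    task_ended = false;
}

/* Model of the return-from-function path: an ordinary return uses
 * the saved address, while a return to the TRV notifies the
 * hardware scheduler that the task has terminated. */
uint32_t on_function_return(void) {
    if (rar == TRV) {
        task_ended = true;  /* notify the hardware scheduler */
        return 0;           /* no jump back into a caller    */
    }
    return rar;             /* normal return-to-caller       */
}
```

In this model, nested function calls overwrite the RAR with real return addresses, so only the final return of the task, back to the TRV, triggers the end-of-task notification.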
The present invention makes it possible to eliminate inter-task overheads for run to completion tasks and provides an effective means for identifying the end of a task.
The automatic identification of the end of a task as described above can be used to execute back-to-back tasks when a hardware scheduler is operating. In addition, the invention ensures that it is possible to ascertain if a task has completed on time; and further to place the processor in idle mode as part of a jitter reduction mechanism.
As previously mentioned, systems employing a time-triggered co-operative (TTC) system architecture interrupt the processor in response to a scheduler tick. The processor is generally in an idle or sleep mode at the time of interruption, but this is not essential. After the tick has occurred, tasks are executed one after the other. This makes use of a unique method of identifying the end of a task to eliminate the overhead arising between the start or dispatch of each task in the hardware scheduler.
The unique method proceeds as follows. The processor will read the value stored in the RAR on return of any function call. If the RAR value is not equal to the TRV, the return from the function will be processed as usual, as before. However, if [i] the value stored in the RAR is equal to the TRV, and [ii] there is another task due to run immediately, then the program counter will not be set to the TRV. Instead, the program counter is set to, and the processor begins execution from, the starting address of the next task which is due to run.
Once the last task in the sequence has been executed, the processor may in certain implementations start to read from an undefined next task address. This will cause the hardware scheduler to insert a "no operation" (NOP) instruction into the processor pipeline until the last instruction of the task has passed through the last pipeline stage. At this point the processor is typically returned to an idle mode.
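The dispatch decision described in the two preceding paragraphs, namely jumping straight to the next ready task or draining the pipeline with NOPs before returning to idle, might be modelled as below. The queue layout, depth and names are assumptions introduced for illustration:

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

#define PIPELINE_DEPTH 5   /* IF, ID, EX, MEM, WB, as in Figure 5 */

/* Ready queue of task vector addresses; sizing is an assumption. */
typedef struct {
    uint32_t vectors[8];   /* start addresses of ready tasks */
    size_t   head, count;
} ready_queue_t;

typedef enum { DISPATCH_NEXT, DRAIN_THEN_IDLE } dispatch_t;

/* Called when the RAR holds the TRV, i.e. the current task ended.
 * If another task is ready, its vector address is fetched at once
 * (back-to-back execution); otherwise NOPs are inserted until the
 * last instruction leaves the final pipeline stage, after which
 * the processor returns to idle mode. */
dispatch_t on_task_end(ready_queue_t *q, uint32_t *next_pc, int *nops) {
    if (q->count > 0) {
        *next_pc = q->vectors[q->head];
        q->head = (q->head + 1) % 8;
        q->count--;
        return DISPATCH_NEXT;
    }
    *nops = PIPELINE_DEPTH;    /* flush with no-operation slots */
    return DRAIN_THEN_IDLE;
}
```

The model deliberately leaves out the wake-up path: in the described system an "interrupt" type signal from the scheduler later restarts execution from a new vector address.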
The processor will be awoken by an "interrupt" type signal when a further task becomes ready for execution. This will be evoked by the scheduler. In a pipelined processor as shown in figure 5, this may involve clearing the first few pipeline stages. In other implementations, such as non-pipelined stages, the processor may start execution immediately.
Referring to figure 5, a pipeline processor according to the present invention is shown. The processor comprises a number of different stages; these are referred to as IF, ID, EX, MEM, WB. The IF stage is an Instruction fetch, where the processor fetches the instruction to be executed. The ID stage is an instruction decode, where the instruction is decoded. The EX stage is an instruction execute, where the instruction is executed. The MEM stage is a memory access for accessing memory. The WB stage is a write back, where the result of the instruction is written to the register file. A hardware scheduler is shown as 500 in figure 5.
When the processor undergoes the wake up procedure, the processor starts executing instructions from a vector address provided in this case by the hardware scheduler 500. This vector address is the location of the first instruction of a task that is ready for execution. As the processor reaches the end of the currently executing task at the EX stage, it begins to execute the return-to-caller routine.
The final instructions attempt a return to the end-of-task identifier. Instead of returning to the arbitrary end-of-task identifier address, the processor immediately fetches the first instructions of the next task. It is not until the last instruction of the previous task reaches a point within the processor where it cannot generate any further exceptions that the hardware scheduler is finally notified of the task change. In a pipelined processor this may be at the end of the execution stage. This method is a key requirement for systems that need to maintain precise exceptions, as a task does not finish until it has made its state changes. If there are no further tasks ready to execute, the processor is then placed back into idle mode. Figure 5 discloses, in schematic form, an example of maintaining precise exceptions with the end-of-task identifier. The function of the code "jr $31" above the EX stage will be described in greater detail with respect to figures 6 and 7 below.
From figure 5, certain advantages of the invention can be identified. Preloading the pipeline is the first step to achieving at least some of the advantages of the present invention. In addition, with this implementation it is possible to maintain precise exceptions when switching tasks in both pipelined and superscalar processors.
Figures 6 and 7 will now be described in further detail to demonstrate how instructions working through the pipeline cause the jr $31 instruction to be overloaded in order to identify an end of task.
In the MIPS-I ISA, the return-to-caller instruction has the mnemonic "jr"; it takes a register number as its operand and causes an unconditional jump to the address held in that register. MIPS stands for Microprocessor without Interlocked Pipeline Stages, and ISA for Instruction Set Architecture. Under MIPS conventions, the return address for a function call is stored in register 31 and compilers generate the return-to-caller instruction as "jr $31". Because the register number is fixed, the contents of this register can be used to distinguish between different return-to-caller requests; that is, whether the function making the request is a task or not, and so the return-to-caller instruction can be used to indicate the end of a task to hardware.
In one possible implementation, the general purpose register 31 is reset to the value 0xFFFFFFFC when the processor is interrupted. The processor sets the program counter to whatever value a "jr" instruction has read from its register as normal, unless that value is 0xFFFFFFFC (a "task-jr"), in which case the program counter is instead set to the address of the next task. The "task-jr" raises the "end task" signal to the dispatch component in the register modification stage. Figure 6 illustrates in schematic form an example of the effect of overloading "jr" with the function of "endtask".
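In Python-like form, the overloaded "jr" decode might look as follows. This is a sketch under the 0xFFFFFFFC convention above; the register file layout and the way the end-task signal is represented are simplifying assumptions.

```python
END_OF_TASK = 0xFFFFFFFC  # value loaded into register 31 on interrupt

def execute_jr(reg, registers, next_task_addr):
    """Model the overloaded 'jr': return (new_pc, end_task_signalled)."""
    target = registers[reg]
    if reg == 31 and target == END_OF_TASK:
        # A "task-jr": instead of jumping to the sentinel address, fetch the
        # next task and raise the "end task" signal to the dispatch component.
        return next_task_addr, True
    # Any other 'jr' behaves as a normal indirect jump / function return.
    return target, False

regs = {31: END_OF_TASK}
assert execute_jr(31, regs, 0x00400200) == (0x00400200, True)
regs[31] = 0x00400120            # ordinary return address from a nested call
assert execute_jr(31, regs, 0x00400200) == (0x00400120, False)
```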
Figure 6 shows the steps carried out in the processor along with some machine code representations of tasks and actions that occur. The steps include read instruction; decode; calculate; read data; and write data. These are equivalent to the stages in figure 5. Time increases in the direction of arrow 600. A task may include one or more parts, each of which is represented by a machine code instruction. A machine code instruction "sw $4,8($1)" 602 undergoes each of the processes within the processor pipeline. As time goes by, the machine code instruction moves sequentially from one process to the next. At the calculate process (equivalent to the EX stage) the return-to-caller instruction "jr" 604 is generated and forced to overload. The process continues with the read data and write data processes, with the machine instruction 602 continuing to progress through the pipeline but now associated with the return-to-caller instruction 604. At the end of the write data step, the overloaded return-to-caller instruction 604 is returned to the scheduler to end the task. At this stage task 2 will commence and progress in a similar fashion as described above with respect to task 1.
In this case, the "jr" instruction can generate an endtask only at the fifth stage (write data), since the instruction in its delay slot may generate an exception in the fourth stage ("sw $4,8($1)" in this case).
The distinction between tasks, highlighted with gray borders in Figure 6, and true back-to-back execution can be observed when compared with Figure 7. Figure 7 is substantially equivalent to figure 6 in respect of the stages of the process and the time axis, and these features will not be described again here. Figures 6 and 7 both proceed to carry out the necessary functions on the machine code instruction 700 for task 1. As previously described with respect to figure 6, the task progresses through each stage of the pipeline. At the calculate process stage, the return-to-caller instruction is generated and overloaded. In this case, the endtask function caused by the overloaded return-to-caller instruction is activated immediately and fed back to the read instruction stage 702. This means that task 2 can now start. In the meantime, task 1 continues to process its machine code instructions through the read data and write data process stages. As the endtask is generated at stage three, this scheme truly allows back-to-back operation of tasks 1 and 2: task 1 is being processed in stages four and five while task 2 is commencing in stages 1 to 3. It is clear from the representation in figure 7 that the tasks will be processed more quickly than would previously have been the case.
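The back-to-back overlap described above can be sketched as a simple pipeline timeline. The stage names follow figure 5 and the instructions follow figure 7; the one-cycle-per-stage timing and the hand-off rule (task 2 fetches on the cycle after task 1's "jr" leaves the calculate/EX stage) are simplifying assumptions.

```python
# Build a cycle-by-cycle timeline for two tasks run back to back, with the
# endtask signalled when task 1's last instruction leaves the EX stage.
STAGES = ["IF", "ID", "EX", "MEM", "WB"]  # one instruction advances per cycle

def schedule(task1, task2, endtask_stage="EX"):
    """Map cycle -> list of (task, instruction, stage) tuples."""
    timeline = {}
    for i, instr in enumerate(task1):
        for s, stage in enumerate(STAGES):
            timeline.setdefault(i + s, []).append(("T1", instr, stage))
    # Task 1's final instruction reaches the endtask stage at this cycle;
    # task 2's first fetch happens on the following cycle.
    start2 = (len(task1) - 1) + STAGES.index(endtask_stage) + 1
    for i, instr in enumerate(task2):
        for s, stage in enumerate(STAGES):
            timeline.setdefault(start2 + i + s, []).append(("T2", instr, stage))
    return timeline

tl = schedule(["sw $4,8($1)", "jr $31"], ["lw $3,-4($28)"])
# At cycle 4, task 1 drains through MEM/WB while task 2 is already fetching:
assert ("T1", "jr $31", "MEM") in tl[4]
assert ("T1", "sw $4,8($1)", "WB") in tl[4]
assert ("T2", "lw $3,-4($28)", "IF") in tl[4]
```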
Figure 7 illustrates the effect of the endtask instruction on the run queues and instruction execution, and shows an example of such an implementation under the MIPS-I instruction set architecture (ISA). In this case the end-of-task instruction has the mnemonic endtask and was inserted manually by way of a boilerplate low level wrapper, resulting in overhead extraneous to that of the endtask instruction. Avoiding the wrapper would require the entire task to be written at a low level.
In Figure 7, the first instruction of Task 2 is "lw $3,-4($28)" and the last instruction of the previous task is "sw $4,8($1)". In addition, there are three instructions in between: endtask, a jump to the actual function, and the no-operation instruction in the delay slot for this jump.
In normal execution the hardware scheduler starts task execution at the beginning of the tick, with a wake up time in order to set the end-of-task identifier. In this example, the wake up time is 3 CPU cycles. The tasks within the tick period continue to execute back-to-back (with no overhead) until there are no more tasks pending and the processor is put into idle mode. Figure 8 shows this: task 1 800 and task 2 802 complete within the tick period 804 and, once completed, the processor is in the idle mode 806.
It is possible to extend the teachings of the present invention by using the end-of-task identifier to determine whether a task has finished on time. The possible ways in which this can be implemented are, in general terms: long task execution; issuing a backup task; or a long backup task.
As previously indicated, in many computing systems tasks may be given a Worst Case Execution Time (WCET), which is generally a predetermined value. This is important both to verify that the system is operating correctly and to ensure that the processor, along with the RTOS, has enough available computational time to meet the demands of all the tasks in the system. For hard real-time systems, a task that exceeds its WCET is considered an error. The invention endeavors to identify immediately, by use of a so-called task guardian, when a task has not completed by the time its WCET has been reached. The task guardian is also capable of providing fast-responding recovery mechanisms.
Using the end-of-task identifier, the task guardian mechanism provides a means to instantly detect a timing error for specific tasks within a resolution of one or several CPU cycles. Whilst a watchdog timer, as described in the prior art, is intended to measure the responsiveness of the system as a whole, the task guardian mechanism measures task execution times against the WCET of each task. The task guardian mechanism incorporates task specific information which corresponds more closely to the WCET figures used in system safety case documents. The task guardian will now be described in greater detail below. The task guardian unit includes four separate task specific parameters, as shown in table 1 below.
Task Variable | Description
GPT           | Guaranteed Processor execution Time (CPU cycles)
AOT           | Allowed Backup/Overrun execution Time (CPU cycles)
bVector       | Backup task address vector
tCount        | Task overrun count
Table 1: Task guardian task parameters
The guaranteed processor execution time (GPT) is the time allowed for a task to execute before a task overrun error is signaled as described above with reference to figures 6 and 7 or potentially in any other way. It is intended that the GPT be set to the WCET of the specified task and be measured in CPU cycles.
The allowed overrun time (AOT) is also measured in CPU cycles and has two functions, depending on whether a backup task is provided or not. In the scenario where a backup task is provided, the AOT becomes the WCET for the backup task. Should the backup task not complete before the end of the AOT, then the backup task itself is shut down. If a backup task has not been provided, then the AOT becomes the additional execution time during which a task can run, as long as there are no pending tasks in the system. This prevents the extended execution time from having an impact on the sequencing and progress of subsequent tasks.
If the AOT contains a value of zero, this will indicate that the original task is only allowed to execute to the end of the GPT period and that a backup task will not be run, even if one is provided.
The task guardian can also hold the backup task vector address to be executed when a task overrun is detected. A task overrun counter is provided so that a recovery mechanism can use the count value to action different recovery strategies after any errors have been recorded.
Depending on the mode of operation, when a task begins execution a counter can be loaded with the GPT or AOT value for the task. As the task executes, this counter is decremented in line with the CPU frequency until either the task completes or the counter reaches zero.
In the situation where the task completes early, the end-of-task identifier will signal the processor to immediately start execution of the next task. The end-of-task identifier signal can also be used to load the counter with next GPT or AOT value pertaining to the next task.
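The countdown behaviour can be sketched as follows. This is a simplified model: real hardware decrements a register each clock cycle, whereas here execution is collapsed into a cycle loop, and the cycle counts are invented for illustration.

```python
# Sketch of the task guardian countdown: the counter is loaded with the GPT
# (or AOT) and decremented once per CPU cycle while the task runs.
def run_task(exec_cycles, gpt):
    """Report whether the task completed before its counter reached zero."""
    counter = gpt
    for cycle in range(exec_cycles):
        counter -= 1
        if counter == 0 and cycle + 1 < exec_cycles:
            # Zero count with work remaining: a task overrun is signalled and
            # the processor is switched to the next (or backup) task.
            return "overrun"
    # The end-of-task identifier fired before the counter expired.
    return "completed"

assert run_task(exec_cycles=80, gpt=100) == "completed"
assert run_task(exec_cycles=120, gpt=100) == "overrun"
```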
If the counter reaches zero and the task has not completed, the processor can be instructed, without delay, to start loading the first instructions of the next task immediately. In this situation, the task change is instructed by the zero count value rather than by the end-of-task identifier. In some preferred implementations it may be beneficial to clear out any uncompleted instructions within the processor between tasks.
In order to shut down a task successfully, the stack pointer may need to be restored to a safe base value. The stack pointer can be set to the safe base value during the time when the end-of-task identifier is being signaled by the processor. If a backup task is included, the vector address of the backup task may be provided to the first stage rather than the vector to the next task. An example of this is shown in Figure 9, with the bVector address 900 loading into the pipeline with a backup task to be performed.
The task guardian offers a convenient method and system for detecting the overrun of tasks in real time. Having detected an error or failure in a task, the task can then be shut down quickly. In addition, the task guardian includes various recovery techniques and enables backup tasks to be inserted into the flow when required.
The task guardian can also play an important role in maintaining operation of the TTC system should there be any failure, whatever the cause. In addition, the task guardian can be used to identify errors.
Referring now to figure 10, an illustration of a possible architecture for the task guardian is shown. The task guardian 1000 includes a GPT memory 1002, an AOT memory 1004, and a bVector memory 1006. In addition, the task guardian includes a task guardian timer 1008 and a logic block 1010. The task guardian is an optional module which may be included in the scheduler 500 of figure 5, or may even be used with just the processor to detect and identify errors.
In the first instance, a task ID 1012 is entered into the three memory blocks: GPT, AOT and bVector. The task to execute 1014 enters the task guardian timer and proceeds to be processed by the processor. The GPT and AOT memories provide their values for the time this task should take to the task guardian timer. If the task finishes within the allotted time, the task is ended by stop 1 1016. If the task does not complete within the required time limits, a task overrun signal 1018 is generated. If the task overruns, the pending task is stopped by stop 2 1020 and the next task is started. In certain situations, instead of the next task commencing, the bVector memory will determine whether there are any backup tasks and either process those, or shut down the backup and return the processor to the following task or idle mode, based on logic module 1010.
A task is shut down in the scenario where a task runs up to its GPT and has not already completed. An example of this is shown in Figure 11, where task 1 1100 has not completed before the end of the GPT time; therefore the task is shut down and the next task is executed. In some implementations the shutdown procedure may incur some overhead, whilst other implementations can maintain instantaneous task switching by preloading the instructions for the next task.
If a backup task was provided, the hardware scheduler will execute the backup task only in the case where task 1 does not complete before the end of the GPT time. Normal back-to-back task execution continues after the backup task completes. This is shown in figure 12. Again, in some implementations there may be some overhead to shut down task 1.
Figure 13 is an illustration of the shutdown overhead incurred when switching in a backup task 1300.
It is possible that the backup task itself may have problems. In order to protect the system from such an occurrence, the backup task is only allowed to execute up to the provided allowed overrun time (AOT). In the situation where the backup task does not complete before the end of the AOT time, the backup task itself is shut down with a 3 CPU cycle overhead, after which normal task execution can continue. This is shown in figure 14.
As shown in figure 15, if a backup task is not provided, the allowed overrun time (AOT) value has another function. In this situation, a task may be allowed to exceed its GPT time and execute for the further time defined by the AOT value as long as there are no tasks pending.
A task will shut down if it does not complete before the end of the AOT, as shown in figure 16. An example flowchart of the task guardian mechanism and its method steps is shown in Figure 17.
The process starts at step 1700. The task guardian timer is started and the first task is executed at step 1702. If the GPT overflows at step 1704 the process proceeds in the direction of the arrow 1706. If the GPT does not overflow the process proceeds in the direction of arrow 1708. A determination is made as to whether or not a task has ended at step 1710. If the task has not ended, the process returns to step 1704. If the task has ended a determination is made as to whether or not the input FIFO is empty at step 1712. If the FIFO is empty, the process is stopped at step 1714. If the input FIFO is not empty, the task ID for the next task will be selected at step 1716 and the process will return to the start and proceed with step 1702. If at step 1704 the GPT has overflowed, a determination is made at step 1718 to determine if a backup task is available. If a backup task is available the process proceeds in direction 1720 and if the backup task is not available, the process continues in direction 1722. In the direction 1722, a determination is made as to whether an AOT exists at step 1724. If there is no AOT, the process proceeds directly to step 1712 as described above. If an AOT exists, determination as to whether the AOT has overflowed is made at step 1726. If this is the case, the flow proceeds directly to step 1712 as described above. If this is not the case, a determination is made as to whether or not an end task has been generated at step 1728. If an endtask has been generated, the process proceeds to step 1712 as above described. If there is no endtask, a determination is made as to whether the FIFO is empty at step 1730. If not, the process proceeds to step 1712 as above described. If the FIFO is empty, the process returns to step 1726.
Returning to step 1718 and continuing down the route 1720, a determination is made as to the task ID of the backup task in step 1732. Then processes 1724, 1726 and 1728 are repeated as described above. Once all the tasks in the input FIFO have been treated in this manner, the process stops and the processor becomes idle.
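The decision flow of Figure 17 can be condensed into a sketch for a single task slot. This is an interpretation of the flowchart, not a definitive implementation: the pending-task (FIFO) checks are omitted and all cycle counts are illustrative.

```python
# Condensed model of the Figure 17 task guardian flow for one task.
def guard_task(exec_cycles, gpt, aot, has_backup, backup_cycles=0):
    """Return the sequence of guardian events for the task."""
    if exec_cycles <= gpt:
        return ["task completed"]            # no GPT overflow
    events = ["GPT overrun"]
    if has_backup and aot > 0:
        # A backup task runs under the AOT budget (its own WCET).
        if backup_cycles <= aot:
            events.append("backup completed")
        else:
            events.append("backup shutdown")
    elif aot > 0:
        # No backup: the task may borrow up to AOT extra cycles,
        # provided (in the full flow) that no other tasks are pending.
        if exec_cycles <= gpt + aot:
            events.append("task completed in AOT")
        else:
            events.append("task shutdown")
    else:
        # AOT of zero: no extension and no backup, even if one is provided.
        events.append("task shutdown")
    return events

assert guard_task(80, 100, 20, False) == ["task completed"]
assert guard_task(130, 100, 50, False) == ["GPT overrun", "task completed in AOT"]
assert guard_task(130, 100, 20, True, backup_cycles=15) == ["GPT overrun", "backup completed"]
```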
In a further element of the present invention, the end-of-task identifier may be used to implement a post execution idle slot. This can be accomplished by means of a number of implementations. One implementation involves encapsulating tasks into fixed timed durations by placing the processor into idle mode. Another implementation involves reducing the jitter of subsequent tasks, and reducing jitter further when combined with a task guardian.
Figure 18 shows that the period of Task B (p1, p2, p3) is constantly changing because the execution time of Task A is changing. This can have a detrimental effect in hard real-time systems. In some systems it would be helpful to reduce the task start time jitter as much as possible.
A post execution idle slot helps to reduce jitter by maintaining a fixed duration between tasks. This fixed duration can be provided in the form of the WCET of a task as measured in CPU cycles. A further benefit of this feature is to see how the system would react in the situation where each task executed at its WCET.
One preferred implementation might be to use the existing WCET task times held within the GPT and AOT values for each task.
Figure 19 illustrates in schematic form a normal execution with a post execution idle slot 1900.
Depending on the mode of operation, when a task begins execution a counter may be loaded with the GPT or AOT value for the task. These will be the length of the task plus the post execution idle slot. As the task executes, the counter is decremented in line with the CPU frequency. If an end-of-task identifier is signaled, the processor will be placed into idle mode until the counter reaches zero. For implementations such as superscalar or pipelined processors, the first few instructions of the next task may be preloaded into the processor before entering idle mode.
Once the counter reaches zero, the processor is brought out of idle mode and the execution of the next task starts immediately.
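The jitter-removal property of the idle slot can be illustrated numerically (cycle counts invented for the example; in the described implementation the slot lengths would come from the GPT/AOT values):

```python
# Each task occupies a fixed WCET-sized slot; finishing early just converts
# the remaining cycles into post-execution idle, so start times never move.
def start_times(wcet_slots, actual_exec):
    """Return (start cycle of each task, idle padding of each slot)."""
    starts, idles, t = [], [], 0
    for wcet, actual in zip(wcet_slots, actual_exec):
        starts.append(t)
        idles.append(max(wcet - actual, 0))  # counter runs down to zero in idle
        t += wcet                            # next task starts on a fixed boundary
    return starts, idles

# Task B always starts at cycle 100, however long Task A actually ran:
assert start_times([100, 60], [70, 60]) == ([0, 100], [30, 0])
assert start_times([100, 60], [95, 60]) == ([0, 100], [5, 0])
```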
Two preferred implementations may include support for a task guardian strategy and use of a backup task.
It may be possible that a task does not complete before the end of its GPT time. Using the task guardian mechanism, the task can be shut down. When the counter reaches zero, the instructions for the next task can be loaded into the processor. Thus, the next task begins execution without any variation to its start time. This is shown in figure 20.
If a task does not complete before the end of its GPT time and a backup task exists, the backup task is inserted and includes a post execution idle slot defined by its AOT value. After the backup task finishes, normal execution continues, as is shown in figure 21.
It should be noted that in some implementations there may be an overhead associated with loading the processor with a backup task rather than the instructions of the next task. It may also be preferable always to incur the same task shutdown overhead, regardless of whether or not the previous task was shut down and however long the shutdown takes, in order to reduce start time jitter for the next task. This is shown in figure 22.
In the situation where the backup task does not complete before the end of the AOT time, the backup task is shut down and normal execution continues, as is shown in figure 23.
When using a hardware scheduler, the overhead to compute a task schedule is very small. Moreover, a benefit of a hardware scheduler is that any scheduling work can be carried out without interruption to the processor. Therefore, it is possible to have a very high tick rate, in the order of microseconds, at which tasks are placed into a ready queue. The benefit of this is that the time frames in which tasks can be scheduled can be very accurate. This can help to provide spacing between tasks in order to reduce task start time jitter.
A person skilled in the art will understand that some or all of the functional entities as well as the processes themselves may be embodied in software, or one or more software-enabled modules and/or devices or in any combination thereof. The software may operate on any appropriate computer or other machine. The operation of the invention provides a number of transformations such as monitoring invalidation messages and re-sending, as necessary.
The system and method described above can be used in a multitude of products, including avionics systems and applications, industrial and non-industrial businesses and applications, medical and scientific environments, general computers, phones, smart phones, etc. It will be appreciated that this invention may be varied in many different ways and still remain within the intended scope of the invention as defined in the claims.

Claims

1. A system for identifying the end of a task running thereon, wherein the system includes a scheduler for use with a time-triggered pipeline processor to control the passage of one or more tasks through one or more stages of the processor, the processor further being adapted to generate, overload and send a return-to-caller instruction to the scheduler when a first task reaches conclusion, wherein the scheduler receives the overloaded return-to-caller instruction to conclude that the first task in progress has ended and activates a second task to be read into the processor.
2. The system of claim 1, further comprising the processor, wherein the processor comprises a plurality of stages for carrying out different functions, the processor being adapted to pass each task sequentially from one stage to the next under the control of the scheduler.
3. The system of claim 1 or claim 2, wherein the processor comprises a fetch stage where the first instruction of a task is read; a decode stage for decoding the first instruction; an execute stage which processes the decoded first instruction; a memory stage for accessing a memory; and a write stage where the processed first instruction is written to a register file.
4. The system of claim 3, wherein the return-to-caller instruction is created in a first stage of the processor and sent to the scheduler from a second, later stage of the processor.
5. The system of claim 3, wherein the return-to-caller instruction is created in a first stage of the processor and sent to the scheduler from the same first stage of the processor.
6. The system of any preceding claim, wherein, when the return-to-caller instruction is created and sent before the first task has passed through all stages of the processor, the second task commences before the first task has passed through all stages of the processor and the two tasks run back to back.
7. The system of any preceding claim, wherein the scheduler further comprises a guardian mechanism for measuring the time taken for a task to be completed.
8. The system of claim 7, further comprising comparing the time taken for a task to complete with a predetermined value.
9. The system of claim 8, wherein the guardian mechanism includes a plurality of memory stages and a timer module, and wherein at least one of the plurality of memory stages generates said predetermined time for a certain task to be completed and communicates this to the timer, to enable the timer to compare the time taken for a task to complete with the predetermined time.
10. The system of claim 9, wherein at least one of the memory stages is adapted to generate a backup task and, if required, the timer selects the backup task to replace the currently running task.
11. The system of any of claims 7 to 10, further comprising introducing a post execution idle at the end of a completed task to ensure that all tasks are completed within the same time period.
12. A scheduler for use in a system of the type defined in any of claims 1 to 11.
13. A processor for use in a system of the type defined in any of claims 2 to 11.
14. A guardian mechanism for use in a system of the type defined in any of claims 7 to 11.
15. A method for identifying the end of a task running on a system, wherein the system includes a scheduler for use with a time-triggered co-operative pipeline processor to control the passage of one or more tasks through the processor, wherein the processor comprises a plurality of stages for carrying out different functions, the processor being adapted to pass each task sequentially from one stage to the next under the control of the scheduler; the processor further being adapted to generate, overload and send a return-to-caller instruction to the scheduler when a first task reaches conclusion, wherein the method comprises receiving via the scheduler the overloaded return-to-caller instruction to conclude that the first task in progress has ended and activating a second task to be read into the processor.
16. The method of claim 15, further comprising carrying out steps on the processor.
17. The method of claim 16, further comprising, via a processor: fetching the first instruction of a task; decoding the first instruction; processing the decoded first instruction; accessing a memory; and writing the processed first instruction to a register file.
18. The method of claim 17, further comprising creating the return-to-caller instruction in a first stage of the processor and sending the return-to-caller instruction to the scheduler from a second, later stage of the processor.
19. The method of claim 17, further comprising creating the return-to-caller instruction in a first stage of the processor and sending the return-to-caller instruction to the scheduler from the same first stage of the processor.
20. The method of any of claims 15 to 19, wherein creating and sending the return-to-caller instruction before the first task has passed through all stages of the processor, causes the second task to commence before the first task has passed through all stages of the processor thereby running the two tasks back to back.
21 . The method of any of claims 15 to 20, further comprising measuring the time taken for a task to be completed via a guardian mechanism.
22. The method of claim 21, further comprising comparing the time taken for a task to complete with a predetermined value.
23. The method of claim 22, wherein the guardian mechanism includes a plurality of memory stages and a timer module, and wherein the method comprises generating said predetermined time for a certain task to be completed and communicating this to the timer, and wherein the method further comprises comparing the time taken for a task to complete with the predetermined time.
24. The method of claim 23, further comprising generating a backup task via at least one of the memory stages and, if required, selecting the backup task to replace the currently running task.
25. The method of any of claims 15 to 24, further comprising introducing a post execution idle at the end of a completed task to ensure that all tasks are completed within the same time period.
26. A computer program comprising instructions for carrying out the steps of the method according to any one of claims 15 to 25.
27. A product including a system in accordance with any of claims 1 to 11.
PCT/GB2011/052302 2010-11-24 2011-11-23 A method and system for identifying the end of a task and for notifying a hardware scheduler thereof WO2012069830A1 (en)


Publications (1)
Publication Number: WO2012069830A1 (published 2012-05-31)
Family ID: 45478354



Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6212542B1 (en) * 1996-12-16 2001-04-03 International Business Machines Corporation Method and system for executing a program within a multiscalar processor by processing linked thread descriptors
FR2920557A1 (en) * 2007-12-21 2009-03-06 Thomson Licensing Sas Processor for CPU, has hardware sequencer managing running of tasks and providing instruction for giving control to sequencer at end of tasks, where instruction sets program with base address relative to next task to program counter

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6976155B2 (en) * 2001-06-12 2005-12-13 Intel Corporation Method and apparatus for communicating between processing entities in a multi-processor
ATE529808T1 (en) * 2007-02-07 2011-11-15 Bosch Gmbh Robert MANAGEMENT MODULE, MANUFACTURER AND CONSUMER COMPUTER, ARRANGEMENT THEREOF AND METHOD FOR COMMUNICATION BETWEEN COMPUTERS VIA SHARED MEMORY

Patent Citations (2)

Publication number Priority date Publication date Assignee Title
US6212542B1 (en) * 1996-12-16 2001-04-03 International Business Machines Corporation Method and system for executing a program within a multiscalar processor by processing linked thread descriptors
FR2920557A1 (en) * 2007-12-21 2009-03-06 Thomson Licensing Sas Processor for CPU, has hardware sequencer managing running of tasks and providing instruction for returning control to sequencer at end of task, where instruction loads program counter with base address of next task

Non-Patent Citations (2)

Title
BOLYCHEVSKY A ET AL: "Dynamic scheduling in RISC architectures", IEE PROCEEDINGS: COMPUTERS AND DIGITAL TECHNIQUES, IEE, GB, vol. 143, no. 5, 24 September 1996 (1996-09-24), pages 309 - 317, XP006006209, ISSN: 1350-2387, DOI: 10.1049/IP-CDT:19960788 *
CHARLES PRICE: "MIPS IV Instruction Set - Revision 3.2", 1 September 1995 (1995-09-01), XP055020870, Retrieved from the Internet <URL:http://www.weblearn.hs-bremen.de/risse/RST/docs/MIPS/mips-isa.pdf> [retrieved on 20120305] *

Cited By (2)

Publication number Priority date Publication date Assignee Title
CN111984384A (en) * 2020-08-24 2020-11-24 Beijing Si-Tech Information Technology Co., Ltd. Scheduling method and related device for coexistence of daemon and timed jobs
CN111984384B (en) * 2020-08-24 2024-01-05 Beijing Si-Tech Information Technology Co., Ltd. Scheduling method and related device for coexistence of daemon and timed jobs

Also Published As

Publication number Publication date
WO2012069831A1 (en) 2012-05-31

Similar Documents

Publication Publication Date Title
JP5411587B2 (en) Multi-thread execution device and multi-thread execution method
Stewart et al. Mechanisms for detecting and handling timing errors
JP5611756B2 (en) Program flow control
Kreuzinger et al. Real-time event-handling and scheduling on a multithreaded Java microcontroller
US7395418B1 (en) Using a transactional execution mechanism to free up processor resources used by a busy-waiting thread
US7043729B2 (en) Reducing interrupt latency while polling
WO2012069830A1 (en) A method and system for identifying the end of a task and for notifying a hardware scheduler thereof
EP0482200B1 (en) Interrupt processing system
US20140089646A1 (en) Processor with interruptable instruction execution
US20080263552A1 (en) Multithread processor and method of synchronization operations among threads to be used in same
EP1853998A1 (en) Stop waiting for source operand when conditional instruction will not execute
US20050257224A1 (en) Processor with instruction-based interrupt handling
Hughes et al. Reducing the impact of task overruns in resource-constrained embedded systems in which a time-triggered software architecture is employed
US11635966B2 (en) Pausing execution of a first machine code instruction with injection of a second machine code instruction in a processor
US5761492A (en) Method and apparatus for uniform and efficient handling of multiple precise events in a processor by including event commands in the instruction set
JP2005521937A (en) Context switching method and apparatus in computer operating system
US20120204184A1 (en) Simulation apparatus, method, and computer-readable recording medium
US11847017B2 (en) Method for determining a reset cause of an embedded controller for a vehicle and an embedded controller for a vehicle to which the method is applied
JP2005100017A (en) Processor simulator, interruption delay count program and simulation method of processor
KR20180126518A (en) Vector instruction processing
Strnadel Statistical model checking of processor systems in various interrupt scenarios
Huang et al. A denotational model for interrupt-driven programs
Rusu-Banu et al. Formal Description Of Time Management In Real-Time Operating Systems
US9612834B2 (en) Processor with variable instruction atomicity
Starr et al. Model Execution Domain

Legal Events

Date Code Title Description
121 Ep: the EPO has been informed by WIPO that EP was designated in this application

Ref document number: 11808270

Country of ref document: EP

Kind code of ref document: A1

DPE1 Request for preliminary examination filed after expiration of 19th month from priority date (PCT application filed from 20040101)
NENP Non-entry into the national phase

Ref country code: DE

122 Ep: PCT application non-entry in European phase

Ref document number: 11808270

Country of ref document: EP

Kind code of ref document: A1