WO2020005058A1 - Power interrupt immune software execution - Google Patents

Power interrupt immune software execution

Info

Publication number
WO2020005058A1
Authority
WO
WIPO (PCT)
Application number
PCT/NL2019/050388
Other languages
French (fr)
Inventor
Kasim Sinan Yildirim
Przemyslaw PAWELCZAK
Amjad Yousef MAJID
Original Assignee
Technische Universiteit Delft
Application filed by Technische Universiteit Delft
Publication of WO2020005058A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00 Arrangements for program control, e.g. control units
    • G06F9/06 Arrangements for program control using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46 Multiprogramming arrangements
    • G06F9/48 Program initiating; Program switching, e.g. by interrupt
    • G06F9/4806 Task transfer initiation or dispatching
    • G06F9/4812 Task transfer initiation or dispatching by interrupt, e.g. masked
    • G06F9/4843 Task transfer initiation or dispatching by program, e.g. task dispatcher, supervisor, operating system
    • G06F9/4881 Scheduling strategies for dispatcher, e.g. round robin, multi-level priority queues
    • G06F9/4893 Scheduling strategies for dispatcher taking into account power or heat criteria
    • G06F9/54 Interprogram communication
    • G06F9/542 Event management; Broadcasting; Multicasting; Notifications
    • G06F2209/00 Indexing scheme relating to G06F9/00
    • G06F2209/48 Indexing scheme relating to G06F9/48
    • G06F2209/486 Scheduler internals
    • G06F2209/54 Indexing scheme relating to G06F9/54
    • G06F2209/543 Local
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00 Energy efficient computing, e.g. low power processors, power management or thermal management

Definitions

  • the control flow of the application is defined by a ‘signal’ keyword followed by an idempotent task identifier as well as the events generated by the interrupts.
  • the task signalling operation immediately returns without blocking the task execution; the InK scheduler executes the signalled task later.
  • the idempotent tasks run-to-completion and they cannot pre-empt each other but they can be pre-empted by hardware interrupts.
  • Hardware interrupts may register events to the system that will be handled by the event handlers.
  • Event handlers are idempotent, just like tasks; event handlers and tasks cannot pre-empt one another.
  • The keywords introduced by the InK run-time system are ‘InK_task’ (declaration of an idempotent task) and ‘signal’ (signal activating a task). This complies with the features of the main embodiment of the present invention, i.e. the steps being executed by the processor 11 during run-time:
  • In order to ensure reactivity and adaptability, a state machine is implemented which maintains a scheduler-state variable in the non-volatile memory part 13 so as to ensure forward progress despite power failures.
  • the scheduler selects the task thread of highest priority and executes the next task in the control flow of the selected thread.
  • The scheduler (i) initializes the task privatization buffer via init; (ii) for the entry tasks, it locks the event data that triggered thread execution via lock_event (to eliminate data races between ISRs and tasks); (iii) it executes the task via run; (iv) for the entry tasks it releases the event via release; (v) it commits the task's modifications by swapping buffer pointers; (vi) it suspends the thread if there are no dedicated events or remaining tasks. If there is no thread in the ready state, the scheduler puts the processor 11 into low-power mode, saving energy and waiting for an interrupt for activation. The state machine enables progress of computation since it continues from the state in which it was interrupted.
  • Tasks inside task threads and ISRs can activate and deactivate other task threads and change control flow dynamically. Since the InK scheduler alternates between the aforementioned states, it can switch execution to the high-priority thread: first, the kernel awaits the completion of the interrupted current task inside the lower priority thread; then it starts executing the entry task of the high-priority thread.
  • Each task thread in InK has a dedicated non-volatile event queue that holds the events generated by ISRs. When any event is generated, the corresponding task thread is activated so that the thread execution will start from its entry task.
  • The event data is only accessible by the entry task of the task thread: the entry task locks the event data to eliminate data races between ISRs and tasks. The entry task reads the event data and modifies the necessary task-shared variables, and then the event lock is released so that the event data will be removed from the event queue.
  • the pre-processing of an interrupt is performed by the corresponding ISR. Then, the rest of the computation is done by a task thread.
  • The corresponding ISR delivers the received or generated data to the upper layers of the system and notifies the task thread.
  • Event queues are ring buffers dedicated to each task thread. They form an intermediate layer that prevents race conditions and preserves event data consistency by preventing ISRs from modifying task-shared data directly.
  • InK removes the event that has the oldest timestamp from the event-queue to increase the probability of having fresh data.
  • The task thread is notified by creating an event holding a pointer to the ISR data and its size, and a timestamp indicating the time at which the interrupt fired.
  • The corresponding task thread is notified by passing a pointer to the event structure, so that the event will be placed in the event queue of the given task thread atomically.
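  • By way of illustration only, the event structure and the per-thread ring-buffer event queue described above could be sketched as follows; the type names, field names and queue length are assumptions made for this sketch and are not taken from the patent:

    #include <stdint.h>
    #include <stddef.h>

    #define EVT_QUEUE_LEN 4                 /* per-thread ring buffer capacity (assumed) */

    /* Event created by an ISR: pointer to the ISR data, its size, and a timestamp. */
    typedef struct {
        void     *data;                     /* data produced by the ISR                  */
        size_t    size;                     /* size of that data in bytes                */
        uint32_t  timestamp;                /* time at which the interrupt fired         */
    } event_t;

    /* Ring buffer dedicated to one task thread, kept in non-volatile memory. */
    typedef struct {
        event_t  slots[EVT_QUEUE_LEN];
        uint8_t  head;                      /* oldest event (next to be consumed)        */
        uint8_t  tail;                      /* next free slot                            */
        uint8_t  count;
    } evt_queue_t;

    /* Called with interrupts disabled: place an event in the queue of the target
     * thread so that the thread is activated. If the queue is full, the oldest
     * event is dropped, which keeps the freshest data available. */
    static void post_event(evt_queue_t *q, void *data, size_t size, uint32_t now)
    {
        if (q->count == EVT_QUEUE_LEN) {
            q->head = (uint8_t)((q->head + 1) % EVT_QUEUE_LEN);
            q->count--;
        }
        q->slots[q->tail].data = data;
        q->slots[q->tail].size = size;
        q->slots[q->tail].timestamp = now;
        q->tail = (uint8_t)((q->tail + 1) % EVT_QUEUE_LEN);
        q->count++;
    }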
  • Run-to-completion semantics allow task-based concurrency where all tasks use the same stack without affecting the local variables of the other tasks, leading to less memory overhead.
  • A timer sub-system may be provided using an external persistent timekeeper that keeps track of time across power failures: (i) when the processor 11 is running, its internal timers are used to measure elapsed time; (ii) upon a power failure, the external timekeeper keeps running and provides the elapsed time until recovery.
  • The timer system implements a timer wheel algorithm to provide two types of timers for the task threads, i.e. expiration timers and one-shot/periodic timers.
  • Task threads may set expiration timers in order to enable timely execution of task threads and stop unnecessary and outdated computation if necessary.
  • Data read from a sensor should be processed within a time constraint; if the computation exceeds the required deadline, the outputs of the computation are no longer useful.
  • When an expiration timer fires, the corresponding task thread is evicted so that it no longer consumes system resources.
  • One-shot and periodic timers may be used in order to schedule events in the future and generate periodic events, e.g. activating a task thread at a given frequency. Since most sensing applications are periodic, these timers are the foundation of task threads that perform periodic sensing. They build on the persistent timekeeper to keep time across power failures. A usage sketch is given below.
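  • Purely as an illustration of how these timers might be used by an application, the following sketch configures a periodic timer and an expiration timer; the function names ink_timer_periodic and ink_timer_expire, the thread identifiers and the 500 ms deadline are placeholders, not the documented InK API:

    #include <stdint.h>

    /* Placeholder prototypes for the timer services described above. */
    void ink_timer_periodic(uint8_t thread_id, uint32_t period_ms);   /* one-shot/periodic timer */
    void ink_timer_expire(uint8_t thread_id, uint32_t deadline_ms);   /* expiration timer        */

    enum { TH2_ACTIVITY = 2, TH3_TEMPERATURE = 3 };                   /* thread ids (assumed)    */

    /* Re-activate the temperature thread every 5 s (cf. the timer(5s) event of Fig. 2C),
     * and evict the activity thread if it has not completed 500 ms after being triggered. */
    void configure_timers(void)
    {
        ink_timer_periodic(TH3_TEMPERATURE, 5000);
        ink_timer_expire(TH2_ACTIVITY, 500);
    }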
  • This process is performed by the InK compiler, which recognizes the task-shared variables and the InK_task declarations and provides the necessary task management code in order to separate the inputs of the task from its outputs, so that Write-After-Read dependencies do not create any side-effects and the task can be re-executed safely after each power failure while preserving its semantic correctness. It is noted that the InK compiler hides all of these details from the programmer.
  • tasks have local and shared data.
  • the local data of tasks are allocated in volatile memory 12.
  • the shared data among the tasks are defined in the global scope and the InK compiler allocates them in non-volatile memory 13 by creating two persistent versions, i.e. static versioning via double buffering.
  • Access and modification of shared data is managed by the assistance of the InK compiler so that the shared variables are not corrupted due to data races among the tasks and power failures.
  • the InK compiler creates a local copy of each shared variable that a task accesses so that the task modifies only its local copies.
  • the local copies of the shared data are flushed to their original locations in non-volatile memory 13 atomically, using a commit operation of two phases, namely two-phase commit.
  • This is an improvement over prior art implementations, e.g. those employing static multi-versioning via the concept of channels, which has several disadvantages: (i) channels create a memory burden due to multi-versioning, especially for large amounts of task-shared data; (ii) they cause run-time overhead when selecting the channel that contains the most recently modified task-shared data; and (iii) the programmer needs to marshal data through channels explicitly, which is too demanding when developing applications.
  • the task-based run-time system is arranged so that during execution of a task, task-shared variables are stored in the volatile memory part 12, and upon conclusion of a task, task-shared variables are copied to the non-volatile memory part 13.
  • Copying task-shared variables to the non-volatile memory part 13 may comprise a two-phase commit operation, a first phase being storing the task-shared variables in a first part of the double buffering static versioning and transitioning the task state to the task done state, and a second phase being committing the task-shared variables in the first part of the double buffering static versioning to the associated second part of the double buffering static versioning and transitioning the task state to the task finished state.
  • Versioning the task-shared variables together with the task state management allow a task re-execution without unplanned consequences.
  • the tasks in InK can be in three states: TASK_READY, TASK_DONE and TASK_FINISHED.
  • When a task is being executed, it is initially in the TASK_READY state.
  • the detection of the task-shared variables that the task operates on is done automatically by the InK compiler.
  • Line 8 creates a local version of the shared value variable. After state checking (Lines 10-13), the value of the shared data value is read from its original location _per_vars[0].value and copied to the local variable value (Line 15).
  • the task starts manipulating the local copies of the persistent task-shared variables.
  • The local copies of the task-shared data are first flushed to their temporary persistent locations in _per_vars[1] (Line 20) and the task transitions to the TASK_DONE state (Line 21); this is phase 1 of the two-phase commit operation.
  • The modified values in _per_vars[1] are committed to their original locations _per_vars[0], which then hold the modified values of the task-shared data that can be consumed by the next task (Lines 22-23); this is phase 2 of the two-phase commit operation.
  • The task transitions to the TASK_FINISHED state so that the scheduler can pick the next task and execute it.
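  • As a rough illustration of the pattern described above (the _per_vars[0]/_per_vars[1] layout, the privatization step and the two-phase commit), compiler-instrumented task code might look like the following sketch; the FRAM section name, the shared_t structure and the task body are assumptions made only for this illustration:

    #include <stdint.h>

    enum task_state_t { TASK_READY, TASK_DONE, TASK_FINISHED };

    /* Double-buffered task-shared data in non-volatile memory:
     * _per_vars[0] holds the committed originals, _per_vars[1] the temporary
     * copies written by the current task. The section name ".persistent" is a
     * hypothetical way of placing the data in FRAM. */
    typedef struct { uint8_t value; } shared_t;
    static shared_t _per_vars[2] __attribute__((section(".persistent")));
    static volatile enum task_state_t task_state __attribute__((section(".persistent"))) = TASK_READY;

    /* Sketch of one task body after instrumentation by the compiler. */
    void task_increment(void)
    {
        uint8_t value;                               /* local version of the shared variable     */

        if (task_state == TASK_READY) {
            value = _per_vars[0].value;              /* privatize: read from the originals       */

            value++;                                 /* ...the original task body operates on    */
                                                     /* the local copy only...                   */

            _per_vars[1].value = value;              /* phase 1: flush to the temporary location */
            task_state = TASK_DONE;
        }
        if (task_state == TASK_DONE) {
            _per_vars[0].value = _per_vars[1].value; /* phase 2: commit to the originals         */
            task_state = TASK_FINISHED;              /* the scheduler can now pick the next task */
        }
    }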
  • the task-based run-time system is arranged to execute task state management by storing the task state in the non-volatile memory part 13 as one of task ready state, task done state, or task finished state.
  • Versioning and two-phase committing a large array may be wasteful at run-time when only an array slice is updated and this might create severe performance issues. As a programming practice, it is then advisable to avoid using large arrays in InK source files.
  • An efficient aspect of the present invention task-based run-time system is the power failure immune scheduler.
  • In an InK program (application), the InK scheduler manages the tasks and the event handlers activated by the ISRs. Providing these services subject to power failures, while addressing the aforementioned problems P1-P5, is non-trivial and is explained below.
  • The processor 11 is further arranged to use a (persistent) scheduler to determine a control-flow of the event-based application by considering requests from an idempotent task and events from hardware interrupts.
  • the InK scheduler implements a non-pre-emptive FIFO task scheduling algorithm (chosen for the ease of implementation) in a power failure-immune manner.
  • A non-pre-emptive scheduling policy simplifies concurrency management among tasks: tasks cannot pre-empt each other and break the consistency of the shared variables (addressing P1, P2 and P3).
  • The main loop of the InK scheduler (Algorithm 1, an exemplary InK scheduler loop) is structured as follows:
  • the scheduler implements an infinite loop that is composed of four states: READY, FINISHED, EVENT and LOOP.
  • the state of the scheduler is stored in the persistent variable scheduler_state (Line 1); its initial value is READY.
  • the persistent variable task_state (Line 2) holds the state of the current task being executed.
  • At each loop iteration, the InK scheduler first runs an event handler (if any) that has previously been activated by an ISR (Line 5), and then a task (if any) (Line 6). This scheduling policy is used so that tasks are not blocked for a long time while the event handlers run. However, it might be desirable to run all event handlers first and only then the tasks. It is emphasized that event handlers, just like tasks, must be idempotent blocks that can be re-executed safely upon a power failure. If there is no event or task to be executed, the scheduler puts the microcontroller (processor 11) into low-power mode and waits for an interrupt to re-activate the scheduler loop (Line 8), overcoming P5.
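  • As an illustration of the loop just described, a sketch that is consistent with the line references above (scheduler_state on Line 1, task_state on Line 2, the event handler on Line 5, the task on Line 6 and the low-power wait on Line 8) might look as follows; the helper functions and the PERSISTENT placement macro are assumptions, not text from the patent:

    enum sched_state_t { READY, FINISHED, EVENT, LOOP };
    enum task_state_t  { TASK_READY, TASK_DONE, TASK_FINISHED };

    /* PERSISTENT marks data placed in non-volatile memory (FRAM); the concrete
     * mechanism is toolchain specific and only hinted at here. */
    #define PERSISTENT __attribute__((section(".persistent")))

    PERSISTENT static enum sched_state_t scheduler_state = READY;       /* Line 1 */
    PERSISTENT static enum task_state_t  task_state      = TASK_READY;  /* Line 2 */

    /* Assumed helpers: each returns non-zero if it actually performed some work. */
    int  handle_event(void);
    int  run_task(void);
    void enter_low_power_mode(void);

    void ink_scheduler(void)
    {
        for (;;) {
            int did_work = 0;
            did_work |= handle_event();   /* one pending event handler, if any (Line 5) */
            did_work |= run_task();       /* next task in the queue, if any    (Line 6) */
            if (!did_work)
                enter_low_power_mode();   /* sleep until an interrupt arrives  (Line 8) */
        }
    }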
  • the InK scheduler manages tasks by using a queue as a fundamental data structure to hold the addresses of the tasks to be executed.
  • any modification to the system-wide shared data structures can be done by disabling interrupts so that race conditions are eliminated.
  • Power failures cannot be “disabled” and might occur at any time. Therefore, attention should be paid to the queue operations, since power losses during pop and push might corrupt the task queue and in turn lead to critical system faults.
  • the InK scheduler implements double-buffering by maintaining two persistent task queues: (i) main task queue main_queue and (ii) temporary task queue tmp_queue, both of fixed and same size.
  • Algorithm 2 presents the operations on these queues, handled by run_task() in Line 6 of Algorithm 1.
  • the non-pre-emptive FIFO task scheduling algorithm is implemented using a main task queue (holding addresses of tasks to be executed) and a temporary task queue, both of fixed and same size and stored in the non-volatile memory part (13).
  • Tasks might signal other tasks, which is transformed into the InK_scheduler_post_task call by the InK compiler. This call should push the signalled task on the main task queue so that it will be executed later. However, for the sake of task queue consistency, the signalled task is first pushed on the tmp_queue. If the execution of a task has not finished yet and the scheduler has not transitioned to the FINISHED state, main_queue remains unchanged and a power loss leads to a rollback of the tmp_queue, thanks to the copy-from-main_queue operation at system reboot. A sketch of this double buffering of the task queues is given below.
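  • As an illustration of the double-buffered task queues described above and of the rollback rule described further below, a hedged sketch is given here; the queue capacity, the helper names and the placement of the queues are assumptions made for this illustration:

    #include <stdint.h>
    #include <string.h>

    #define TASK_Q_LEN 8                         /* fixed queue capacity (assumed)           */
    typedef void (*task_fn)(void);

    typedef struct {
        task_fn items[TASK_Q_LEN];               /* addresses of the tasks to be executed    */
        uint8_t head, tail, count;
    } task_queue_t;

    /* Both queues are kept in non-volatile memory (placement not shown in this sketch). */
    static task_queue_t main_queue;              /* committed view of the pending tasks      */
    static task_queue_t tmp_queue;               /* working copy, modified while a task runs */

    /* "signal task" is transformed into this call: push onto tmp_queue only, so that
     * main_queue stays untouched until the current task has finished. */
    void InK_scheduler_post_task(task_fn t)
    {
        tmp_queue.items[tmp_queue.tail] = t;
        tmp_queue.tail = (uint8_t)((tmp_queue.tail + 1) % TASK_Q_LEN);
        tmp_queue.count++;
    }

    /* Once the scheduler reaches the FINISHED state, the working copy becomes the
     * committed view; re-executing this copy after a power failure is harmless. */
    void commit_task_queue(void)
    {
        memcpy(&main_queue, &tmp_queue, sizeof main_queue);
    }

    /* Recovery at reboot: roll tmp_queue back from main_queue only if the interrupted
     * task had not completed (TASK_READY) and the scheduler was in its READY state;
     * otherwise tmp_queue already holds the values that still have to be committed. */
    void recover_task_queues(int task_was_ready, int scheduler_was_ready)
    {
        if (task_was_ready && scheduler_was_ready)
            memcpy(&tmp_queue, &main_queue, sizeof tmp_queue);
    }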
  • the corresponding ISR might require to deliver the received or generated data to the upper layers of the system.
  • the notification is done by creating an event holding a pointer to the ISR data and the event handler to be called.
  • the event is registered via register_event system call by indicating if the event is volatile or nonvolatile.
  • The InK scheduler maintains two extra queues for event registration: a persistent event queue per_queue (maintained in non-volatile memory 13), so that the registered events are not lost upon a power failure, and a volatile event queue vol_queue (maintained in volatile memory 12), for which a power failure leads to the loss of all registered events.
  • If the scheduler state is READY and no active tasks are running, the scheduler runs handle_event() (Line 4). In the presently presented exemplary embodiment, handling events in per_queue has been given higher priority. Therefore, handle_event() first pops an event from per_queue. If this queue is empty, an event from vol_queue is popped. It should be noted that these pop operations do not modify the contents of these queues unless they are committed by the commit_event() function. The popped event is executed by calling the event handler with the ISR data as an argument. Then the scheduler transitions to the EVENT state so that the event handling will be committed. A power failure during these steps will not be fatal, since event handlers are idempotent and the event queues are not modified.
  • In the EVENT state, the event handling is committed by commit_event() (Line 7). Since event handlers can also signal tasks using InK_scheduler_post_task, which changes the tmp_queue, these changes should also be committed to the main_queue, which is done by calling commit() (Line 8). Then, the scheduler transitions to the READY state. Re-execution of commit_event() and commit() at a power loss will keep the system consistent, since these functions are idempotent.
  • At reboot, tmp_queue is rolled back so that tasks are re-executed, since the task queue operations are performed first on this queue. However, if the task state is TASK_DONE or TASK_FINISHED, tmp_queue already holds the value that should be committed to the main_queue, i.e. a rollback is to be avoided. Therefore, the main_queue is copied to tmp_queue if and only if the task state is TASK_READY and the scheduler is in the READY state.

Abstract

A task-based run-time system for executing an event-based application on an intermittently-powered embedded system, with a processor (11), a volatile memory part (12), and a non-volatile memory part (13). The event-based application comprises (idempotent) tasks, task shared variables and event handlers. The processor (11) is arranged to allocate the task shared variables in the non-volatile memory part (13) and, during execution of a task, use static versioning of the task shared variables with double buffering, execute the tasks in a run-to-completion manner, and execute event handlers as a next-to-be-executed task during execution of a task.

Description

Power interrupt immune software execution
Field of the invention
The present invention relates to a run-time system for executing an application on intermittently powered devices, such as found in sensor networks.
Background art
Wireless sensor network operating systems are known, e.g. TinyOS and Contiki, which are event-driven and dynamic, but not suited for application in devices having a high chance of power interruption. Current approaches to deal with power interruptions preserve forward progress using automatically inserted checkpoints, or require program refactoring into task-based models. These programming models are static: they cannot respond or adapt to a changing environment or hardware interrupts, and they are polling-based, meaning they waste energy actively looking for changes instead of passively waiting for an interrupt.
Summary of the invention
The present invention seeks to provide a run-time system for executing an application on a device, able to deal with the unpredictable nature of possible power interrupts. Sensor networks are inherently event-driven, responding to changes in the environment, timer and hardware interrupts, and communications to determine the next task to complete, or the data to collect.
According to the present invention, a task-based run-time system is provided for executing an event-based application on an intermittently-powered embedded system, the intermittently-powered embedded system comprising a processor, a volatile (main) memory part, and a non-volatile (secondary) memory part, wherein the event-based application comprises idempotent tasks, task shared variables and event handlers. The processor is arranged to allocate the task shared variables in the non-volatile memory part and, during execution of an idempotent task, use static versioning of the task shared variables with double buffering, execute the idempotent tasks in a run-to-completion manner, and execute event handlers as a next-to-be-executed idempotent task during execution of an idempotent task. In other words, a task-based run-time system is provided with a specific architecture arranged to execute an event-based application under a specifically adapted run-time environment. Compared to state of the art embedded system kernels (such as Chain, see the article “Chain: Tasks and Channels for Reliable Intermittent Programs” by Alexei Colin and Brandon Lucia, OOPSLA 2016) the present invention embodiments provide a better run-time performance, as well as less memory overhead.
Short description of drawings
The present invention will be discussed in more detail below, with reference to the attached drawings, in which
Fig. 1 shows a schematic diagram of a generic device on which the present invention embodiments may be implemented;
Fig. 2A-C show various exemplary task threads with idempotent tasks; and
Fig. 2D shows a timing diagram of energy state over execution time.
Description of embodiments
The present invention embodiments relate to a task-based run-time system for executing an event-based application on an intermittently-powered embedded system. Intermittently- powered embedded systems are e.g. found in sensor networks. In such sensor networks the sensors are not equipped with a power supply allowing continuous operation, e.g. without a battery, and rely on harvested energy.
Present day wireless sensor network operating systems are mostly event-driven and dynamic, and it is noted that recent work in intermittent computing is neither event-driven nor dynamic. Current approaches preserve forward progress using automatically inserted checkpoints, or require program refactoring into task based models. These programming models are static, and cannot respond or adapt to a changing environment or hardware interrupts, and are polling-based. This has the result that they waste energy actively looking for changes, instead of passively waiting for an interrupt. Sensor networks are inherently event-driven; responding to changes in the environment, timer and hardware interrupts, and communications to determine the next task to complete, or the data to collect. Despite this insight, a reactive runtime and programming model for battery-less sensors and similar intermittently-powered devices that is able to execute relatively complex applications is still far away from realization.
Intermittently powered devices do not know if energy will be available in the (near) future, so it is customary in prior art type of intermittently powered devices that the device is arranged to greedily consume available energy at the cost of possibly missing events or data of interest to an application being executed within the device. These missed events are missed opportunities for higher quality sensing outcomes. It is desired by developers to have detailed control of their application despite the intermittency of power supply.
Bringing the event-driven paradigm to battery-less sensing devices should enable this control: State-of-the-art battery-less sensor device programming models fail to support event-driven applications since they mask relevant features of wireless sensor operation like external event handling, timekeeping, and energy management. Without these features, developers are unable to arrange the (embedded) application to schedule tasks or perform periodic sensing. Moreover, these models are rigid, and do not allow for in-situ adaptation based on changing energy availability or data. Event-driven sensing is the next step for battery-less operation in intermittently-powered embedded systems such as wireless sensors, but significant challenges exist for operation.
Fig. 1 shows a schematic diagram of an intermittently-powered embedded system wherein the present invention embodiments may be implemented. The intermittently-powered embedded system 10 comprises a number of components, known as such to the person skilled in the art of embedded devices, including possible alternatives and variants of each component. In the embodiment shown a processor 11 is provided (e.g. in the form of a micro-controller), interfacing with both a volatile memory part 12 and a non-volatile memory part 13. The non-volatile memory part 13 e.g. stores instructions allowing the processor 11 to execute an event-based application as defined in the present invention embodiments. Furthermore, as shown, the intermittently-powered embedded system 10 comprises an energy harvesting/storage unit 14 which allows the device to operate without any fixed power supply or battery based power supply, and can take any form or implementation known in the art. Also, the processor 11 is interfacing with a sensor unit 15, which comprises one or more sensors, e.g. a temperature sensor, or an interface to one or more external sensors, and signal processing electronics such as analog-to-digital converters, etc. Finally, in the exemplary embodiment shown, the intermittently-powered embedded system 10 comprises an I/O unit 16, arranged to provide external communication, e.g. in the form of a wireless communication link such as Bluetooth.
The present invention embodiments allow execution of software on transiently- or intermittently-powered embedded devices 10, such as devices using energy harvesting. An example is an RF-powered computer, capable of sensing, computing and communicating using harvested RF energy only. As indicated with reference to Fig. 1, in general, such a device 10 has a volatile memory part 12 and a non-volatile memory part 13. Upon power failure (which in an RF energy harvesting device 10 may occur at a rate of as high as ten times per second), the volatile state of the device 10 would be lost, i.e. the general and/or special purpose registers implemented in the volatile memory part 12, but the non-volatile state, i.e. the contents of the non-volatile memory part 13, e.g. implemented as FRAM, persists.
As a result, two major issues arise during intermittent execution:
P1: The progress of computation might not be guaranteed due to frequent loss of the volatile state;
P2: The re-execution of the software application might lead to either semantically incorrect results due to the corrupted non-volatile state or to a system crash.
These problems may be partly overcome by solutions known in the art, such as checkpointing-based systems or task-based systems. However, the known task-based systems in particular suffer from the following additional problems:
P3: Interrupt service routines (ISRs) cannot make explicit decisions on the control-flow due to the lack of an explicit task management and scheduler;
P4: Moreover, ISRs might keep non-volatile memory inconsistent upon power failures and lead to data races when they interrupt the tasks.
Finally, current task-based systems assume that the program is an endless loop which always keeps the micro-controller active, keeping the interrupt-driven nature of systems out of the picture again. However, when the desired computation is finished, the micro-controller should transition to the low-power mode to save energy and wait for an external/internal event. Thus, the final problem can be stated as follows:
P5: The endless loop execution keeps the micro-controller active, disabling transition to a low-power operation mode.
In the present invention embodiments, these problems are solved by implementing a task-based runtime system for executing an event-based application on an intermittently-powered embedded system 10, the intermittently-powered embedded system 10 comprising a processor 11, a volatile memory part (e.g. SRAM) 12, and a non-volatile memory part (e.g. FRAM) 13, wherein the event-based application comprises (idempotent) tasks, task shared variables and event handlers,
and the task-based run-time system is arranged to:
allocate the task shared variables in the non-volatile memory part 13 and, during execution, use static versioning of the task shared variables with double buffering,
execute the (idempotent) tasks in a run-to-completion manner (i.e. tasks cannot pre-empt other tasks), and
execute event handlers as a next-to-be-executed task during execution of a task.
The term idempotent task is known to the person skilled in the art, as are the terms task shared variables and event handlers. Examples of actual implementations of this task-based runtime system, providing further details on, among others, these functions and terms, are given below.
The present invention task-based runtime system embodiments utilize what is further below indicated as Intermittently-Powered Kernel (InK). InK is a reactive task-based run-time system suitable for applications with battery-less, energy harvesting sensors. InK eschews the static task execution model, and instead enables energy-adaptive, event-driven, and time sensitive applications for battery-less sensing devices 10.
An example application with multiple threads is shown in Fig. 2A-2D. These figures show how InK is an intermittent computing system that allows developers to create programs composed of multiple distinct prioritized task threads comprising e.g. sensing, computing and communication, to schedule periodic sensing tasks, respond to events in the environment, and adapt to changes in energy harvesting availability, all while managing intermittent power failures, memory consistency, and timekeeping duties behind the scenes with low overhead. InK is generic, enabling reactive sensing applications despite intermittent failures, on a multitude of hardware devices 10. Specifically, Fig. 2A-2D show an exemplary embodiment of an application being executed on an intermittently-powered embedded system 10 that senses and sends data, based on available energy, thresholding, and a power failure resistant timekeeper. The simulated task execution trace is shown in Fig. 2D.
In one embodiment, an event for the event-based application is one or more of the following event types: a detection of high energy availability; a hardware interrupt (e.g. sensor value threshold detection); elapsed time (to allow e.g. scheduled tasks). On the left side of Fig. 2A-2C the examples of three event types are shown. Fig. 2A shows an energy(HIGH) event, which is triggered upon detecting that a high amount of energy is available to the device 10, and an associated task thread (TH1: Send) having a first task ‘Transmit State’ (1) and a final task ‘Sleep’ (7). Fig. 2B shows a signal(Accel) event, which is triggered upon detecting a motion of the device 10 being above a pre-set threshold, and an associated task thread (TH2: Activity) having a first task ‘Sense’ (2), a second task ‘Features’ (3), a third task ‘Classify’ (4), and a final task ‘Sleep’ (7). Fig. 2C shows a timer(5s) event, which is triggered by a timer signal event (timer lapsed), and an associated task thread (TH3: Temperature) having a first task ‘Sample Temp’ (5), a second task ‘EWMA (exponential weighted moving average calculation)’ (6), and a final task ‘Sleep’ (7). Fig. 2D provides a timing diagram of the detected voltage (e.g. of the energy harvesting system 14 of Fig. 1), providing a measure of available energy for the intermittently-powered embedded system 10. In the timing diagram, the sequence of tasks executed by the processor 11 is indicated.
In a further embodiment, an event for the event-based application is a detection of crossing a turn-off threshold (of the voltage as shown in Fig. 2D or any measured quantity related to the available energy for the device 10), initiating a forced shut-down or power-up of the intermittently-powered embedded system 10. The threshold value is indicated by the dashed horizontal line in Fig. 2D, and it is noted that the device 10 is not using any energy when the voltage is below the threshold. When the voltage is above the turn-off threshold, the processor is executing the tasks of the task thread ready to be executed.
In other words, certain types of event must be handled differently in the context of intermittent computing, versus a traditional battery powered device. In the present invention embodiments, this is e.g. accomplished using three different types of events.
An energy threshold crossing is one type of event. Energy harvesting batteryless devices only store small amounts of energy and expend it quickly. The amount of energy available for any period is not constant; it changes based on the time of day (for example in outdoor solar environments), the weather, and the location (if mobile). Static task models are not robust to this energy irregularity; if a high energy radio broadcast is set to execute in a low energy situation, that task will never complete. Moreover, if a low energy reading of an accelerometer executes in a high energy situation, excess energy is wasted. Current programming models do not associate tasks with their energy requirement, potentially exposing them to starvation.
A timer related event: Scheduling events in the future is difficult with intermittently powered devices because maintaining time through power failures is not trivial. In a further specific embodiment, when the processor 11 loses power, an external device powered by a small capacitor can support timekeeping until the processor 11 turns back on.
Hardware interrupt related events: Nearly all sensing devices generate interrupts of some kind; sensors like accelerometers, gyros, and magnetometers can gather data without any involvement from the processor 11, storing measurements in a buffer and then alerting the processor 11 via an interrupt pin when the buffer is full. Analog sensors may have thresholding circuitry that will wake the processor 11 when a point is reached. These hardware interrupts are not captured by current programming models, but are incredibly valuable to battery powered sensors for extending battery life, and will be valuable to battery-less sensors by increasing responsiveness of the processor 11 (by allowing the processor to sleep).
In the task-based run-time system the event types are prioritized in a further embodiment. This makes sure that the tasks associated with the highest priority event type are executed first, while energy is available in the intermittently-powered embedded system 10.
In an even further group of embodiments, an event type is associated with a specific type of task thread, each task thread comprising a set of idempotent tasks. It is noted that each of the tasks (1)-(7) shown in Fig. 2A-2C is an idempotent task (if programmed and compiled correctly, see below).
As also shown in each of the diagrams of Fig. 2A-2C and the timing diagram of Fig. 2D, each specific type of task thread ends with a sleep task. This ensures that whenever possible, the intermittently-powered embedded system 10 is returning to a sleep mode, allowing harvesting of energy for subsequent tasks, of which the exact timing is unpredictable. The above exemplary implementations then result in the sequence of (idempotent) tasks (1)-(7) as shown in the timing diagram of Fig. 2D, having periods of inactivity (blank), executing tasks (hatched areas) and sleeping periods (cross hatched areas).
A task thread as used herein is a lightweight and stack-less thread-like structure with a single entry point that encapsulates zero or more successive tasks. These tasks can do computation, sensing, or other actions, are idempotent, atomic, and have access to shared memory 12, 13. Each task thread has a unique priority and accomplishes a single objective, e.g. periodic sensing of an accelerometer. In order to preserve the progress and timeliness of computation despite power failures, in a present invention embodiment a kernel keeps track of each task thread by maintaining a task thread control block (TTCB) in the non-volatile memory part 13. The TTCB holds the state and the priority of the task thread, pointers to its entry task, to the next task in the control flow and to buffers in the non-volatile memory part 13 that hold task-shared variables.
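A thread control block along the lines just described might be declared as in the following sketch; the field names, the thread_state_t enumeration and the buffer pointer types are illustrative assumptions, not definitions taken from the patent:
#include <stdint.h>

typedef void (*task_fn)(void);

typedef enum { THREAD_SUSPENDED, THREAD_READY } thread_state_t;   /* assumed thread states */

/* Task thread control block (TTCB), kept in the non-volatile memory part 13 so
 * that it survives power failures. */
typedef struct {
    thread_state_t state;        /* current state of the task thread               */
    uint8_t        priority;     /* unique, static priority of the thread          */
    task_fn        entry_task;   /* first task to run when the thread is activated */
    task_fn        next_task;    /* next task in the thread's control flow         */
    void          *orig_buf;     /* original copies of the task-shared variables   */
    void          *priv_buf;     /* privatization (task-local) copies              */
} ttcb_t;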
After performing tests and comparative tests with other prior art task-based systems, it was found that execution run-time is improved in general and that a considerably lower memory overhead is used by the present invention embodiments.
Reverting to the problems P1-P5 as described above, the present invention embodiments can address these problems in the following ways:
In order to overcome P2, a compiler is applied that instruments the event-based application source file by (i) allocating the task-shared variables in non-volatile memory 13 so that they become persistent, (ii) injecting the necessary dynamic versioning and atomic commit code to make tasks re-executable (idempotent), and (iii) translating control-flow declarations into specific InK calls (task transformation).
In order to overcome P1, P3, P4 and P5, a power failure-immune non-pre-emptive scheduler is implemented that enables several (in)dependent threads/applications to run in an interleaved manner during intermittent execution. The InK scheduler manages the system control-flow and micro-controller power management by considering not only requests from the tasks but also the events triggered by hardware interrupts.
In a variant, the InK kernel implements a task thread pre-emptive and static priority-based scheduling policy. The InK scheduler always executes the next task in the control flow of the highest-priority task thread. Upon successful completion of this task, the pointer in the corresponding TTCB is updated so that it points to the next task in the control flow. In InK, tasks run to completion and can be pre-empted only by interrupts. Therefore, task thread pre-emption may only happen at task boundaries. When an ISR pre-empts the current task, it might activate other task threads of high priority that are waiting for the corresponding event. In this case, InK does not switch control to the higher-priority task thread immediately; it waits for the atomic completion of the current task.
As examples, event-driven applications have been devised which include functionality of periodic sensing and distributed computation. A serial communication service has been devised that runs despite power failures, thanks to the power failure-immune scheduling and interrupt management services provided by the InK kernel, as explained below.
The following pseudo code presents a sample InK application source file which is composed of the tasks, the task-shared variables manipulated by the tasks and the event handlers:
1 uint8_t values[SIZE];
2 uint8_t i;
3 // sample temperature
4 InK_task sense_temp {
5     for (int i = 0; i < SIZE; i++)
6         values[i] = read_temp();
7
8     signal compute;
9 }
10 // average data and check
11 InK_task compute {
12     if (average(values) > MAX) {
13         signal sense_humidity;
14     }
15     signal actuate;
16 }
17 // perform actuation
18 InK_task actuate {
19     led_toggle(LED1);
20 }
21 // timer event
22 void timer_handler() {
23     // set timer again
24     set_timer(SECOND);
25     signal sense_temp;
26 }
The task of the programmer is to decompose the application into tasks whose execution strictly should not exceed the energy storage capacity of the hardware platform, so as to preserve the progress of execution, and to specify the task-shared variables and the control flow. The InK runtime environment allocates the task-shared variables in non-volatile memory 13 and employs task-based control-flow, static versioning (double buffering) and atomic commit (two-phase commit) operations in order to guarantee the idempotency of the tasks, so that the execution restarts from the last (possibly partially) executed task without any side-effects. The applied InK compiler adds the InK_task keyword to the C language in order to define tasks.
The InK run-time system keeps track of the current task in order to restart it after a power failure. In order to ensure the consistency and idempotency of the task blocks, the task-shared variables are protected by the run-time system. All the task-shared variables are allocated in non-volatile memory 13 and statically versioned using double buffering. The state of the idempotent tasks and memory protection are internally maintained by the InK run-time system.
In a further embodiment, double-buffering is applied to preserve data consistency across power outages— namely an original buffer holding the original copies and a privatization buffer holding the task-local copies. The TTCB of each task thread holds pointers to these buffers. Before running any task, InK initializes the privatization buffer by copying the contents from the original buffer. Tasks can read/modify only the content in the privatization buffer. On a successful task completion, the buffer pointers are swapped so that the outputs of the current task are committed atomically. Inter-thread communication may be facilitated through persistent pipes. A pipe is a unidirectional buffer in the non-volatile memory part 13 with a timestamp. Any task inside the producer task thread can write to a dedicated pipe so that any task in the consumer task thread can read and perform computation by considering the timeliness of the data. Since tasks cannot pre-empt each other and also pipes are unidirectional, pipe access will not lead to data races even upon power failures.
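As a minimal sketch of this privatize-then-commit idea, assuming a single block of task-shared variables and the hypothetical __nv placement macro introduced above, the commit point can be reduced to a single word-sized index update:

#include <stdint.h>
#include <string.h>

#define __nv __attribute__((section(".nv_vars")))
#define SHARED_SIZE 32                 /* assumed size of the task-shared data block */

__nv uint8_t buffers[2][SHARED_SIZE];  /* two persistent versions                    */
__nv uint8_t orig_idx;                 /* 0 or 1: which buffer holds committed data  */

/* Before running a task: copy the committed copies into the privatization
 * buffer; the task only reads and writes this working copy. */
static void init_privatization(void) {
    memcpy(buffers[1u - orig_idx], buffers[orig_idx], SHARED_SIZE);
}

/* On successful task completion: flip the index in a single word-sized write,
 * which acts as the atomic commit point of the task's outputs. */
static void commit_task_outputs(void) {
    orig_idx = 1u - orig_idx;
}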
The control flow of the application is defined by a 'signal' keyword followed by an idempotent task identifier, as well as by the events generated by the interrupts. The task signalling operation returns immediately without blocking the task execution; the InK scheduler executes the signalled task later. In InK, the idempotent tasks run to completion and cannot pre-empt each other, but they can be pre-empted by hardware interrupts. Hardware interrupts may register events to the system that will be handled by the event handlers. Event handlers are idempotent, just like tasks, and they cannot pre-empt other event handlers or tasks, or vice versa. The keywords introduced by the InK run-time system are 'InK_task' (declaration of an idempotent task) and 'signal' (signal activating a task). This complies with the features of the main embodiment of the present invention, i.e. the steps being executed by the processor 11 during run-time:
allocate the task shared variables in the non-volatile memory part and, during execution of an idempotent task, use static versioning of the task shared variables with double buffering; execute the (idempotent) tasks in a run-to-completion manner (note that tasks cannot pre-empt other tasks, but can be interrupted by a hardware interrupt); execute event handlers as a next-to-be-executed task during execution of a task.
In a further embodiment, in order to ensure reactivity and adaptability, a state machine is implemented which maintains a scheduler-state variable in the non-volatile memory part 13 in order to ensure forward progress despite power failures. At each loop iteration, the scheduler selects the task thread of highest priority and executes the next task in the control flow of the selected thread. During task execution, the scheduler (i) initializes the task privatization buffer via init; (ii) for entry tasks, locks the event data that triggered thread execution via lock_event (to eliminate data races between ISRs and tasks); (iii) executes the task via run; (iv) for entry tasks, releases the event via release; (v) commits the task's modifications by swapping buffer pointers; and (vi) suspends the thread if there are no dedicated events or remaining tasks. If there is no thread in ready state, the scheduler puts the processor 11 into low-power mode, saving energy and waiting for an interrupt for activation. The state machine enables progress of computation since it continues from the state in which it was interrupted. Tasks inside task threads and ISRs can activate and deactivate other task threads and change the control flow dynamically. Since the InK scheduler alternates between the aforementioned states, it can switch execution to the high-priority thread: first, the kernel awaits the completion of the interrupted current task inside the lower-priority thread; then it starts executing the entry task of the high-priority thread.

Each task thread in InK has a dedicated non-volatile event queue that holds the events generated by ISRs. When any event is generated, the corresponding task thread is activated so that the thread execution will start from its entry task. In the InK execution model, the event data is only accessible by the entry task of the task thread: the entry task locks the event data to eliminate data races between ISRs and tasks. The entry task reads the event data and modifies the necessary task-shared variables, and then the event lock is released so that the event data will be removed from the event queue.
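A rough C sketch of one such scheduler iteration is given below; the helper functions are hypothetical stand-ins for steps (i)-(vi) above, not the actual InK kernel API.

#include <stdbool.h>

typedef struct ttcb ttcb_t;           /* task thread control block, as sketched earlier */

/* Hypothetical kernel helpers (prototypes only, assumed for this sketch). */
void init_privatization(ttcb_t *t);   /* (i)   copy original -> working buffer   */
void lock_event(ttcb_t *t);           /* (ii)  pin the triggering event          */
void run_next_task(ttcb_t *t);        /* (iii) run the next task to completion   */
void release_event(ttcb_t *t);        /* (iv)  drop the event from the queue     */
void commit_task_outputs(ttcb_t *t);  /* (v)   swap the buffer pointers          */
void suspend_thread(ttcb_t *t);       /* (vi)  no work left: suspend the thread  */
bool is_entry_task(ttcb_t *t);
bool has_pending_events(ttcb_t *t);
bool has_remaining_tasks(ttcb_t *t);

/* One scheduler iteration for the highest-priority ready task thread. */
void schedule_one(ttcb_t *t) {
    bool entry = is_entry_task(t);    /* remember before the task pointer advances */
    init_privatization(t);            /* (i)   */
    if (entry) lock_event(t);         /* (ii)  */
    run_next_task(t);                 /* (iii) */
    if (entry) release_event(t);      /* (iv)  */
    commit_task_outputs(t);           /* (v)   */
    if (!has_pending_events(t) && !has_remaining_tasks(t))
        suspend_thread(t);            /* (vi)  */
}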
The pre-processing of an interrupt is performed by the corresponding ISR. Then, the rest of the computation is done by a task thread. When an interrupt is generated, the corresponding ISR delivers the received or generated data to the upper layers of the system and notifies the task thread. Event queues are ring buffers dedicated to each task thread. They form an intermediate layer that prevents race conditions and preserves the event data consistency by preventing ISRs from modifying task-shared data directly.
When the event queue is full, InK removes the event that has the oldest timestamp from the event queue to increase the probability of having fresh data. Once an interrupt is generated, the task thread is notified by creating an event holding a pointer to the ISR data and its size, and a timestamp indicating the time at which the interrupt fired. The corresponding task thread is notified by passing a pointer to the event structure, so that the event will be placed in the event queue of the given task thread atomically.
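The event layout and the oldest-first eviction can be sketched in C as follows; the structure fields, the queue length and the helper name are assumptions made for illustration only.

#include <stdint.h>
#include <stddef.h>

#define EVT_QUEUE_LEN 4u              /* assumed per-thread ring buffer capacity */

typedef struct {
    void    *isr_data;                /* pointer to the data produced by the ISR */
    size_t   size;                    /* size of that data                       */
    uint32_t timestamp;               /* time at which the interrupt fired       */
} event_t;

typedef struct {
    event_t slots[EVT_QUEUE_LEN];
    uint8_t head;                     /* index of the oldest entry               */
    uint8_t count;                    /* number of stored events                 */
} event_queue_t;

/* Push an event; when the queue is full the oldest entry is dropped so that
 * the task thread always sees the freshest data. */
void event_queue_push(event_queue_t *q, const event_t *e) {
    if (q->count == EVT_QUEUE_LEN) {                       /* full: evict oldest */
        q->head = (uint8_t)((q->head + 1u) % EVT_QUEUE_LEN);
        q->count--;
    }
    q->slots[(q->head + q->count) % EVT_QUEUE_LEN] = *e;
    q->count++;
}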
A sample execution of the above presented application is now explained. Assume that a timer interrupt pre-empts the compute task being executed and registers a timer event to be handled by the event handler timer_handler (line 22). At that point, control will be transferred back to the interrupted compute task, which signals the actuate task and might signal sense_humidity (line 13). Thereafter, control will be transferred to the next task or event handler in the scheduler queue. Eventually, timer_handler will be executed, which sets the timer again and signals the sense_temp task (line 25). In that context the following issue is emphasized: the thread {timer → sense temperature → compute average → actuate} ends with the actuate task and will be activated again by a timer interrupt. However, the sense_humidity task (line 13) will start another thread (not shown in the above program listing) that might be interrupted by the timer, which will signal the sense_temp task (line 25) again. At that point, the application will have two threads of execution: literally two task loops running in an interleaved manner. The scheduler will select tasks from each loop one by one and execute them according to the signalling order. Another issue is that both sense_temp and compute operate on the persistent values array; the run-time system ensures the consistency of each array element against power failures. Moreover, since these tasks do not pre-empt each other, the need for complex concurrency handling is eliminated: run-to-completion semantics allow task-based concurrency where all tasks use the same stack without affecting the local variables of the other tasks, leading to less memory overhead.
In a further embodiment, a timer sub-system may be provided using an external persistent timekeeper that keeps track of time across power failures: (i) when the processor 11 is running, its internal timers are used to measure elapsed time; (ii) upon a power failure, the external timekeeper keeps running and provides the elapsed time upon recovery. The timer system implements a timer wheel algorithm to provide two types of timers for the task threads, i.e. expiration timers and one-shot/periodic timers.
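One way to combine the two time sources could look like the sketch below; the driver calls and the checkpointing step (folding the measured on-time into the persistent clock before it can be lost) are assumptions of this sketch rather than a description of the actual timekeeper driver.

#include <stdint.h>

#define __nv __attribute__((section(".nv_vars")))

/* Hypothetical driver calls, assumed for this sketch. */
uint32_t timekeeper_off_time_ms(void);   /* off-time from the capacitor-backed
                                            external timekeeper                  */
uint32_t internal_timer_ms(void);        /* on-time measured by the MCU timer    */

__nv uint32_t persistent_clock_ms;       /* monotonic time across power failures */

/* Periodically (e.g. at a task boundary): checkpoint the measured on-time into
 * the persistent clock, so little is lost at a power failure; the internal
 * timer is assumed to restart from zero here. */
void clock_checkpoint(void) {
    persistent_clock_ms += internal_timer_ms();
}

/* At every reboot: add the off-time reported by the external timekeeper. */
void clock_recover(void) {
    persistent_clock_ms += timekeeper_off_time_ms();
}

/* Current time = persistent base + on-time since the last checkpoint. */
uint32_t clock_now_ms(void) {
    return persistent_clock_ms + internal_timer_ms();
}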
Task threads may set expiration timers in order to enable timely execution of task threads and to stop unnecessary and outdated computation if necessary. As an example, data read from a sensor should be processed within a time constraint, and if the computation exceeds the required deadline the outputs of the computation are no longer useful. When an expiration timer fires, the corresponding task thread is evicted so that it does not consume system resources any more.
One-shot and periodic timers may be used in order to schedule events in the future and to generate periodic events, e.g. activating a task thread at a given frequency. Since most sensing applications are periodic, these timers are the foundation of task threads that perform periodic sensing; they build on the persistent timekeeper to keep time across power failures.
Below a translation of a sample InK application source file into a C source file is given:
1 typedef struct {
2     uint8_t value;
3 } _per_vars_t;
4
5 __nv _per_vars_t _per_vars[2];
6
7 void sample() {
8     uint8_t value;
9
10     if (task_state == TASK_DONE)
11         goto flush_buffer;
12     else if (task_state == TASK_FINISHED)
13         return;
14
15     value = _per_vars[0].value;
16     {
17         value++;
18         InK_scheduler_post_task(compute);
19     }
20     _per_vars[1].value = value;
21     task_state = TASK_DONE;
22 flush_buffer:
23     _per_vars[0].value = _per_vars[1].value;
24     task_state = TASK_FINISHED;
25 }
This process is performed by the InK compiler, which recognizes the task-shared variables and InK_task declarations and provides the necessary task management code in order to separate the inputs of the task from its outputs, so that Write-After-Read dependencies do not create any side-effects and the task can be re-executed safely after each power failure while preserving its semantic correctness. It is noted that the InK compiler hides all of these details from the programmer.
In the InK environment, tasks have local and shared data. The local data of tasks are allocated in volatile memory 12. On the other hand, the data shared among the tasks are defined in the global scope and the InK compiler allocates them in non-volatile memory 13 by creating two persistent versions, i.e. static versioning via double buffering. Access and modification of shared data are managed with the assistance of the InK compiler so that the shared variables are not corrupted due to data races among the tasks and power failures. To this end, the InK compiler creates a local copy of each shared variable that a task accesses, so that the task modifies only its local copies. At the completion of the task, the local copies of the shared data are flushed to their original locations in non-volatile memory 13 atomically, using a commit operation of two phases, namely a two-phase commit. This is an improvement over prior art implementations, e.g. those employing static multi-versioning via the concept of channels, which has several disadvantages: (i) channels create a memory burden due to multi-versioning, especially for large amounts of task-shared data; (ii) they cause run-time overhead when selecting the channel that contains the most recently modified task-shared data; and (iii) the programmer needs to marshal data through channels explicitly, which is too demanding when developing applications.
So in a further embodiment, the task-based run-time system is arranged so that during execution of a task, task-shared variables are stored in the volatile memory part 12, and upon conclusion of a task, task-shared variables are copied to the non-volatile memory part 13. Furthermore, copying task-shared variables to the non-volatile memory part 13 may comprise a two-phase commit operation, a first phase being storing the task-shared variables in a first part of the double buffering static versioning and transitioning the task state to the task done state, and a second phase being committing the task-shared variables in the first part of the double buffering static versioning to the associated second part of the double buffering static versioning and transitioning the task state to the task finished state.
As to the task management and idempotency of tasks, the following remarks are made and explained by reference to the above given C source file. Consider first the type declaration _per_vars_t, which is a C structure holding the task-shared data value declared by the compiler (Lines 1-3). Then, an array _per_vars holding two structures of this type is created in non-volatile memory using the keyword __nv (Line 5). For each InK_task declaration, a corresponding function with the same name is created (Line 7) and each signal keyword is transformed into an InK_scheduler_post_task operating system call (Line 18), which instructs the scheduler to insert the given task into the task queue (described below).
Versioning the task-shared variables together with the task state management allows a task re-execution without unplanned consequences. The tasks in InK can be in three states: TASK_READY, TASK_DONE and TASK_FINISHED. When a task is being executed, it is initially in the TASK_READY state. First the local copies of the task-shared variables (only the ones which the task operates on) are created. The detection of the task-shared variables that the task operates on is done automatically by the InK compiler. In the above example, Line 8 creates a local version of the shared value variable. After state checking (Lines 10-13), the value of the shared data value is read from its original location _per_vars[0].value and copied to the local variable value (Line 15). At that point, the task starts manipulating the local copies of the persistent task-shared variables. After the task finishes its operation, the local copies of the task-shared data are first flushed to their temporary persistent locations in _per_vars[1] (Line 20) and the task transitions to the TASK_DONE state (Line 21), which is phase 1 of the two-phase commit operation. Then, the modified values in _per_vars[1] are committed to their original locations in _per_vars[0], which then holds the modified values of the task-shared data that can be consumed by the next task (Lines 22-23), which is phase 2 of the two-phase commit operation. Finally, the task transitions to the TASK_FINISHED state so that the scheduler can pick the next task and execute it. If a power failure occurs before Line 21, all the operations up to that point will be re-executed and they will produce the same results. On the other hand, if the power failure occurs between Lines 21-24, Lines 15-21 will not be re-executed thanks to the state check in Lines 10-13, since the local copies of the task-shared data are already committed to _per_vars[1]. In generic wording, the task-based run-time system is arranged to execute task state management by storing the task state in the non-volatile memory part 13 as one of a task ready state, a task done state, or a task finished state.
Versioning and two-phase committing a large array may be wasteful at run-time when only an array slice is updated and this might create severe performance issues. As a programming practice, it is then advisable to avoid using large arrays in InK source files.
A key aspect of the present invention task-based run-time system is the power failure-immune scheduler. An InK program (application) comprises tasks, interrupt service routines (ISRs) and event handlers. The InK scheduler manages the tasks and the event handlers activated by the ISRs. Providing these services subject to power failures while addressing the aforementioned problems P1-P5 is non-trivial, as explained below.
In the task-based run-time system of the present invention embodiments, the processor 11 is further arranged to use a (persistent) scheduler to determine a control-flow of the event-based application by considering requests from an idempotent task and events from hardware interrupts.
The InK scheduler implements a non-pre-emptive FIFO task scheduling algorithm (chosen for ease of implementation) in a power failure-immune manner. The non-pre-emptive scheduling policy simplifies concurrency management among tasks: tasks cannot pre-empt each other and break the consistency of the shared variables (addressing P1, P2 and P3). The main loop of the InK scheduler is presented in the following pseudo-code (Algorithm 1):
1 scheduler_state ∈ {READY, FINISHED, EVENT, LOOP};
2 task_state ∈ {TASK_READY, TASK_DONE, TASK_FINISHED};
3 cur_task ← NULL;        /* executed just once, at the initial boot */
4 while true do
5     run_event();        /* execute an event */
6     run_task();         /* execute a task */
7     if no task & no event then
8         suspend CPU;    /* low power mode */
The scheduler implements an infinite loop that is composed of four states: READY, FINISHED, EVENT and LOOP. The state of the scheduler is stored in the persistent variable scheduler_state (Line 1); its initial value is READY. Apart from the scheduler states, the persistent variable task_state (Line 2) holds the state of the current task being executed. These state tracking variables are maintained in order to recover from power failures, solving P1 and P2. In a specific embodiment, the scheduler implements a non-pre-emptive FIFO task scheduling algorithm.
At each loop iteration, the InK scheduler first runs an event handler (if any) which has been activated by an ISR previously (Line 5), and then a task (if any) (Line 6). This scheduling policy is used since tasks are then not blocked for a long time while the event handlers run. However, it might be desirable to run all event handlers first and then the tasks thereafter. It is emphasized that event handlers, just like tasks, must be idempotent blocks that can be re-executed safely upon a power failure. If there is no event or task to be executed, the scheduler puts the microcontroller (processor 11) into low-power mode and waits for an interrupt to re-activate the scheduler loop (Line 8), overcoming P5.
The InK scheduler manages tasks by using a queue as the fundamental data structure to hold the addresses of the tasks to be executed. In principle, any modification to the system-wide shared data structures can be done by disabling interrupts so that race conditions are eliminated. However, power failures cannot be "disabled" and might occur at any time. Therefore, attention should be paid to the queue operations, since power losses during pop and push might corrupt the task queue and in turn lead to critical system faults. In order to ensure power-failure immunity, the InK scheduler implements double buffering by maintaining two persistent task queues: (i) a main task queue main_queue and (ii) a temporary task queue tmp_queue, both of fixed and the same size. Algorithm 2 as given below presents the operations on these queues handled by run_task() in Line 6 of Algorithm 1:
1  switch scheduler_state do
2      case READY do
3          if task_state = TASK_READY then
4              pop cur_task;            /* remove task from queue */
5          if cur_task & task_state ≠ TASK_FINISHED then
6              call cur_task;           /* execute current task */
7          scheduler_state ← FINISHED;
8      case FINISHED do
9          if cur_task then
10             commit();                /* commit to the main queue */
11         scheduler_state ← LOOP;
12     case LOOP do
13         scheduler_state ← READY;
14         task_state ← TASK_READY;
It is emphasized that at each system reboot, the main_queue is copied to tmp_queue if required. This confines the task queue operations to the temporary task queue, keeping the main task queue unmodified, and therefore consistent, during the task's execution. In summary, in an embodiment of the present invention the non-pre-emptive FIFO task scheduling algorithm is implemented using a main task queue (holding the addresses of tasks to be executed) and a temporary task queue, both of fixed and the same size and stored in the non-volatile memory part (13).
Initially, the scheduler is in the READY state and there is no actively running task, i.e. task_state is TASK_READY. Therefore, a task is removed from the temporary task queue (which is a copy of the main task queue) and the persistent variable cur_task is assigned the address of the task body (Line 4 of Algorithm 2). After popping a task from the temporary queue, the current task is executed (Line 6). It is worth mentioning that the task_state variable is modified by the task itself until it finishes its execution (refer to the compiler-generated code example above). After the task execution is finished, i.e. task_state=TASK_FINISHED, the scheduler transitions to the FINISHED state (Line 7). If the algorithm is interrupted by a power failure before the task execution is finished, all of the previous steps can be re-executed safely since the main task queue is unmodified and the tasks are idempotent.
Tasks might signal other tasks, which is transformed into the InK_scheduler_post_task call by the InK compiler. This call should push the signalled task on the main task queue so that it will be executed later. However, for the sake of task queue consistency, the signalled task is first pushed on the tmp_queue. If the execution of a task has not been finished yet and the scheduler has not transitioned to the FINISHED state, main_queue remains unchanged and a power loss will lead to the rollback of the tmp_queue, thanks to the copy-from-main_queue operation at system reboot.
After the task execution is finished and the scheduler has transitioned to the FINISHED state, the modifications of the temporary task queue, i.e. the first commit stage, should be reflected to the main task queue, so that the tasks that were signalled will be handled by the scheduler later. The commit() call (Line 10) copies tmp_queue to main_queue and thereby performs the second commit stage. Thanks to the two-phase commit, power failures at any stage will not corrupt the task queues. After the changes in the temporary task queue are committed to the main task queue, the scheduler is ready to execute the next task in the temporary task queue. Then, the scheduler transitions to the READY state and task_state becomes TASK_READY (Lines 13 and 14).
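A minimal C sketch of the two persistent queues and the commit step is given below; the queue length, the __nv macro and the helper names are assumptions made for illustration only.

#include <stdint.h>
#include <string.h>

#define __nv __attribute__((section(".nv_vars")))
#define TASK_QUEUE_LEN 8u

typedef void (*task_fn_t)(void);

typedef struct {
    task_fn_t tasks[TASK_QUEUE_LEN];
    uint8_t   head;
    uint8_t   count;
} task_queue_t;

__nv task_queue_t main_queue;  /* consistent copy, only changed by commit()  */
__nv task_queue_t tmp_queue;   /* working copy: push and pop happen here     */

/* Signalled tasks are pushed on the temporary queue only (overflow handling
 * omitted in this sketch). */
void push_task(task_fn_t t) {
    tmp_queue.tasks[(tmp_queue.head + tmp_queue.count) % TASK_QUEUE_LEN] = t;
    tmp_queue.count++;
}

/* Second commit stage: reflect the temporary queue into the main queue.
 * Re-executing this after a power failure is harmless, since tmp_queue is not
 * modified again until the scheduler leaves the FINISHED state. */
void commit(void) {
    memcpy(&main_queue, &tmp_queue, sizeof(task_queue_t));
}

/* At reboot, when it is safe (see the recovery rule further below): roll the
 * temporary queue back to the last committed state. */
void rollback(void) {
    memcpy(&tmp_queue, &main_queue, sizeof(task_queue_t));
}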
In the InK environment, tasks cannot pre-empt each other due to the non-pre-emptive scheduling policy, but they can be pre-empted by hardware interrupts at any time. The pre-processing of an interrupt is performed by the corresponding ISR. The rest of the computation can be delivered to an event handler. Event handlers form an intermediate layer that prevents race conditions and preserves the consistency of the persistent variables by preventing ISRs from modifying task-shared data directly, overcoming P3 and P4. An event handler must be implemented as an idempotent function that accepts event data as an argument. In InK's current design, each ISR has a dedicated event handler, which is registered at run-time. Moreover, event handlers cannot pre-empt tasks and also cannot be pre-empted by other event handlers or tasks. When an interrupt is generated (no nested interrupts are allowed), the corresponding ISR might need to deliver the received or generated data to the upper layers of the system. The notification is done by creating an event holding a pointer to the ISR data and the event handler to be called. The event is registered via the register_event system call, which indicates whether the event is volatile or non-volatile.
In the InK environment a distinction is made between volatile, i.e. timely, events, which are cleared upon power failures (e.g. incomplete data reception should be discarded), and persistent events, which should survive power failures. The InK scheduler maintains two extra queues for event registration: a persistent event queue per_queue (maintained in non-volatile memory 13), so that the registered events are not lost upon a power failure, and a volatile event queue vol_queue (maintained in volatile memory 12), so that a power failure leads to the loss of all registered events. This distinction is present since, after a power failure, it can still be useful for several applications to handle previously registered events in the per_queue. As an example, if an ISR successfully accumulated the bytes of a packet, registering the packet reception event to the per_queue will allow the corresponding event handler, and in turn task, to further process the packet even after a power loss.
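As a minimal sketch, and assuming hypothetical queue helpers together with an event structure similar to the one sketched earlier, register_event could simply route the event to the persistent or the volatile queue:

#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

typedef struct {
    void    *isr_data;                        /* data produced by the ISR     */
    size_t   size;                            /* size of that data            */
    uint32_t timestamp;                       /* time the interrupt fired     */
    void   (*handler)(void *data, size_t n);  /* event handler to be called   */
} event_t;

/* Hypothetical queue helpers: per_queue lives in non-volatile memory,
 * vol_queue in volatile memory. */
void per_queue_push(const event_t *e);  /* survives power failures            */
void vol_queue_push(const event_t *e);  /* lost on power failure, by design   */

/* Called from an ISR after pre-processing the interrupt. */
void register_event(const event_t *e, bool persistent) {
    if (persistent)
        per_queue_push(e);  /* e.g. a fully received packet                   */
    else
        vol_queue_push(e);  /* e.g. a timely sample that expires on failure   */
}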
The pseudo-code of event handling, run_event() (Line 5 of Algorithm 1), is given in the following (Algorithm 3):
1 switch scheduler_state do
2     case READY do
3         if task_state = TASK_READY then
4             handle_event();            /* process an event */
5             scheduler_state ← EVENT;
6     case EVENT do
7         commit_event();                /* commit to the event queue */
8         commit();                      /* commit signalled tasks */
9         scheduler_state ← READY;
If the scheduler state is READY and no active task is running, the scheduler runs handle_event() (Line 4). In the presently presented exemplary embodiment, handling events in the per_queue has been given higher priority. Therefore, handle_event() first pops an event from the per_queue. If this queue is empty, an event from the vol_queue is popped. It should be noted that these pop operations do not modify the contents of these queues unless they are committed by the commit_event() function. The popped event is executed by calling the event handler with the ISR data as an argument. Then the scheduler transitions to the EVENT state so that the event handling will be committed. A power failure during these steps will not be fatal since event handlers are idempotent and the event queues are not modified.
After the event execution, the pop operation on the persistent or volatile event queue is committed by commit_event() (Line 7). Since event handlers can also signal tasks using InK_scheduler_post_task, which changes the tmp_queue, these changes should also be committed to the main_queue, which is done by calling commit() (Line 8). Then, the scheduler transitions to the READY state. Re-execution of commit_event() and commit() after a power loss keeps the system consistent since these functions are idempotent.
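A sketch of this peek-then-commit pattern, with hypothetical queue helpers and a persistent bookkeeping variable, is given below; the actual InK functions may differ.

#include <stddef.h>

#define __nv __attribute__((section(".nv_vars")))

typedef struct event event_t;                 /* event structure as sketched earlier */

/* Hypothetical helpers: peek returns the oldest entry without removing it;
 * removal only happens in the corresponding drop call. */
const event_t *per_queue_peek(void);          /* NULL if the persistent queue is empty */
const event_t *vol_queue_peek(void);          /* NULL if the volatile queue is empty   */
void per_queue_drop_oldest(void);
void vol_queue_drop_oldest(void);
void call_handler(const event_t *e);          /* invoke the registered event handler   */

typedef enum { POPPED_NONE, POPPED_PER, POPPED_VOL } popped_t;
__nv popped_t popped;  /* persistent, so commit_event() stays idempotent across failures */

/* Run one event handler; persistent events are given priority. The queues
 * themselves are not modified here. */
void handle_event(void) {
    const event_t *e;
    if ((e = per_queue_peek()) != NULL)      { popped = POPPED_PER; call_handler(e); }
    else if ((e = vol_queue_peek()) != NULL) { popped = POPPED_VOL; call_handler(e); }
    else                                       popped = POPPED_NONE;
}

/* Commit the pop only after the handler ran to completion. */
void commit_event(void) {
    if (popped == POPPED_PER)      per_queue_drop_oldest();
    else if (popped == POPPED_VOL) vol_queue_drop_oldest();
    popped = POPPED_NONE;
}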
At each reboot when using the present invention embodiments of the task-based run-time system, tmp_queue is rolled back in order to re-execute tasks, since the task queue operations are performed first on this queue. However, if the task state is TASK_DONE or TASK_FINISHED, tmp_queue already holds the value that should be committed to the main_queue, i.e. a rollback is to be avoided. Hence, upon recovery from each power interrupt, the main_queue is copied to tmp_queue if and only if the task state is TASK_READY and the scheduler is in the READY state.
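Expressed as a small C sketch, and reusing the hypothetical persistent state variables and the rollback() helper from the earlier sketches, the boot-time recovery rule could read:

typedef enum { TASK_READY, TASK_DONE, TASK_FINISHED } task_state_t;
typedef enum { READY, FINISHED, EVENT, LOOP } scheduler_state_t;

extern task_state_t      task_state;       /* persistent, in non-volatile memory */
extern scheduler_state_t scheduler_state;  /* persistent, in non-volatile memory */
extern void rollback(void);                /* copies main_queue into tmp_queue   */

/* Called once at every (re)boot, before the scheduler loop starts. */
void recover_after_power_failure(void) {
    /* Roll tmp_queue back only if the interrupted task had not yet started its
     * commit phase; otherwise tmp_queue already holds the data that still has
     * to be committed to main_queue and must not be overwritten. */
    if (task_state == TASK_READY && scheduler_state == READY)
        rollback();
}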
The present invention has been described above with reference to a number of exemplary embodiments as shown in the drawings. Modifications and alternative implementations of some parts or elements are possible, and are included in the scope of protection as defined in the appended claims.

Claims

1. A task-based run-time system for executing an event-based application on an intermittently-powered embedded system,
the intermittently-powered embedded system comprising a processor (11), a volatile (main) memory part (12), and a non-volatile (secondary) memory part (13),
wherein the event-based application comprises tasks, task shared variables and event handlers, and the processor (11) is arranged to:
allocate the task shared variables in the non-volatile memory part (13) and, during execution of a task, use static versioning of the task shared variables with double buffering,
execute the tasks in a run-to-completion manner,
execute event handlers as a next-to-be-executed task during execution of a task.
2. The task-based run-time system according to claim 1, wherein an event for the event-based application is one or more of the following event types:
a detection of high energy availability; a hardware interrupt; elapsed time.
3. The task-based run-time system according to claim 1 or 2, wherein an event for the event-based application is a detection of crossing a turn-off threshold, initiating a forced shut-down or power-up of the intermittently-powered embedded system.
4. The task-based run-time system according to claim 2 or 3, wherein the event types are prioritized.
5. The task-based run-time system according to claim 2, 3 or 4, wherein an event type is associated with a specific type of task thread, each task thread comprising a set of idempotent tasks.
6. The task-based run-time system according to claim 5, wherein each specific type of task thread ends with a sleep task.
7. The task-based run-time system according to any one of claims 1-6, wherein the processor is further arranged to use a scheduler to determine a control-flow of the event-based application by considering requests from a task and events from hardware interrupts.
8. The task-based run-time system according to claim 7, wherein the scheduler implements a non-pre-emptive FIFO task scheduling algorithm.
9. The task-based run-time system according to claim 8, wherein the non-pre-emptive FIFO task scheduling algorithm is implemented using a main task queue and a temporary task queue, both of fixed and same size and stored in the non-volatile memory part (13).
10. The task-based run-time system according to any one of claims 1-9, wherein the task-based run-time system is arranged to execute task state management by storing the task state in the non-volatile memory part (13) as one of task ready state, task done state, or task finished state.
11. The task-based run-time system according to any one of claims 1-10, wherein during execution of a task, task shared variables are stored in the volatile memory part (12), and upon conclusion of a task, task shared variables are copied to the non-volatile memory part (13).
12. The task-based run-time system according to claim 11, wherein copying task-shared variables to the non-volatile memory part (13) comprises a two-phase commit operation, a first phase being storing task shared variables in a first part of double buffering static versioning and transitioning the task state to task done state, and a second phase being committing the task-shared variables in the first part of double buffering static versioning to the associated second part of double buffering static versioning and transitioning the task state to task finished state.
PCT/NL2019/050388 2018-06-25 2019-06-25 Power interrupt immune software execution WO2020005058A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
NL2021174 2018-06-25
NL2021174A NL2021174B1 (en) 2018-06-25 2018-06-25 Power interrupt immune software execution

Publications (1)

Publication Number Publication Date
WO2020005058A1 true WO2020005058A1 (en) 2020-01-02

Family

ID=62873563

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/NL2019/050388 WO2020005058A1 (en) 2018-06-25 2019-06-25 Power interrupt immune software execution

Country Status (2)

Country Link
NL (1) NL2021174B1 (en)
WO (1) WO2020005058A1 (en)


Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
ALEXEI COLINBRANDON LUCIA: "OOPSLA", 2016, article "Chain: Tasks and Channels for Reliable Intermittent Programs"
ANONYMOUS: "NVMW 18 | Non-Volatile Memories Workshop 2018 Program", 11 March 2018 (2018-03-11), XP055547857, Retrieved from the Internet <URL:http://nvmw.ucsd.edu/2018/program/> [retrieved on 20190128] *
BRANDON LUCIA ET AL: "A simpler, safer programming and execution model for intermittent systems", PROGRAMMING LANGUAGE DESIGN AND IMPLEMENTATION, ACM, 2 PENN PLAZA, SUITE 701 NEW YORK NY 10121-0701 USA, 3 June 2015 (2015-06-03), pages 575 - 585, XP058069658, ISBN: 978-1-4503-3468-6, DOI: 10.1145/2737924.2737978 *
GAUTIER BERTHOU ET AL: "Peripheral State Persistence and Interrupt Management For Transiently Powered Systems", NVMW 2018 - 9TH ANNUAL NON-VOLATILE MEMORIES WORKSHOP, MAR 2018, SAN DIEGO, UNITED STATES., 12 March 2018 (2018-03-12), University of California, San Diego, United States., pages 1 - 2, XP055547855 *
KASIM SINAN YILDIRIM ET AL: "InK", EMBEDDED NETWORKED SENSOR SYSTEMS, ACM, 2 PENN PLAZA, SUITE 701NEW YORKNY10121-0701USA, 4 November 2018 (2018-11-04), pages 41 - 53, XP058418732, ISBN: 978-1-4503-5952-8, DOI: 10.1145/3274783.3274837 *

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11597161B2 (en) 2017-07-10 2023-03-07 Dai-Ichi Dentsu Ltd. Fastening method and fastening apparatus

Also Published As

Publication number Publication date
NL2021174B1 (en) 2020-01-06

Similar Documents

Publication Publication Date Title
CN101833475B (en) Method and device for execution of instruction block
US6799236B1 (en) Methods and apparatus for executing code while avoiding interference
US7584332B2 (en) Computer systems with lightweight multi-threaded architectures
US8689215B2 (en) Structured exception handling for application-managed thread units
US8046758B2 (en) Adaptive spin-then-block mutual exclusion in multi-threaded processing
US6687903B1 (en) Inhibiting starvation in a multitasking operating system
US20110296148A1 (en) Transactional Memory System Supporting Unbroken Suspended Execution
US8516483B2 (en) Transparent support for operating system services for a sequestered sequencer
WO2019032980A1 (en) Fault detecting and fault tolerant multi-threaded processors
US20100083261A1 (en) Intelligent context migration for user mode scheduling
Keckler et al. Concurrent event handling through multithreading
Wang et al. Transaction-friendly condition variables
NL2021174B1 (en) Power interrupt immune software execution
Dudnik et al. Condition variables and transactional memory: Problem or opportunity
CN102117224A (en) Multi-core processor-oriented operating system noise control method
Podzimek Read-copy-update for opensolaris
Singh Design and Evaluation of an Embedded Real-time Micro-kernel
Ramadan et al. Metatm/txlinux: Transactional memory for an operating system
Lemerre et al. A model of parallel deterministic real-time computation
Silvestri Micro-Threading: Effective Management of Tasks in Parallel Applications
Dubrulle et al. A dedicated micro-kernel to combine real-time and stream applications on embedded manycores
Drescher et al. An experiment in wait-free synchronisation of priority-controlled simultaneous processes: Guarded sections
Dounaev Design and Implementation of Real-Time Operating System
Strøm Real-Time Synchronization on Multi-Core Processors
Nakamoto et al. Proposing software transactional memory for embedded systems

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19755977

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19755977

Country of ref document: EP

Kind code of ref document: A1