EP2917834A1 - Method for scheduling with deadline constraints, in particular in linux, carried out in user space - Google Patents

Method for scheduling with deadline constraints, in particular in linux, carried out in user space

Info

Publication number
EP2917834A1
EP2917834A1 (application EP13792761.2A)
Authority
EP
European Patent Office
Prior art keywords
task
cpu
state
scheduler
execution
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Ceased
Application number
EP13792761.2A
Other languages
German (de)
French (fr)
Inventor
Sébastien BILAVARN
Muhammad Khurram BHATTI
Cécile BELLEUDY
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Centre National de la Recherche Scientifique CNRS
Original Assignee
Centre National de la Recherche Scientifique CNRS
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Centre National de la Recherche Scientifique CNRS filed Critical Centre National de la Recherche Scientifique CNRS
Publication of EP2917834A1 publication Critical patent/EP2917834A1/en
Ceased legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/48Program initiating; Program switching, e.g. by interrupt
    • G06F9/4806Task transfer initiation or dispatching
    • G06F9/4843Task transfer initiation or dispatching by program, e.g. task dispatcher, supervisor, operating system
    • G06F9/4881Scheduling strategies for dispatcher, e.g. round robin, multi-level priority queues
    • G06F9/4887Scheduling strategies for dispatcher, e.g. round robin, multi-level priority queues involving deadlines, e.g. rate based, periodic
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F1/00Details not covered by groups G06F3/00 - G06F13/00 and G06F21/00
    • G06F1/26Power supply means, e.g. regulation thereof

Definitions

  • the invention relates to a method for performing single-processor, multiprocessor or multi-core scheduling of tasks with deadline constraints.
  • a task scheduler with deadline constraints means that each task has a completion deadline that it must not exceed.
  • the scheduling performed supports execution on one or more processors. It is run under Linux, or any operating system that supports the POSIX standard, and in particular its POSIX.1c extension, but is not integrated with the kernel: it works in user space.
  • User space is a mode of operation for user applications, as opposed to kernel space which is a supervisory mode of operation that has advanced features with all rights, especially that of accessing all the resources of a microprocessor.
  • the manipulation of resources is nevertheless made possible in user space by so-called API ("Application Programming Interface") functions, which themselves rely on the use of peripheral drivers.
  • developing the solution in user space makes the development of schedulers with temporal constraints possible and easier than a direct integration into the operating system kernel, which is made very difficult by the kernel's complexity. Another advantage is increased stability, because a runtime error in user space has no serious consequence for the integrity of the rest of the system.
  • the method of the invention relies in particular on the mechanisms defined by the POSIX standard ("Portable Operating System Interface", the "X" expressing the Unix inheritance) and its POSIX.1c extension, including the POSIX task structure, POSIX threads ("pthreads") and the POSIX task management APIs.
  • Linux is a system where time management is "time-shared", as opposed to "real-time" management.
  • in this type of management, several factors related to the operation of the system, such as resource sharing, input/output management, interrupts, the influence of virtual memory, etc., cause temporal uncertainties in task execution times. It is therefore not possible to guarantee a "hard real-time" solution, that is, one where the execution of each task completes within strict and inviolable time limits.
  • the proposed scheduler is nevertheless suitable for so-called "soft real-time" systems, or systems "with deadline constraints", which accept variations in data processing times on the order of a second at most. This is the case for a large number of real-time applications, for example multimedia.
  • known real-time extensions of Linux (RTLinux, RTAI, Xenomai) rely on an auxiliary real-time kernel. This auxiliary real-time kernel has priority and deals directly with real-time tasks.
  • other tasks are delegated to the standard Linux kernel, which is treated as a lower-priority task (or job).
  • developing a scheduler directly inside the kernel is very difficult, since it requires thorough knowledge of the kernel and involves heavy kernel developments, with all the complexity and instability that this entails.
  • Linux real-time extensions are supported by a smaller number of platforms.
  • a scheduling method according to the invention is particularly suitable for implementing low-power-oriented scheduling policies, for example an EDF policy ("Earliest Deadline First", i.e. priority to the nearest deadline) - a particular type of scheduling with deadline constraints - combined with dynamic voltage and frequency scaling (DVFS) to minimize consumption, which is important in embedded applications.
  • an object of the invention is therefore a task scheduling method with deadline constraints, based on a model of independent periodic tasks and realized in user space, in which:
  • each task to be scheduled is associated with a data structure, defined in user space and containing at least one piece of temporal information (deadline and/or period) and information indicative of a state of activity of the task, said state of activity being selected from a list comprising at least: "non-existing", "ready", "running" and "waiting";
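The per-task data structure described above can be sketched as follows. This is an illustrative assumption, not the patent's verbatim code: the field names, the enum constants and the well-formedness helper are ours; only the fields themselves (pthread, mutex, condition, deadline, period, WCET, state, assigned processor) come from the text.

```c
/* Sketch (assumption, not the appendix's code) of the per-task structure:
 * a POSIX thread handle, synchronization objects, temporal data and an
 * activity state, all defined in user space. */
#include <pthread.h>

typedef enum { TASK_NONEXISTING, TASK_READY, TASK_RUNNING, TASK_WAITING } task_state;

typedef struct {
    pthread_t       thread;    /* the pthread realizing the task           */
    pthread_mutex_t mutex;     /* mutex associated with the task           */
    pthread_cond_t  cond;      /* resume condition associated with the task */
    float           deadline;  /* relative deadline (ms)                   */
    float           period;    /* activation period (ms)                   */
    float           wcet;      /* worst-case execution time (ms)           */
    task_state      state;     /* activity state                           */
    int             cpu;       /* processor the task is assigned to        */
} user_task;

/* Tiny sanity check (our assumption): a task only makes sense if its WCET
 * fits within its deadline, and its deadline within its period. */
static inline int task_is_wellformed(const user_task *t)
{
    return t->wcet <= t->deadline && t->deadline <= t->period;
}
```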
  • each task modifies said information indicative of its state of activity and, if necessary, according to a predefined scheduling policy, calls a scheduler that is executed in user space;
  • said scheduling policy may be a preemptive policy, for example chosen from among EDF-type policies and their derivatives such as DSF ("Deterministic Stretch-to-Fit", i.e. deterministic stretching and adjustment of the frequency) and AsDPM ("Assertive Dynamic Power Management"), RM ("Rate Monotonic"), DM ("Deadline Monotonic"), and LLF ("Least Laxity First", i.e. scheduling based on the lowest laxity).
  • the method can be implemented in a multiprocessor platform, said data structure also comprising information relating to a processor to which the corresponding task is assigned, and wherein said scheduler assigns to a system processor each task ready to be executed.
  • Said scheduler can change the clock frequency and the supply voltage of the or at least one processor according to a DVFS policy.
  • the method may include an initialization step, during which:
  • the tasks to be scheduled are created, allocated to the same processor and placed in a state of waiting for a resume condition, a global variable called "rendezvous" being incremented or decremented during the creation of each said task;
  • said scheduler is executed for the first time.
  • said data structure may also contain information indicative of a pthread associated with said task and of its worst-case execution time.
  • the method can be run under an operating system compliant with the POSIX.1c standard, which can in particular be a Linux system. In that case:
  • the method may include using a mutex to ensure that only a single instance of the scheduler executes at a time.
  • Assigning a task to a processor can be done using the CPU Affinity API.
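A possible shape for such a processor-assignment helper is sketched below, using the Linux CPU Affinity API. The function name `run_on` is the document's illustrative name; the body is our assumption, and `pthread_setaffinity_np` is a GNU extension (hence `_GNU_SOURCE`).

```c
/* Sketch of a "run_on(Task, CPU)"-style helper using the Linux CPU Affinity
 * API. pthread_setaffinity_np is a GNU extension; the body is our own
 * assumption of how the document's helper could look. */
#define _GNU_SOURCE
#include <pthread.h>
#include <sched.h>

int run_on(pthread_t thread, int cpu)
{
    cpu_set_t mask;
    CPU_ZERO(&mask);      /* start from an empty CPU set          */
    CPU_SET(cpu, &mask);  /* allow exactly one processor          */
    /* returns 0 on success, an errno value on failure */
    return pthread_setaffinity_np(thread, sizeof(mask), &mask);
}
```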
  • Another object of the invention is a computer program product for implementing a scheduling method according to one of the preceding claims.
  • FIG. 1 the principle of implementing a scheduling method according to the invention in Linux user space
  • Figure 2 an example of a set of independent periodic tasks
  • FIG. 3 a diagram of the states and transitions of the application tasks in a scheduling method according to the invention.
  • as shown in FIG. 1, an application is seen as a set of tasks to be scheduled 2.
  • the scheduler 1 controls the execution of these tasks by means of three specific functions (reference 3): "preempt(Task)", "resume(Task)" and "run_on(Task, CPU)".
  • These functions - whose names are arbitrary and given only as a non-limiting example - allow the preemption of a task, the recovery of a task and the allocation of a task to a processor, respectively. They rely on the use of features provided by the Linux API (reference 5) to control tasks and processors from the user space.
  • for its operation, the scheduler must have specific information that derives from the task model used. For a model of periodic and independent tasks, this is at least information on each task's deadline, period and state of activity; other temporal information can also be provided: the worst-case execution time (WCET), the next absolute deadline (that is, the current activation time plus the relative deadline), etc. In the case of a multiprocessor platform, the scheduler also requires information indicating the processor to which the task is assigned. In addition, each task is associated with a POSIX thread ("pthread"), which must also be known to the scheduler.
  • the task structure also contains a mutex associated with the task, a condition associated with the task, and the Linux identifier of the task.
  • the application model consists of periodic and independent tasks.
  • a periodic task model refers to the specification of homogeneous sets of jobs that are repeated at periodic intervals; a "job" is a particular form of task, more precisely an independent task whose execution is strictly independent of the results of the other jobs.
  • a model of independent tasks refers to a set of tasks for which the execution of one task is not subordinate to the situation of another. This model supports synchronous and asynchronous execution of tasks; it is applicable in a large number of real applications that must respect temporal constraints. Knowledge of the temporal characteristics and constraints of the tasks (deadline, period, worst-case execution time, etc.) is therefore necessary to apply this type of scheduling. As explained above, this information is added to a specific structure that extends the standard task type under Linux.
  • FIG. 2 illustrates an application model of periodic and independent tasks comprising four tasks T1 - T4.
  • tasks T1 and T2 must run before their deadline ("deadline", DDLN) of 21 ms (milliseconds),
  • each task executes in an actual execution time AET ("Actual Execution Time"), which lies between a minimum value BCET ("Best Case Execution Time") and a maximum value WCET ("Worst Case Execution Time").
  • Figure 2 refers in particular to a video application where each of the four tasks corresponds to a particular treatment of an image. These four tasks are therefore repeated every 40 ms (period) in order to process 25 frames per second.
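For such a task set, the classic necessary feasibility check for EDF on one processor is that the total utilization, the sum of WCET/period over all tasks, must not exceed 1. This textbook test (Liu and Layland) is background we add for illustration, not something claimed by the patent; the function below is a minimal sketch with our own names.

```c
/* Classic single-processor EDF utilization test (background, not from the
 * patent): U = sum of WCET_i / period_i must satisfy U <= 1. */
static double utilization(const double *wcet, const double *period, int n)
{
    double u = 0.0;
    for (int i = 0; i < n; ++i)
        u += wcet[i] / period[i];
    return u;
}
```

For the video example above (four tasks repeated every 40 ms, i.e. 1000/40 = 25 frames per second), the four WCETs must together fit within the 40 ms period.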
  • One or more queues are used to store tasks. At least one queue is required for the execution of a scheduling with deadlines, to memorize the list of ready tasks.
  • a priority is also assigned to each task to define their order of execution.
  • the priority criterion depends on the scheduling policy. For the EDF ("Earliest Deadline First") policy, for example, the priority criterion is the proximity of the deadline: the task with the nearest deadline has the highest priority.
  • the queue of ready tasks is generally sorted in descending order of priority.
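The EDF ordering of the ready queue can be sketched with a comparator and `qsort`; the structure and the names (`ready_task`, `next_deadline`, `edf_cmp`) are our own illustration, not the appendix's code.

```c
/* Sketch of the EDF ready-queue ordering: sort by increasing absolute
 * deadline, i.e. decreasing priority (names are ours). */
#include <stdlib.h>

struct ready_task { int id; double next_deadline; };

static int edf_cmp(const void *a, const void *b)
{
    const struct ready_task *x = a, *y = b;
    if (x->next_deadline < y->next_deadline) return -1; /* nearer deadline first */
    if (x->next_deadline > y->next_deadline) return  1;
    return 0;
}

static void sort_ready_queue(struct ready_task *q, size_t n)
{
    qsort(q, n, sizeof *q, edf_cmp);
}
```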
  • the priority criterion considered here is different from the priority criteria used in native Linux schedulers (timesharing). The scheduler is called at specific times, called scheduling instants, which are triggered by the tasks themselves at characteristic moments of their execution.
  • these are task events (reference 2 in Figure 1); they correspond for example to a task's activation ("onActivate"), its end of execution ("onBlock"), its reactivation ("onUnBlock") or its termination ("onTerminate") - these names being arbitrary and given only by way of non-limiting example.
  • the events are triggered, at the appropriate moments of the execution of an application task, by calls to the functions "onActivate()", "onBlock()", "onUnBlock()" and "onTerminate()" inserted in its code.
  • Each task event updates the fields of the task structure for the task concerned (state, future due date, period, etc.) and calls the scheduler if necessary.
  • whether or not a task event calls the scheduler depends on the scheduling policy. For an EDF policy, for example, the scheduler is called on the events "onActivate", "onBlock" and "onUnBlock".
  • the event "onActivate” corresponds to the creation of a task. It marks the transition from “nonexistent” to “ready”. For reasons of synchronization when creating tasks (explained in 15.), the onActivate event increments, at the end of its execution, an "appointment” variable, then puts the task on hold for a condition of activation of the scheduler, for example by calling the "pthread_cond_wait” function of the "Pthread Condition Variable" of the POSiX.lc standard.
  • the onBlock event is triggered when a task completes its actual execution (AET). It then goes to the "waiting" state, where it waits until reaching its period. When it reaches its period, the task triggers the "onUnBlock" event, which makes it "ready", and then puts it on hold for a resume condition, for example by calling the POSIX function pthread_cond_wait. This condition will be signaled by the scheduler by means of the function "pthread_cond_broadcast", which also belongs to the "Pthread Condition Variable" mechanism of the POSIX.1c standard.
  • a task can be preempted by another task that has become higher priority (in the case of an EDF algorithm, because its deadline has become closer than that of the task being executed). Preemption of a task changes it from "running" to "ready". Conversely, a task that resumes following a preemption goes from the "ready" state to the "running" state.
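The state transitions described in the last three paragraphs can be encoded as a small transition function. This is our own sketch of the diagram of FIG. 3 as we read it from the text; the enum and function names are ours, and the "onTerminate from any state" rule is an assumption.

```c
/* Sketch of the task state machine described above (names are ours):
 * onActivate: non-existing -> ready; onBlock: running -> waiting;
 * onUnBlock: waiting -> ready; preempt: running -> ready;
 * resume: ready -> running; onTerminate: any -> non-existing (assumed). */
typedef enum { NONEXISTING, READY, RUNNING, WAITING } state_t;
typedef enum { ON_ACTIVATE, ON_BLOCK, ON_UNBLOCK, PREEMPT, RESUME, ON_TERMINATE } event_t;

/* Returns the next state, or -1 for a transition the text does not allow. */
static int next_state(state_t s, event_t e)
{
    switch (e) {
    case ON_ACTIVATE:  return s == NONEXISTING ? READY   : -1;
    case ON_BLOCK:     return s == RUNNING     ? WAITING : -1;
    case ON_UNBLOCK:   return s == WAITING     ? READY   : -1;
    case PREEMPT:      return s == RUNNING     ? READY   : -1;
    case RESUME:       return s == READY       ? RUNNING : -1;
    case ON_TERMINATE: return NONEXISTING;
    }
    return -1;
}
```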
  • the scheduler performs, by means of a main function "select_" (name given as a non-limiting example), the following actions: establishment of a queue of ready tasks, sorting of the queue in decreasing order of priority (highest-priority task at the head of the list), preemption of non-priority running tasks, allocation of eligible tasks to free processors, and start of eligible tasks by signaling a resume condition.
  • the determination of eligible tasks requires the scheduler to know the status of all existing tasks.
  • the action list given here corresponds to an EDF scheduling, but can be modified - and in particular enriched - for the realization of other scheduling policies.
  • a mutex ("MUTual EXclusion") is locked at the beginning of the scheduler run and released at the end of the scheduler call. This ensures that only one instance of the scheduler executes at a time, allowing the shared variables to be modified without risk.
  • the first execution of the scheduler occurs just after the creation of the tasks. To ensure scheduler control of the tasks, a synchronization must be performed to ensure that all tasks have finished being created before the scheduler runs, and that tasks do not start running until the scheduler has ordered them to. For this, first of all, a global "rendezvous" variable is initialized to 0 before the tasks are created. Then all the tasks are created and placed on the first processor of the system. At the very beginning of their creation, in other words when the onActivate() event is executed, each task increments the "rendezvous" variable and immediately suspends its execution by waiting for a resume condition (using the POSIX function "pthread_cond_wait").
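The start-up rendezvous just described can be sketched with a shared counter, a mutex and a condition variable. The structure and all names (`demo_rendezvous`, `go`, `NTASKS`) are our own minimal illustration under the stated assumptions, not the appendix's `start_sched` code.

```c
/* Sketch of the start-up rendezvous: each created task increments a shared
 * counter under a mutex and blocks on a condition; once the counter equals
 * the number of tasks, the "scheduler" broadcasts the start condition. */
#include <pthread.h>

#define NTASKS 4

static pthread_mutex_t lock  = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  start = PTHREAD_COND_INITIALIZER;
static int rendezvous = 0;   /* incremented once per created task      */
static int go = 0;           /* set by the scheduler to release tasks  */

static void *task_body(void *arg)
{
    (void)arg;
    pthread_mutex_lock(&lock);
    rendezvous++;            /* "onActivate": report creation          */
    while (!go)              /* wait for the scheduler's order         */
        pthread_cond_wait(&start, &lock);
    pthread_mutex_unlock(&lock);
    return NULL;
}

/* Creates the tasks, waits until all have reached the rendezvous, then
 * releases them; returns the final rendezvous count. */
int demo_rendezvous(void)
{
    pthread_t t[NTASKS];
    for (int i = 0; i < NTASKS; ++i)
        pthread_create(&t[i], NULL, task_body, NULL);

    for (;;) {               /* first scheduler run: wait for check-in */
        pthread_mutex_lock(&lock);
        if (rendezvous == NTASKS) {
            go = 1;
            pthread_cond_broadcast(&start);
            pthread_mutex_unlock(&lock);
            break;
        }
        pthread_mutex_unlock(&lock);
    }
    for (int i = 0; i < NTASKS; ++i)
        pthread_join(t[i], NULL);
    return rendezvous;
}
```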
  • once all the tasks have reached this rendezvous, the scheduler can be executed. During this execution, the scheduler resumes the execution of the eligible tasks by signaling their resume condition (function "pthread_cond_broadcast").
  • to control the execution of the application tasks, the scheduler requires two specific functions for the suspension and the resumption of a task, called respectively "preempt(Task)" and "resume(Task)" in Figure 1 (reference 3), these names being given solely by way of non-limiting example.
  • the preempt(Task) function is based on the use of the signals ("Signal") of the POSIX standard.
  • the signal SIGUSR1 is sent to the corresponding pthread (POSIX function "pthread_kill").
  • the associated signal handler ("sigusr1") puts the task on hold for a resume condition ("pthread_cond_wait") upon receipt of the signal.
  • the resume(Task) function signals the appropriate resume condition to the task in question so that it resumes execution. It is therefore strictly equivalent to calling the POSIX function pthread_cond_broadcast; in fact, the creation of a specific function is essentially justified for the sake of legibility of the code. This mechanism exploits the fact that, in practice, the SIGUSR1 signal handler is executed by the Linux "pthread" which receives the signal.
  • the signal handler can identify the "pthread" to suspend (necessary to send the waiting condition to the "pthread" concerned by the pthread_cond_wait function) by comparing, for example, its own process identifier (tid) with that of all the "pthreads" of the application.
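The preempt/resume mechanism can be sketched as below: `preempt` sends SIGUSR1 to the task's pthread, and the handler, which runs in the context of the receiving thread, blocks that thread on a condition until `resume` broadcasts it. This mirrors the mechanism the text describes, but the code itself is our assumption; note in particular that `pthread_cond_wait` is not formally async-signal-safe, even though the approach works in practice on Linux.

```c
/* Sketch of preempt/resume via SIGUSR1 (our illustration of the described
 * mechanism; names are ours, and the handler relies on Linux delivering
 * the signal to the targeted thread). */
#include <pthread.h>
#include <signal.h>
#include <unistd.h>

static pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  resume_cond = PTHREAD_COND_INITIALIZER;
static volatile int resumed = 0;
static volatile int done = 0;
static volatile long counter = 0;

static void sigusr1_handler(int sig)
{
    (void)sig;                     /* runs in the preempted thread     */
    pthread_mutex_lock(&m);
    while (!resumed)
        pthread_cond_wait(&resume_cond, &m);  /* suspended here */
    pthread_mutex_unlock(&m);
}

static void *task_body(void *arg)
{
    (void)arg;
    while (!done)
        counter++;                 /* stand-in for the task's real work */
    return NULL;
}

static void preempt(pthread_t t) { pthread_kill(t, SIGUSR1); }

static void resume(void)
{
    pthread_mutex_lock(&m);
    resumed = 1;
    pthread_cond_broadcast(&resume_cond);
    pthread_mutex_unlock(&m);
}

/* Returns 1 if the counter froze while preempted and moved again after
 * resume. */
int demo_preempt_resume(void)
{
    signal(SIGUSR1, sigusr1_handler);
    pthread_t t;
    pthread_create(&t, NULL, task_body, NULL);
    usleep(20000);
    preempt(t);
    usleep(20000);
    long c1 = counter;
    usleep(50000);
    long c2 = counter;             /* should equal c1: task suspended   */
    resume();
    usleep(50000);
    done = 1;
    pthread_join(t, NULL);
    return c2 == c1 && counter > c2;
}
```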
  • the scheduler does not use the preemption and job recovery mechanisms provided by the kernel because they are not accessible in user space.
  • the scheduler requires an explicit function to fix the execution of a task on a given processor.
  • there is an API, called CPU Affinity, for controlling the execution of Linux "pthreads" by the CPUs, but there is no specific function for allocating a task to a processor.
  • a specific function "change_freq" (name given only as a non-limiting example) is used, in the case of a scheduler using DVFS techniques, to control the dynamic change of voltage/frequency of the processors (Figure 1, reference 3).
  • this function uses a Linux API called "CPUFreq", which allows the frequency of each processor to be changed via the virtual file system "sysfs". For example, it suffices to write the desired frequency to a file - the file "/sys/devices/system/cpu/cpu0/cpufreq/scaling_setspeed" on Linux - to change the frequency of processor 0.
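A "change_freq"-style helper can be sketched as a simple write of the desired frequency into the CPUFreq file. The function name and signature are our assumption; the sysfs path is taken from the text, but is passed as a parameter here so the helper can be exercised against any writable file.

```c
/* Sketch of a change_freq-style helper: write a frequency (in kHz) into a
 * CPUFreq scaling_setspeed file (path passed in; our illustration). */
#include <stdio.h>

int write_freq(const char *scaling_setspeed_path, long freq_khz)
{
    FILE *f = fopen(scaling_setspeed_path, "w");
    if (!f)
        return -1;                 /* no such CPU, or no permission */
    int ok = fprintf(f, "%ld\n", freq_khz) > 0;
    fclose(f);
    return ok ? 0 : -1;
}
```

In the document's setting this would be called with "/sys/devices/system/cpu/cpu0/cpufreq/scaling_setspeed", which additionally requires the userspace governor to be selected and sufficient privileges.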
  • the scheduler can use the function change_freq to allow the modification of the frequency by writing back to the file system.
  • the scheduler of the invention has been applied to the realization of a "DSF" ("Deterministic Stretch-to-Fit") type of scheduling, in which the frequency of the processors (and therefore the voltage, which depends on it) is recalculated at each scheduling event, using the actual execution time (AET) for completed tasks and the worst-case execution time (WCET) for the others. The principle is to allocate the slack time of the previous task to the next task, which reduces the operating frequency of the processor concerned.
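The slack-reallocation idea behind DSF can be illustrated numerically. The formula below is our own simplified reading, not the appendix's actual computation: a completed task that ran for aet < wcet leaves slack = wcet - aet, which is granted to the next task, allowing its frequency to be scaled by wcet_next / (wcet_next + slack).

```c
/* Minimal numeric sketch (our assumption, not the appendix's formulas) of
 * DSF-style slack reallocation: the scaling ratio by which the processor
 * frequency can be reduced for the next task. */
double dsf_scaling_ratio(double aet_prev, double wcet_prev, double wcet_next)
{
    double slack = wcet_prev - aet_prev;    /* time freed by early completion */
    if (slack <= 0.0)
        return 1.0;                         /* no slack: keep full frequency  */
    return wcet_next / (wcet_next + slack); /* < 1: processor can slow down   */
}
```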
  • the appendix contains the source code, written in C language, of a computer program for implementing a scheduling method according to the invention, using the DSF-DVFS technique described above. This code is given only by way of non-limiting example.
  • the code consists of a plurality of files contained in two directories: DSF_Scheduler and pm_drivers.
  • the DSF_Scheduler directory contains the core of the DSF scheduler code. It includes, in particular, the prototype file N_scheduler.h, which contains the definition of the necessary types (task structure, processor structure, etc.), the global variables (task list, processor list, scheduler parameters, etc.), and the prototypes of the exported functions.
  • the DSF_Scheduler.c file then describes the main functions of the scheduler. The most important is the select_ function, which describes the entire scheduling process performed at each scheduling instant. The onActivate, onBlock, onUnBlock and onTerminate task events that trigger scheduling instants are also described in this file. The scheduling function relies on a slow_down function that calculates and applies the frequency change. Finally, the particularities of the scheduler at the start of the application required the development of a specific function, start_sched. It runs only once, at the very beginning of the launch of the application, and is largely based on the code of the main scheduler (select_). For all other scheduling instants, the select_ function is called via the call_scheduler function.
  • the Makefile file is used for compilation.
  • N_application.c is the file containing the "main" of the program. It carries out the necessary initializations, creates the POSIX tasks, launches the scheduler and synchronizes the whole. It should be noted here that an application is viewed as a set of tasks, each with temporal characteristics such as the worst-case execution time (WCET), period, deadline, and so on. The tasks used perform a simple processing (multiplication of the elements of an array of integers) for a certain execution time (AET) - which is described by the function usertask_actualexec - then go into standby mode until their reactivation (usertask_sleepexec) when they reach their period.
  • the file N_utils.c contains, for example, functions for initialization, preemption of tasks, allocation of tasks to the processors, sorting, display, etc.
  • the pm_drivers directory contains the lowest-level functions on which the scheduler relies. More specifically, these are the prototype files intel_core_i5_m520.h and pm_typedef.h and the file pm_cpufreq.c.
  • the prototype file intel_core_i5_m520.h contains the description of the target platform whose name it bears: information about the number of processors in the platform and the possible frequencies for each of the cores, represented as states.
  • the prototype file pm_typedef.h contains the types used, e.g. the processor states.
  • the pm_cpufreq.c file contains, in particular, the functions allowing the effective dynamic frequency change on the runtime platform, relying on the Linux CPUfreq API.
  • dynamic voltage-frequency change is managed under Linux by policies called governors, known per se. More precisely, to be able to change the frequency of the processor(s), one must first select the so-called userspace governor. A function set_governor is provided for this. Then, the interaction with the Linux frequency-change driver is done via exchange files located in /sys/devices/system/cpu/cpuX/cpufreq/, where X represents the number of the processor concerned.
  • the open_cpufreq and close_cpufreq functions allow the scaling_setspeed exchange file to be opened and closed.
  • the change of frequency is done by the function CPU_state_manager_apply_state.
  • the scheduling method of the invention can be applied to a single-processor platform and / or not implement mechanisms for reducing consumption.
  • a scheduler according to the invention can be implemented under another operating system compatible with the POSIX standard, and more specifically with the IEEE POSIX 1c standard (or, equivalently, POSIX 1003.1c), provided that it supports equivalents of the CPU Affinity API (for multiprocessor applications) and the CPUfreq API (for applications using dynamic voltage-frequency change).
  • the reproduction in user space of a task scheduler with deadline constraints as described above is only feasible under certain conditions that depend on the operating system (OS). These conditions are for example the following.
  • the method according to the invention requires support for the notion of preemptive scheduling in user space.
  • the explicit preemption of a task from user space is a feature that is not feasible in all OSs.
  • the task model must include deadline-constraint and state attributes. The ability to extend the task model with these attributes, accessible in user mode, depends on the OS.
  • the method according to the invention does not require any explicit intervention of the supervisor mode and is based exclusively on user-mode mechanisms in the form of APIs.
  • the standard POSIX Linux (pthread) task structure can be generalized to a standard POSIX (thread) task structure under any OS that enables it to be realized.
  • the implementation of the invention for this generalized task structure requires, like the POSIX Linux standard task structure (pthread) of Figure 1, its extension by parameters accessible in user space.
  • the information concerning the temporal characteristics and constraints of the tasks is added in a specific structure that extends the POSIX standard task type by parameters made accessible in user space.
  • the method of preemption of POSIX tasks in user space implemented in the invention does not exist in so-called "general public" OSs, that is to say those devoid of real-time constraints.
  • in general, to control the execution of the application tasks, the scheduler requires an explicit preemption mechanism for POSIX tasks in user space, which is based on the use of two specific functions for the suspension and the resumption of a task.
  • float aet; // AET: actual execution time (bcet <= aet <= wcet)  float ret; // RET: remaining execution time
  • float deadline; // absolute deadline  float next_deadline; // relative deadline
  • total_eet[job->id-1] = total_eet[job->id-1] + (abs_eet[job->id-1] / SR_total[job-
  • job->preemptdelay = sched_time - job->last_preempt_time;
  • job->ret = job->aet - total_eet[job->id-1];
  • job->ret = (job->aet - total_eet[job->id-1]);
  • job->ret = job->aet;
  • printf("T%d is assigned to CPU %d\n", job->id, P->cpu_id);
  • job->last_preempt_time = job->begintime;
  • begin_sched_time = get_time();
  • randomvalue = (double) rand() / RAND_MAX;
  • task->aet = (randomvalue * (task->wcet - task->bcet) + task->bcet);
  • task->next_deadline = task->next_period + task->deadline;
  • task->next_period = task->next_period + task->period; task->preemptdelay = 0.;
  • task->state = nonexisting;
  • CPU_CLR(i, &mask);
  • proc_id, job->id, proc_id, wcet_Fn_1[job->id-1], SR_total[proc_id], estimated_WCET_Fn_1[proc_id]);
  • proc_id, job->id, proc_id, aet_Fn_1[job->id-1], SR_total[proc_id], estimated_AET_Fn_1[proc_id]);
  • t_available_Fn_1[proc_id] = wcet_Fn_1[job->id-1] + slack_local[proc_id];
  • proc_id, job->id, proc_id, wcet_Fn_1[job->id-1], slack_local[proc_id], t_available_Fn_1[proc_id]);
  • proc_id, proc_id, proc_id, t_available_Fn_1[proc_id], estimated_WCET_Fn_1[proc_id], SR_local[proc_id]);
  • wcet_Fn_1[job->id-1] = (float)(wcet_Fn_1[job->id-1] * SR_local[proc_id]);
  • aet_Fn_1[job->id-1] = (double)(aet_Fn_1[job->id-1] * SR_local[proc_id]);
  • CPU_state_manager_apply_state(proc_id, &State);
  • SR_total[proc_id] = SR_total[proc_id] * SR_local[proc_id];
  • nb_exec[i] = 0;
  • IDLE_to_RUNNING = 0;
  • printf("WARNING T%d IS ALLOCATED MORE THAN ONE CPU\n", T[i]->id);
  • list[i] = list[min];
  • tickCount++;  task->begintime - sched_time

Abstract

A method for scheduling tasks with deadline constraints, based on a model of independent periodic tasks and carried out in user space by means of POSIX APIs.

Description

METHOD FOR SCHEDULING WITH DEADLINE CONSTRAINTS, IN PARTICULAR UNDER LINUX, CARRIED OUT IN USER SPACE
The invention relates to a method for performing single-processor, multiprocessor or multi-core scheduling of tasks with deadline constraints. A task scheduler with deadline constraints means that each task has a completion deadline that it must not exceed. The scheduling performed supports execution on one or more processors. It is run under Linux, or any other operating system that supports the POSIX standard, and in particular its POSIX.1c extension, but it is not integrated with the kernel: it works in user space.
User space is a mode of operation for user applications, as opposed to kernel space, which is a supervisory mode of operation with advanced features and all rights, in particular that of accessing all the resources of a microprocessor. The manipulation of resources is nevertheless made possible in user space by so-called API ("Application Programming Interface") functions, which themselves rely on the use of peripheral drivers. Developing the solution in user space makes the development of schedulers with temporal constraints possible and easier than a direct integration into the operating system kernel, which is made very difficult by the kernel's complexity. Another advantage is increased stability, because a runtime error in user space has no serious consequence for the integrity of the rest of the system.
In order to carry out scheduling in user space, the method of the invention relies in particular on the mechanisms defined by the POSIX standard ("Portable Operating System Interface - X", the "X" expressing the Unix heritage) and its POSIX.1c extension, notably the POSIX task structure, POSIX threads ("pthreads") and the POSIX task management APIs. It should be recalled that a process is a running program instance, a "task" is a subdivision of a process, and a "thread" is the realization of a task under Linux by means of the POSIX APIs; a thread or task cannot exist without a process, but there can be several threads or tasks per process. On this subject, reference may be made to Robert Love, "Linux System Programming", O'Reilly, 2007.
Linux is a system whose time management is said to be "time-sharing", as opposed to "real-time" management. With this type of management, several factors related to the operation of the system, such as resource sharing, input/output management, interrupts, the influence of virtual memory, etc., introduce timing uncertainties into task execution times. It is therefore not possible to guarantee a "hard real-time" solution, that is, one where the execution of each task completes within strict and inviolable time limits. The proposed scheduler implementation is nevertheless suitable for so-called "soft real-time" systems, or systems "with deadline constraints", which accept variations in data processing of up to about one second. This is the case for a large number of real-time applications, for example multimedia.
Although Linux is not natively a real-time system, several technical solutions exist to make the Linux kernel compatible with real-time constraints. The most common solution consists in associating it with an auxiliary real-time kernel that has a true real-time scheduler (RT Linux, RTAI, XENOMAI). This auxiliary real-time kernel has priority and directly handles the real-time tasks. The other (non-real-time) tasks are delegated to the standard Linux kernel, which is treated as a lower-priority background task (or job). However, developing a specific scheduler integrated into the kernel is very difficult, since it requires in-depth knowledge of the kernel and involves heavy kernel development, with all the complexity and instability that this entails. Moreover, the Linux real-time extensions are supported by a smaller number of platforms. The proposed solution allows an implementation in user space that is therefore simpler, more stable and applicable to any platform supporting a standard Linux kernel. This approach cannot guarantee hard real-time constraints, but it considerably simplifies the development of a scheduler while remaining suitable for soft real-time applications (the vast majority of applications).
A scheduling method according to the invention is particularly suitable for implementing low-power-oriented scheduling policies, for example an EDF ("Earliest Deadline First") policy, which is a particular type of scheduling with deadline constraints, combined with dynamic voltage and frequency scaling (DVFS) in order to minimize power consumption, which is important in embedded applications. See the following articles on this subject:
- R. Chéour et al., "EDF scheduler technique for wireless sensors networks: case study", Fourth International Conference on Sensing Technology, 3-5 June 2010, Lecce, Italy;
- R. Chéour et al., "Exploitation of the EDF scheduling in the wireless sensors networks", Measurement Science and Technology Journal (IOP), Special Issue on Sensing Technology: Devices, Signals and Materials, 2011.
There is much work on low-power scheduling, but it remains essentially theoretical and is almost never integrated into an operating system because of the complexity of such work. The proposed solution, being entirely realized in user space, greatly facilitates this integration.
An object of the invention is therefore a method for scheduling tasks with deadline constraints, based on a model of independent periodic tasks and carried out in user space, in which:
• each task to be scheduled is associated with a data structure, defined in user space and containing at least one piece of timing information (deadline and/or period) and information indicative of an activity state of the task, said activity state being chosen from a list comprising at least:
- a running task state;
- a task state waiting for the end of its execution period; and
- a task state ready to be executed, waiting for a resume condition;
• during its execution, each task modifies said information indicative of its activity state and, where appropriate, according to a predefined scheduling policy, calls a scheduler that is executed in user space;
• at each call, said scheduler:
- establishes a queue of the tasks that are ready to be executed, waiting for a resume condition;
- sorts said queue according to a predefined priority criterion;
- if necessary, preempts a running task by sending it a signal forcing it into said state of a task ready to be executed, waiting for a resume condition; and
- sends said resume condition at least to the task at the head of said queue.
According to various embodiments of the method of the invention:
Said scheduling policy may be a preemptive policy, for example chosen from among an EDF-type policy and its derivatives such as DSF ("Deterministic Stretch-to-Fit") and AsDPM ("Assertive Dynamic Power Management"), RM ("Rate Monotonic"), DM ("Deadline Monotonic") and LLF ("Least Laxity First").
The method may be implemented on a multiprocessor platform, said data structure also comprising information relating to a processor to which the corresponding task is assigned, in which case said scheduler assigns each task ready to be executed to a processor of the system. Said scheduler may modify the clock frequency and the supply voltage of the processor, or of at least one processor, according to a DVFS policy.
The method may comprise an initialization step, during which:
- the tasks to be scheduled are created, assigned to a single processor and placed in a state of waiting for a resume condition, a global so-called rendez-vous variable being incremented or decremented upon the creation of each said task;
- when said rendez-vous variable reaches a predefined value indicating that all the tasks have been created, said scheduler is executed for the first time.
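This initialization rendez-vous can be sketched as follows, under stated assumptions: the names (`on_task_created`, `rendezvous`, `NUM_TASKS`) and the flag standing in for the first scheduler call are illustrative, not taken from the patent.

```c
#include <pthread.h>

/* Illustrative sketch of the initialization rendez-vous: each task,
   when created, increments a shared counter under a mutex; when the
   counter reaches the number of tasks, the scheduler would be run
   for the first time (represented here by a flag). */
#define NUM_TASKS 4

static pthread_mutex_t rdv_lock = PTHREAD_MUTEX_INITIALIZER;
static int rendezvous = 0;
static int scheduler_started = 0;

void on_task_created(void) {
    pthread_mutex_lock(&rdv_lock);
    if (++rendezvous == NUM_TASKS)
        scheduler_started = 1;   /* stands in for the first scheduler call */
    pthread_mutex_unlock(&rdv_lock);
}
```

The mutex matters because, in the real system, each task runs in its own pthread and the increments would otherwise race.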
Said data structure may also contain information indicative of a pthread associated with said task and of its worst-case execution time.
The method may be executed under an operating system compliant with the POSIX.1c standard, which may in particular be a Linux system. In that case:
At each call of the scheduler, a "pthread" is created to carry out its execution.
The method may comprise the use of a MUTEX to ensure that only one instance of the scheduler executes at a time.
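One plausible way to obtain this single-instance guarantee is `pthread_mutex_trylock`: a second concurrent call finds the mutex taken and simply returns. This is an illustrative sketch, not the patent's code; the function name is an assumption.

```c
#include <pthread.h>

/* Illustrative: at most one scheduler instance runs at a time.
   A concurrent second call does not block; it detects that an
   instance is already running and returns immediately. */
static pthread_mutex_t sched_instance = PTHREAD_MUTEX_INITIALIZER;

int scheduler_try_run(void) {
    if (pthread_mutex_trylock(&sched_instance) != 0)
        return 0;                  /* another instance is running */
    /* ... scheduling work would go here ... */
    pthread_mutex_unlock(&sched_instance);
    return 1;                      /* this call performed the scheduling */
}
```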
The assignment of a task to a processor may be carried out by means of the CPU affinity API.
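On Linux, the affinity assignment can be sketched with `pthread_setaffinity_np` (a GNU extension); the wrapper name `run_on` echoes the function named later in the description, but this body is an illustrative assumption, not the patent's implementation.

```c
#define _GNU_SOURCE
#include <pthread.h>
#include <sched.h>

/* Illustrative: pin a task's pthread to a single CPU using the
   Linux CPU affinity API. Returns 0 on success. */
int run_on(pthread_t thread, int cpu) {
    cpu_set_t set;
    CPU_ZERO(&set);
    CPU_SET(cpu, &set);
    return pthread_setaffinity_np(thread, sizeof set, &set);
}
```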
Another object of the invention is a computer program product for implementing a scheduling method according to one of the preceding claims.
Other features, details and advantages of the invention will become apparent upon reading the description given with reference to the accompanying drawings, provided by way of example, which respectively represent:
Figure 1, the principle of implementing a scheduling method according to the invention in Linux user space; Figure 2, an example of a set of independent periodic tasks; and
Figure 3, a diagram of the states and transitions of the application tasks in a scheduling method according to the invention.
The principle of implementing a user-space scheduler according to one embodiment of the invention, based on a Linux operating system, is illustrated in Figure 1. An application is seen as a set of tasks 2 to be scheduled. The scheduler 1 controls the execution of these tasks by means of three specific functions (reference 3): "preempt(Task)", "resume(Task)" and "run_on(Task, CPU)". These functions, whose names are arbitrary and given only as non-limiting examples, allow the preemption of a task, the resumption of a task and the allocation of a task to a processor, respectively. They rely on features provided by the Linux APIs (reference 5) that allow tasks and processors to be controlled from user space.
For its operation, the scheduler must have specific information derived from the task model used. For a model of periodic, independent tasks, this is at least information relating to each task's deadline, its period and its activity state; other timing information may also be provided: worst-case execution time (WCET), next deadline (that is, current time plus deadline; to avoid confusion with the "next deadline", the "deadline" is sometimes called the "absolute deadline"), etc. In the case of a multiprocessor platform, the scheduler also requires information indicating the processor to which the task is assigned. In addition, each task is associated with a POSIX thread ("pthread"), which must also be known to the scheduler.
Other information that may be required by the scheduler includes: a MUTEX associated with the task, a condition associated with the task, and a Linux identifier of the task.
The best-case execution time (BCET) and the actual execution time (AET) are pieces of information which, without being indispensable for the scheduling itself, are very useful for debugging, because they make it possible (the AET in particular) to fix the execution time of a task (the execution time of a task normally varies from one run to another). This facilitates verification of the scheduling by allowing it to be compared with simulation results (with tasks having the same parameters and, above all, exactly the same execution times).
In the standard POSIX Linux task structure (pthread), this information is not accessible in user space. For this reason, the implementation of the invention requires an extension of this task structure, through the creation of a custom type using a data structure.
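Such an extended task type could be sketched as follows; the field and type names are assumptions for illustration only, since the patent does not give the structure's layout.

```c
#include <pthread.h>

/* Illustrative sketch of the custom task type extending the standard
   pthread with the scheduling information listed above. All names
   are assumptions, not taken from the patent. */
typedef enum {
    TASK_NONEXISTENT,   /* before activation            */
    TASK_READY,         /* ready, waiting to resume     */
    TASK_RUNNING,       /* currently executing          */
    TASK_WAITING        /* waiting for end of period    */
} task_state_t;

typedef struct {
    pthread_t       thread;        /* POSIX thread realizing the task   */
    long            deadline_ms;   /* relative deadline                 */
    long            period_ms;     /* activation period                 */
    long            wcet_ms;       /* worst-case execution time         */
    long            next_deadline; /* current time plus deadline        */
    int             cpu;           /* processor the task is mapped to   */
    task_state_t    state;         /* activity state (cf. Figure 3)     */
    pthread_mutex_t lock;          /* per-task MUTEX                    */
    pthread_cond_t  resume;        /* resume condition, signaled by the scheduler */
} rt_task_t;
```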
The application model consists of periodic, independent tasks. A periodic task model refers to the specification of homogeneous sets of jobs that repeat at periodic intervals; a "job" is a particular form of task, more precisely an independent task whose execution is strictly independent of the results of the other "jobs". An independent task model refers to a set of tasks in which the execution of one task is not subordinate to the situation of another. This model supports synchronous and asynchronous execution of tasks; it is applicable to a large number of real applications that must meet timing constraints. Knowledge of the timing characteristics and constraints of the tasks (deadline, period, worst-case execution time, etc.) is therefore necessary to apply this type of scheduling. As explained above, this information is added in a specific structure that extends the standard task type under Linux.
Figure 2 illustrates an application model of periodic, independent tasks comprising four tasks T1-T4. Tasks T1 and T2 must execute before their deadline ("DDLN") of 21 ms (milliseconds), T3 before its deadline of 31 ms, and T4 before its deadline of 40 ms. Each task executes in an actual execution time AET ("Actual Execution Time"), which lies between a minimum value BCET ("Best Case Execution Time") and a maximum value WCET ("Worst Case Execution Time"). At the end of its execution, a task waits until it reaches its period, at which point it again becomes ready for re-execution. Thus, at any moment a task is in one of the following states:
- a running task state;
- a task state waiting for the end of its execution period; and
- a task state ready to be executed, waiting for a resume condition;
to which may be added a nonexistent task state, before its activation.
The example of Figure 2 refers in particular to a video application in which each of the four tasks corresponds to a particular processing of an image. These four tasks are therefore repeated every 40 ms (the period) in order to process 25 frames per second.
One or more queues are used to store the tasks. At least one queue is necessary for carrying out scheduling with deadline constraints, in order to store the list of ready tasks. A priority is also associated with each task to define their order of execution. The priority criterion depends on the scheduling policy. For the EDF ("Earliest Deadline First") policy, for example, the priority criterion is the proximity of the deadline: the task with the nearest deadline has the highest priority. The queue of ready tasks is generally sorted in decreasing order of priority. The priority criterion considered here is different from the priority criteria used in the native Linux schedulers (time-sharing). The scheduler is called at specific instants, called scheduling instants, which are triggered by the tasks themselves at characteristic moments of their execution. These moments are called task events (reference 2 in Figure 1); they correspond, for example, to their activation ("onActivate"), their end of execution ("onBlock"), their reactivation ("onUnBlock") or their termination ("onTerminate"), these names being arbitrary and given only as non-limiting examples. The events are triggered, at the appropriate moments in the execution of an application task, by calls to functions "onActivate()", "onBlock()", "onUnBlock()", "onTerminate()" inserted into its code. Each task event updates the fields of the task structure for the task concerned (state, next deadline, period, etc.) and calls the scheduler if necessary. Whether or not a task event calls the scheduler depends on the scheduling policy. For an EDF policy, for example, the scheduler is called on the "onActivate", "onBlock" and "onUnBlock" events.
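The EDF priority criterion described above, sorting the ready queue by nearest absolute deadline, can be sketched as follows (an illustrative sketch; the patent does not give this code, and the type and function names are assumptions):

```c
#include <stdlib.h>

/* Illustrative EDF ready-queue sorting: the task with the nearest
   next (absolute) deadline has the highest priority and is placed
   at the head of the queue. */
typedef struct {
    const char *name;
    long next_deadline;   /* current time plus deadline */
} ready_task_t;

static int edf_compare(const void *a, const void *b) {
    long da = ((const ready_task_t *)a)->next_deadline;
    long db = ((const ready_task_t *)b)->next_deadline;
    return (da > db) - (da < db);   /* earlier deadline first */
}

void sort_ready_queue(ready_task_t *queue, size_t n) {
    qsort(queue, n, sizeof *queue, edf_compare);
}
```

With the deadlines of Figure 2, a ready queue containing T3 (31 ms), T1 (21 ms) and T4 (40 ms) would be reordered to T1, T3, T4.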
The "onActivate" event corresponds to the creation of a task. It marks the transition from the "nonexistent" state to "ready". For reasons of synchronization at task creation (explained in 15.), the onActivate event increments, at the end of its execution, a "rendez-vous" variable, then puts the task in a state of waiting for a scheduler activation condition, for example by calling the "pthread_cond_wait" function of the "Pthread Condition Variable" API of the POSIX.1c standard.
The "onBlock" event is triggered when a task finishes its actual execution (AET). The task then passes to the "waiting" state, where it waits until it reaches its period. When it reaches its period, the task triggers the "onUnBlock" event, which moves it to the "ready" state, then puts it in a state of waiting for a resume condition, for example by calling the POSIX function pthread_cond_wait. This condition will be signaled by the scheduler by means of the "pthread_cond_broadcast" function, which also belongs to the "Pthread Condition Variable" API of the POSIX.1c standard.
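The wait/resume handshake between a ready task and the scheduler might look like the following minimal sketch, assuming a shared mutex, condition variable and flag (all names are illustrative, not from the patent):

```c
#include <pthread.h>

/* Illustrative sketch: a ready task blocks on a condition variable
   via pthread_cond_wait; the scheduler later wakes it with
   pthread_cond_broadcast. The flag guards against spurious wakeups,
   as POSIX requires waiting in a loop on a predicate. */
static pthread_mutex_t sched_lock  = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  resume_cond = PTHREAD_COND_INITIALIZER;
static int resume_flag = 0;

void task_wait_for_resume(void) {
    pthread_mutex_lock(&sched_lock);
    while (!resume_flag)                        /* predicate loop */
        pthread_cond_wait(&resume_cond, &sched_lock);
    pthread_mutex_unlock(&sched_lock);
}

void scheduler_resume_tasks(void) {
    pthread_mutex_lock(&sched_lock);
    resume_flag = 1;
    pthread_cond_broadcast(&resume_cond);       /* wake waiting tasks */
    pthread_mutex_unlock(&sched_lock);
}
```

In a full implementation, each task would have its own condition in its task structure, so the scheduler can resume exactly the task at the head of the ready queue.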
At any time during its execution, a task can be preempted by another task that has become higher priority (in the case of an EDF algorithm, because its deadline has become closer than that of the currently running task). Preempting a task moves it from the "running" state to the "ready" state. Conversely, the task that resumes following a preemption moves from the "ready" state to the "running" state.
The state transitions triggered by task events are illustrated in Figure 3. To avoid deadlocks, the scheduler is not called directly by the task events but through a "pthread". In other words, the function that implements a task event (onActivate, onBlock, onUnBlock, onTerminate) calls another function, "call_scheduler" (name given by way of non-limiting example), which creates a scheduling "pthread" to execute the scheduler.
At each call, the scheduler performs, by means of a main function "select_" (name given by way of non-limiting example), the following actions: building a queue of ready tasks, sorting the queue in decreasing priority order (highest-priority task at the head of the list), preempting lower-priority running tasks, allocating eligible tasks to free processors, and starting the eligible tasks by signaling a resume condition. Determining the eligible tasks requires the scheduler to know the state of all existing tasks. The list of actions given here corresponds to EDF scheduling, but it can be modified, and in particular enriched, to implement other scheduling policies.
Because execution is multitask and multiprocessor, several instances of the scheduler could be called to run at the same time. To prevent this, a mutex ("MUTual EXclusion device") is used to protect certain shared variables, such as the queue of ready tasks. The mutex is locked at the beginning of the scheduler's execution, then released at the end of the scheduler call. This mechanism guarantees that only one instance of the scheduler runs at a time, which allows the shared variables to be modified safely.
The first execution of the scheduler occurs just after the tasks are created. To guarantee scheduler control over the tasks, a synchronization must be performed to make sure that all the tasks have finished being created before the scheduler runs, and that the tasks do not start executing until the scheduler has ordered them to. To this end, first of all, a global "rendezvous" variable is initialized to 0 before the tasks are created. Then all the tasks are created and placed on the first processor of the system. At the very beginning of their creation, in other words when the onActivate() event executes, each task increments the "rendezvous" variable and immediately suspends its execution by waiting for a resume condition (using the POSIX function "pthread_cond_wait"). When the value of the "rendezvous" variable equals the number of tasks, the scheduler can execute. During this execution, the scheduler resumes the eligible tasks by signaling their resume condition (function "pthread_cond_broadcast").
To control the execution of the application tasks, the scheduler requires two specific functions for suspending and resuming a task, called "preempt(Task)" and "resume(Task)" respectively in Figure 1 (reference 3), these names being given solely by way of non-limiting example. The preempt(Task) function is based on the "Signal" API of the POSIX standard. To preempt a task, the SIGUSR1 signal is sent to the corresponding pthread (POSIX function "pthread_kill"). The associated signal handler ("sigusr1") puts the task on hold for a resume condition ("pthread_cond_wait") as soon as the signal is received. There is one resume condition for each task of the application. The resume(Task) function signals the appropriate resume condition to the task concerned so that it resumes its execution. It is therefore strictly equivalent to calling the Linux function pthread_cond_broadcast; in fact, creating a specific function is justified essentially for code readability. This mechanism exploits the fact that, in practice, the SIGUSR1 signal handler is executed by the Linux "pthread" that receives the signal. The signal handler can thus identify the "pthread" to suspend (necessary to apply the wait condition to the "pthread" concerned by the pthread_cond_wait function), for example by comparing its own thread identifier (tid) with those of all the "pthreads" of the application. It is important to note that the scheduler does not use the preemption and task-resumption mechanisms provided by the kernel, because these are not accessible from user space. To control the execution of application tasks on a multiprocessor platform, the scheduler requires an explicit function for pinning the execution of a task to a given processor.
Although there is an API for controlling which processors of a multiprocessor platform may run a Linux "pthread" ("CPU affinity"), there is no specific function for allocating a task to a processor. The implementation of this function (denoted "run_on" in Figure 1, reference 3, this name being given by way of non-limiting example) assigns the processor "CPU" to the execution of the task "Task" by relying on the Linux "CPU affinity" API. It is based on specifying the processor CPU (and only it) in the affinity mask of the task Task; this affinity mask is then assigned to the task (by the pthread_setaffinity_np function of the "CPU affinity" API). Then, to guarantee that the scheduler never runs more than one task per processor, all the other application tasks are prevented from using the processor CPU. It is also verified that a task is never assigned to more than a single processor.
A specific function "change_freq" (name given solely by way of non-limiting example) is used in the case of a scheduler exploiting DVFS techniques to control the dynamic voltage/frequency scaling of the processors (Figure 1, reference 3). This function uses a Linux API called "CPUFreq", which makes it possible to change the frequency of each processor through the "sysfs" virtual file system. For example, it suffices to write the desired frequency into a file - the file "/sys/devices/system/cpu/cpu0/cpufreq/scaling_setspeed" under Linux - to change the frequency of processor 0. The scheduler can use the change_freq function to change the frequency by writing to this file system. It also requires first selecting the "userspace" governor, which is the only DVFS mode under Linux that allows a user or an application to change the processor frequency at will. "Userspace" is one of the five Linux DVFS governors, and its name indicates that the frequency (and therefore the voltage) can be modified at will by the user. One possible low-power scheduling technique consists in exploiting the dynamic "slack" (time released by a task after its execution) to adjust the operating frequency and voltage of the processor(s) (DVFS) in order to save energy while still offering timing guarantees.
For example, the scheduler of the invention has been applied to implement "DSF" ("Deterministic Stretch-to-Fit") scheduling, in which the processor frequency (and therefore the voltage, which depends on it) is recalculated at each scheduling event, using the actual execution time (AET) for completed tasks and the worst-case execution time (WCET) for the others, in order to allocate the slack of the previous task to the next task, which makes it possible to lower the operating frequency of the processor concerned.
The appendix contains the source code, written in the C language, of a computer program implementing a scheduling method according to the invention, using the DSF-DVFS technique described above. This code is given solely by way of non-limiting example.
The code consists of a plurality of files contained in two directories: DSF_Scheduler and pm_drivers.
The DSF_Scheduler directory contains the core of the DSF scheduler code. It notably includes the header file N_scheduler.h, which contains the definitions of the necessary types (task structure, processor structure, etc.), the global variables (task list, processor list, scheduler parameters, etc.), and the prototypes of the exported functions.
The file DSF_Scheduler.c then describes the main functions of the scheduler. The most important is the select_ function: it describes the whole scheduling process performed at each scheduling instant. The task events onActivate, onBlock, onUnBlock and onTerminate, which trigger the scheduling instants, are also described in this file. The scheduling function relies on a slow_down function that computes and applies the frequency change. Finally, the specifics of the scheduler at application start-up required the development of a dedicated function, start_sched. It runs only once, at the very beginning of the application launch, and is largely based on the code of the main scheduler (select_). For all the other scheduling instants, the select_ function is called, via the call_scheduler function. The Makefile is used for compilation.
The application to be scheduled is described in the file N_application.c. This is also the file containing the program's "main" function, which performs the various necessary initializations, creates the POSIX tasks, launches the scheduler and synchronizes everything. It should be noted here that an application is viewed as a set of tasks, each with temporal characteristics such as the worst-case execution time (WCET), the period, the deadlines, etc. The tasks used perform a simple computation (multiplying the elements of an array of integers) up to a certain actual execution time (AET) - which is described by the usertask_actualexec function - then go into a waiting mode until their reactivation (usertask_sleepexec) when they reach their period.
Finally, to keep the code structure as clear as possible by retaining only the essential scheduler functions in the file N_Scheduler.c, the necessary secondary functions have been grouped in another file, N_utils.c. It contains, for example, functions for initialization, task preemption, allocation of tasks to processors, sorting, display, etc.
The pm_drivers directory contains the lower-level functions on which the scheduler relies. More precisely, these are the header files intel_core_i5_m520.h and pm_typedef.h and the file pm_cpufreq.c.
The header file intel_core_i5_m520.h contains the description of the target platform whose name it bears: information on the number of processors of the platform and on the possible frequencies of each core, represented in the form of states. The types used (e.g. the processor states) are defined in the file pm_typedef.h.
The file pm_cpufreq.c contains in particular the functions that perform the actual dynamic frequency change on the execution platform, based on the Linux CPUfreq API. Dynamic voltage-frequency scaling is managed under Linux by policies called "governors", known per se. More precisely, to be able to change the processor frequency(ies), a so-called "userspace" governor must first be selected; a first function, set_governor, is provided for this. Then, interaction with the Linux frequency-scaling driver takes place via exchange files located in /sys/devices/system/cpu/cpuX/cpufreq/, where X is the number of the processor concerned. The open_cpufreq and close_cpufreq functions open and close the exchange file scaling_setspeed. The frequency change itself is performed by the function CPU_state_manager_apply_state.
The various functions notably use the following POSIX APIs:
- Pthreads API:
• pthread_create, to create a Pthread,
• pthread_join, to synchronize Pthread termination,
• pthread_exit, at the end of a Pthread's execution,
• pthread_cancel, to force the termination of a Pthread.
- Signal API:
• pthread_kill.
- Condition Variables API:
• pthread_cond_wait,
• pthread_cond_broadcast.
- Mutex API:
• pthread_mutex_lock,
• pthread_mutex_unlock.
The various functions also use the following non-POSIX APIs:
- "CPU affinity" API:
• CPU_ZERO, to empty the list of processors allocated to a task,
• CPU_SET, to add a processor to the list of allocated processors,
• CPU_CLR, to remove a processor from the list of allocated processors,
• CPU_ISSET, to test whether a task is assigned to a given processor, and
• pthread_setaffinity_np, to set the affinity mask of a task.
- CPUfreq API, for dynamic voltage-frequency scaling.
The invention has been described in detail with reference to a particular embodiment: a multiprocessor system under Linux, with an EDF scheduling policy and dynamic voltage-frequency scaling (DVFS). These are not, however, essential limitations.
Thus, the scheduling method of the invention can be applied to a single-processor platform and/or not implement any power-reduction mechanism.
The implementation of the invention is particularly easy under Linux, because the APIs described above are available. However, a scheduler according to the invention can be realized under another operating system compatible with the POSIX standard, and more precisely with the IEEE POSIX 1.c standard (or, equivalently, POSIX 1003.1c), provided that it supports an equivalent of the CPU Affinity API (for multiprocessor applications) and of the CPUfreq API (for applications exploiting dynamic voltage-frequency scaling).
Other scheduling policies with deadline constraints can be implemented, such as "Rate Monotonic" (RM), "Deadline Monotonic" (DM) or "Least Laxity First" (LLF, scheduling by smallest laxity) policies.
Similarly, various power-reduction techniques can be implemented. Non-limiting examples include the DSF technique ("Deterministic Stretch-to-Fit"), which exploits dynamic voltage-frequency scaling (this is the technique used in the example just described), and AsDPM ("Assertive Dynamic Power Management"), which exploits the processor idle modes.
It should be noted that reproducing a task scheduler under deadline constraints in user space, as described above, is only feasible under certain conditions that depend on the operating system (OS). These conditions are, for example, the following. The method according to the invention must support the notion of preemptive scheduling in user space; the explicit preemption of a task from user space is a feature that is not realizable in every OS. The task model must include deadline-constraint and state attributes; the possibility of extending the task model with these attributes, accessible in user mode, depends on the OS.
It should also be noted that realizing low-power-oriented scheduling policies in user space is only feasible under certain conditions that depend on the OS. Putting a processor to sleep or dynamically changing its frequency from user space is not realizable in every operating system, notably under Windows.
It should also be noted that the method according to the invention requires no explicit intervention of supervisor mode and relies exclusively on user-mode mechanisms in the form of APIs.
The standard POSIX Linux task structure (pthread), described in the particular embodiment of Figure 1, can be generalized to a standard POSIX task structure (thread) under any OS that makes this possible. Implementing the invention for this generalized task structure requires, as with the standard POSIX Linux task structure (pthread) of Figure 1, extending it with parameters accessible in user space. Similarly, as in the application model described for Figure 1, the information concerning the temporal characteristics and constraints of the tasks is added in a specific structure that extends the standard POSIX task type with parameters made accessible in user space.
It should be noted that the user-space preemption of POSIX tasks implemented in the invention does not exist in so-called "general-public" OSs, that is, those without real-time constraints. According to the invention, and in general terms, to control the execution of the application tasks the scheduler needs an explicit user-space preemption facility for POSIX tasks, based on two specific functions for suspending and resuming a task.
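The appendix below gives each task a `mut_wait` mutex and a `cond_resume` condition variable for this purpose. The following is a minimal, self-contained sketch of such a suspend/resume pair; the names `task_sync`, `task_suspend`, and `task_resume` are illustrative and not taken from the patent listing.

```c
#include <pthread.h>
#include <stdbool.h>

typedef struct {
    pthread_mutex_t mut_wait;     /* protects the suspended flag */
    pthread_cond_t  cond_resume;  /* signaled by the scheduler to resume */
    bool            suspended;
} task_sync;

/* Called by the task itself at a preemption point: block until resumed. */
void task_suspend(task_sync *t) {
    pthread_mutex_lock(&t->mut_wait);
    t->suspended = true;
    while (t->suspended)  /* loop guards against spurious wakeups */
        pthread_cond_wait(&t->cond_resume, &t->mut_wait);
    pthread_mutex_unlock(&t->mut_wait);
}

/* Called by the scheduler: wake the suspended task. */
void task_resume(task_sync *t) {
    pthread_mutex_lock(&t->mut_wait);
    t->suspended = false;
    pthread_cond_broadcast(&t->cond_resume);
    pthread_mutex_unlock(&t->mut_wait);
}
```

Setting the flag and signaling under the same mutex guarantees that a resume issued while the task is between "decide to suspend" and "actually wait" is not lost, which is the essential property a user-space preemption mechanism must provide.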
APPENDIX: SOURCE CODE
1. DSF_Scheduler
N_scheduler.h

#ifndef N_SCHEDULER
#define N_SCHEDULER
#include <stdbool.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/time.h>
#include <pthread.h>
#include <semaphore.h>
// MAXIMUM PRIORITY FOR THE SCHED_FIFO POLICY,
// USED TO GIVE THE SCHEDULER THE HIGHEST PRIORITY
#define MAX_PRIO 99
// NUMBER OF TASKS OF THE APPLICATION
#define NUM_THREADS 4
// NUMBER OF PROCESSORS USED BY THE SCHEDULER
// (MAY BE LESS THAN OR EQUAL TO THE TOTAL NUMBER OF AVAILABLE PROCESSORS)
#define CPU_size 2
// ENUMERATED TYPE STATE_T: STATE OF A TASK
enum state_t { unexisting, running, waiting, ready };
// unexisting = nonexistent; running = executing; waiting; ready
// TASK TYPE DEFINITION (EXTENSION OF THE PTHREAD TYPE)
typedef struct task
{
    short id;                 // task identifier, used only for debugging
    float wcet;               // worst-case execution time
    float bcet;               // best-case execution time
    float aet;                // actual execution time (bcet <= aet <= wcet)
    float ret;                // remaining execution time (RET)
    float deadline;           // absolute deadline
    float next_deadline;      // relative deadline
    unsigned int cpu;         // processor executing this task
    float period;             // task period = deadline
    float next_period;        // relative period
    float offset;             // activation offset of the task
    double begintime;         // job start time
    double endtime;           // job end time
    int preemption;           // 1: preempted; 0: not preempted
    double preemptdelay;      // duration of the last preemption
    double last_preempt_time; // time at which the last preemption occurred
    double total_preempt_duration; // total preemption time if the job is preempted
                                   // several times - can be computed from
                                   // preemptdelay and last_preempt_time
    enum state_t state;       // task state: waiting, ready, running, unexisting
    pthread_mutex_t mut_wait;   // mutex used to suspend and resume a task
    pthread_cond_t cond_resume; // condition variable used to suspend and resume
                                // a task
    pthread_t *pthread;       // POSIX thread associated with the task
    pid_t thread_pid;         // Linux identifier (process ID, or pid) of the
                              // POSIX thread associated with the task
} task;
// ENUMERATED TYPE STATE_P: STATE OF A PROCESSOR
enum state_p { RUNNING, STAND_BY, IDLE, SLEEP, DEEP_SLEEP };
// CPU TYPE DEFINITION (current state, CPU number)
typedef struct cpu
{
    enum state_p state;
    int cpu_id;
} cpu;
// GLOBAL VARIABLES OF THE SCHEDULER
// LIST OF THE PROCESSORS USED FOR SCHEDULING
cpu CPUs[CPU_size];
// LIST OF THE APPLICATION TASKS TO SCHEDULE
task *T[NUM_THREADS];
// LIST OF READY TASKS
task *list_ready[NUM_THREADS];
int list_ready_size;
// LIST OF NON-READY TASKS
task *list_noready[NUM_THREADS];
int list_noready_size;
struct timeval start_time, current_time;
int rdv;    // used to synchronize the threads at creation time
bool OTE;
// PARAMETERS FOR COMPUTING THE SLOWDOWN FACTOR IN THE SLOW_DOWN FUNCTION
float wcet_Fn_1[NUM_THREADS];     // WCET before computing a new frequency
float aet_Fn_1[NUM_THREADS];      // AET before computing a new frequency
float abs_eet[NUM_THREADS];       // (absolute) execution time of the task since
                                  // the activation instant
float total_eet[NUM_THREADS];     // total elapsed execution time of the task
                                  // (considered at Fmax) since its activation
                                  // (required in case of preemption)
float total_abs_eet[NUM_THREADS]; // total (absolute) elapsed execution time of
                                  // the task since its activation
double task_duration[NUM_THREADS]; // (absolute) duration of the task since the
                                   // activation instant
float last_resume_time[NUM_THREADS]; // last time the task was resumed (after a
                                     // preemption); reset at each task period
float slack_local[CPU_size];      // "slack": time remaining because the
                                  // previous task finished before its WCET
float t_available_Fn_1[CPU_size];    // slack before computing a new frequency
float estimated_WCET_Fn_1[CPU_size]; // estimated WCET considering the previous
                                     // slack, before computing a new frequency
float estimated_AET_Fn_1[CPU_size];  // estimated AET considering the previous
                                     // slack, before computing a new frequency
float SR_local[CPU_size];         // local slowdown factor (relative to the
                                  // previous frequency)
float SR_total[CPU_size];         // total slowdown factor (relative to the
                                  // maximum frequency)
double Fn_1;                      // previous frequency
double Fn;                        // current frequency
float offset_delay[NUM_THREADS];  // additional time to take into account for
                                  // the execution of the task, due to a
                                  // possible offset (given a task that normally
                                  // starts at T0, the offset starts the task at
                                  // T0 + offset)
int nb_exec[CPU_size];            // number of executions performed on each
                                  // processor
// PARAMETERS FOR TIME MANAGEMENT
double sched_time;
double begin_sched_time;
double end_sched_time;
double sched_duration[150];
short nb_sched_call;
double simulation_time;
// MUTEX TO PREVENT THE PARALLEL EXECUTION OF TWO INSTANCES OF THE SCHEDULER
pthread_mutex_t sched_mut_wait;
pthread_cond_t sched_cond_resume;
// VARIABLES COUNTING THE NUMBER OF PROCESSOR STATE TRANSITIONS
int STAND_BY_to_RUNNING;
int DEEP_SLEEP_to_RUNNING;
int SLEEP_to_RUNNING;
int IDLE_to_RUNNING;
int Preemption_counter;
// PROTOTYPES OF THE EXPORTED FUNCTIONS
void start_sched();
void slow_down(task *T, cpu *P);
void onActivate(task *T);
void onUnBlock(task *T);
void onBlock(task *T);
void onTerminate(task *T);
#endif
DSF_Scheduler.c

#define _GNU_SOURCE
#include <stdlib.h>
#include <stdio.h>
#include <signal.h>
#include <string.h>
#include <math.h>
#include <sched.h>
#include <semaphore.h>
#include <sys/syscall.h>
#include "N_utils.h"
#include "N_scheduler.h"
#include "../pm_drivers/intel_core_i5_m520.h"

//#define DEBUG
// MAIN FUNCTION OF THE SCHEDULER, CALLED FROM "CALL_SCHEDULER"
// THE CORE OF THE SCHEDULER PROCESSING IS IN THIS FUNCTION
void *select_() {
    int i, j, rc;
    double r_nxt;
    sched_time = get_time();
    OTE = false;
    // BUILD THE LIST OF READY TASKS (LIST_READY)
    // BUILD THE LIST OF NON-READY TASKS (LIST_NOREADY)
    list_ready_size = 0;
    list_noready_size = 0;
    for (i = 0; i < NUM_THREADS; i++) {
        if ((T[i]->state == ready) || (T[i]->state == running)) {
            list_ready[list_ready_size] = T[i];
            list_ready_size++;
        }
        else {
            list_noready[list_noready_size] = T[i];
            list_noready_size++;
        }
    }
    // SORT THE TASK LISTS BY INCREASING DEADLINE (EARLIEST DEADLINE FIRST)
    sort(list_ready, list_ready_size);
    sort(list_noready, list_noready_size);
    printf("\n*************Sched point:%.4f**************\n", sched_time);
    // DEBUG MESSAGES DISPLAYING THE LIST OF READY TASKS
    printf("Sorted Ready List:\n");
    Display_list(list_ready, list_ready_size);
    // DEBUG MESSAGES DISPLAYING THE LIST OF NON-READY TASKS
#ifdef DEBUG
    printf("Sorted No_Ready List:\n");
    Display_list(list_noready, list_noready_size);
#endif
    // PREEMPT THE NON-PRIORITY TASKS OF THE READY LIST THAT ARE
    // CURRENTLY EXECUTING
    for (i = CPU_size; i < list_ready_size; i++) {
        task *job = list_ready[i];
        if (job->state == running) {
            abs_eet[job->id-1] = sched_time - last_resume_time[job->id-1];
            total_abs_eet[job->id-1] = total_abs_eet[job->id-1] + abs_eet[job->id-1];
            total_eet[job->id-1] = total_eet[job->id-1]
                                   + (abs_eet[job->id-1] / SR_total[job->cpu]);
            if (!(sched_time < job->begintime + job->offset))
                T_preempt(job);
        }
    }
    // UPDATE THE "RET" (REMAINING EXECUTION TIME) OF EACH TASK
    for (i = 0; i < NUM_THREADS; i++) {
        task *job = T[i];
        if (job->preemption == 1) {
            job->preemptdelay = sched_time - job->last_preempt_time;
            job->total_preempt_duration = job->total_preempt_duration
                                          + job->preemptdelay;
        }
        if (job->state == running)
            job->ret = job->aet - total_eet[job->id-1];
        else if (job->state == ready) {
            if (job->preemption == 1)
                job->ret = (job->aet - total_eet[job->id-1]);
            else
                job->ret = job->aet;
        }
        else if (job->state == waiting)
            job->ret = 0.0;
    }
    // The following mutex avoids running several instances of the scheduler
    // simultaneously, which would lead to concurrent, conflicting accesses
    // to some shared variables
    pthread_mutex_lock(&sched_mut_wait);
    // ALLOCATE THE READY TASKS TO THE FREE PROCESSORS
    for (i = 0; (i < CPU_size) && (i < list_ready_size); i++) {
        task *job = list_ready[i];
        if (job->state != running) {
            cpu *P = NULL;
            for (j = 0; j < CPU_size; j++) {
                P = &CPUs[j];
                if (P_isRunning(P->cpu_id) == 0) {
                    if (P->state == DEEP_SLEEP) DEEP_SLEEP_to_RUNNING++;
                    if (P->state == IDLE) IDLE_to_RUNNING++;
                    if (P->state == STAND_BY) STAND_BY_to_RUNNING++;
                    if (P->state == SLEEP) SLEEP_to_RUNNING++;
                    // COMPUTE THE SLOWDOWN (MINIMUM FREQUENCY) OF THE
                    // ALLOCATED PROCESSOR
                    if (nb_exec[P->cpu_id] != 0) {
                        slow_down(job, P);
                        OTE = false;
                    }
                    else
                    {
                        SR_local[P->cpu_id] = 1.0;
                        SR_total[P->cpu_id] = 1.0;
                    }
                    pthread_mutex_lock(&job->mut_wait); // without this mutex,
                    // a job could sometimes be restarted before having been
                    // assigned to a processor
                    T_runningOn(job, P->cpu_id);
                    pthread_mutex_unlock(&job->mut_wait);
#ifdef DEBUG
                    printf("T%d is assigned to CPU%d\n", job->id, P->cpu_id);
#endif
                    if (job->preemption == 1) {
                        job->preemption = 0;
                        last_resume_time[job->id-1] = sched_time;
                    }
                    else if (job->preemption == 0) {
                        job->begintime = sched_time;
                        job->last_preempt_time = job->begintime;
                        last_resume_time[job->id-1] = job->begintime;
                    }
                    // START THE READY TASK ON THE ALLOCATED PROCESSOR
                    pthread_mutex_lock(&job->mut_wait);
                    pthread_cond_broadcast(&job->cond_resume);
                    pthread_mutex_unlock(&job->mut_wait);
                    break;
                }
            }
        }
    }
    pthread_mutex_unlock(&sched_mut_wait);
    int nb = 0;
    // DEBUG MESSAGES DISPLAYING THE STATE OF EACH PROCESSOR (ACTIVITY
    // AND/OR ALLOCATED TASK)
#ifdef DEBUG
    printf("Processors State\n");
#endif
    for (i = 0; i < CPU_size; i++) {
        cpu *P = &CPUs[i];
#ifdef DEBUG
        printf("CPU%d -> ", P->cpu_id);
        if (P->state == RUNNING) {
            printf("RUNNING ");
            for (j = 0; j < NUM_THREADS; j++)
                if ((T[j]->state == running) && (T[j]->cpu == P->cpu_id))
                    printf("T%d ", T[j]->id);
            printf("\n");
        }
        else if (P->state == STAND_BY) printf("STAND_BY\n");
        else if (P->state == IDLE) printf("IDLE\n");
        else if (P->state == SLEEP) printf("SLEEP\n");
        else if (P->state == DEEP_SLEEP) printf("DEEP_SLEEP\n");
        else printf("UNDEFINED\n");
#endif
        if (P_isRunning(P->cpu_id) == 1) {
            nb++;
        }
        else {
            slack_local[P->cpu_id] = 0;
#ifdef DEBUG
            printf("-> slack_local[CPU%d] = 0\n", P->cpu_id);
#endif
        }
    }
#ifdef DEBUG
    printf("Total Preemptions = %d\n", Preemption_counter);
#endif
    return NULL; // select_ has a void* return type
}
// SCHEDULER CALL FUNCTION
// CREATES A THREAD RUNNING THE PREVIOUS SELECT_ FUNCTION
void call_scheduler() {
    int i, rc;
    pid_t pid;
    pthread_t sched_thread;
    begin_sched_time = get_time();
    /* Creation of the scheduler pthread */
    if (pthread_create(&sched_thread, NULL, select_, "4") != 0) {
        printf("pthread_create: error creating sched thread\n");
        exit(1);
    }
    end_sched_time = get_time();
    // THE EXECUTION TIME OF THE SCHEDULER IS MEASURED FOR RESPONSE TIME
    // ANALYSIS
    sched_duration[nb_sched_call++] = end_sched_time - begin_sched_time;
#ifdef DEBUG
    printf("SCHEDULER DURATION #%d %f\n", nb_sched_call,
           sched_duration[nb_sched_call - 1]);
#endif
}
// THE FOLLOWING FUNCTIONS CORRESPOND TO THE TASK EVENTS:
// - ONACTIVATE
// - ONBLOCK
// - ONUNBLOCK
// - ONTERMINATE

// ONACTIVATE
// CET EVENEMENT INTERVIENT A LA CREATION D'UNE TACHE  // THIS EVENT INTERVENES WITH THE CREATION OF A TASK
void onActivate(task *task) { void onActivate (task * task) {
double randomvalue;  double randomvalue;
schedjime = 0.;  schedjime = 0 .;
task->next__period = schedjime + task->period;  task-> next__period = schedjime + task-> period;
task->next_deadline = task->deadline;  task-> next_deadline = task-> deadline;
// LE TEMPS D'EXECUTION (AET) EST TiRE AU HASARD ENTRE BCET ET WCET  // THE TIME OF EXECUTION (AET) IS TIRE AT RANDOM BETWEEN BCET AND WCET
randomvalue = {doub!e)rand()/RAND_MAX;  randomvalue = {doub! e) rand () / RAND_MAX;
task->aet = ( randomvalue * (task->wcei-task->bcet)+task->bcet );  task-> aet = (randomvalue * (task-> wcei-task-> bcet) + task-> bcet);
// INITIALISATION DE TOUTES LES VARIABLES NECESSAIRE A L'ORDONNANCEUR  // INITIALIZATION OF ALL VARIABLES NECESSARY TO THE ORDERER
// (DECLAREES/DETAILLEES DANS N_SCHEDULER,H) // (DECLARED / DETAILED IN N_SCHEDULER, H)
aet_Fn_1 [task->id-1 ] ~ task->aet;  aet_Fn_1 [task-> id-1] ~ task-> aet;
abs_eet[task->id-1 ] = 0.;  abs_eet [task-> id-1] = 0 .;
task->ret ~ task->aet;  task-> ret ~ task-> aet;
task->preemptde!ay = 0.;  task-> preemptde! ay = 0 .;
task->state = ready;  task-> state = ready;
printf("T%d\t %.4ftt\t %.4f\I %.4f\t %.4f\n",task->id,task->wcei,iask->deadline,task- printf ("T% d \ t% .4ftt \ t% .4f \ I% .4f \ t% .4f \ n", task-> id, task-> wcei, iask-> deadline, task-
>period,task->offset); > Period, task-> offset);
// SYNCHRONISATION PAR RDV: UNE FOIS CREEE, LA TACHE INCREMENTE RDV PUIS // ATTEND LE SiGNAL D'ACTIVATION task->cond_resume  // SYNCHRONIZATION BY APPOINTMENT: ONCE CREATED, THE INCREASED TASK RDV THEN // WAIT FOR THE ACTIVATION SIGNAL task-> cond_resume
// CE SIGNAL EST ENVOYE LORSQUE TOUTES LES TACHES SONT CREEES (LE. RDV = // NB_THREADS)  // THIS SIGNAL IS SENT WHEN ALL TASKS ARE CREATED (RDV = // NB_THREADS)
rdv++;  appointment ++;
pthread_∞nd_wait(&task->cond_resume, &task->mut_wait); task->begintime = sched_time;  pthread_∞nd_wait (& task-> cond_resume, & task-> mut_wait); task-> begintime = sched_time;
task->last_preempt_time = task->begintime;  task-> last_preempt_time = task-> begintime;
last_resume_time[task->id-1] = task->begintime; } last_resume_time [task-> id-1] = task->begintime; }
// ONBLOCK
// THIS EVENT OCCURS AT THE END OF THE EXECUTION OF A TASK
void onBlock(task *task) {
    sched_time = get_time();
    task->endtime = sched_time;
    task->state = waiting;
    slack_local[task->cpu] = task->wcet - task->aet;
#ifdef DEBUG
    printf("T%d is terminated at time = %.4f and generated slack_local[CPU%d] = %.4f\n",
           task->id, task->endtime, task->cpu, slack_local[task->cpu]);
#endif
    CPUs[task->cpu].state = IDLE;
    task->cpu = -1;
#ifdef DEBUG
    printf("T%d FINISHES EXECUTION AT %.2f\n", task->id, sched_time);
#endif
    // SCHEDULING POINT: CALL THE SCHEDULER
    call_scheduler();
}
// ONUNBLOCK
// THIS EVENT OCCURS AT THE END OF THE PERIOD OF A TASK (REACTIVATION)
void onUnBlock(task *task) {
    double randomvalue;
    sched_time = get_time();
    // THE EXECUTION TIME (AET) IS DRAWN AT RANDOM BETWEEN BCET AND WCET
    randomvalue = (double)rand() / RAND_MAX;
    task->aet = (randomvalue * (task->wcet - task->bcet) + task->bcet);
    // REINITIALIZE ALL THE VARIABLES NEEDED BY THE SCHEDULER
    // (DECLARED/DETAILED IN N_SCHEDULER.H)
    aet_Fn_1[task->id-1] = task->aet;
    wcet_Fn_1[task->id-1] = task->wcet;
    abs_eet[task->id-1] = 0.;
    total_eet[task->id-1] = 0.;
    total_abs_eet[task->id-1] = 0.;
    task->preemptdelay = 0.;
    task->state = ready;
    task->next_deadline = task->next_period + task->deadline;
    task->next_period = task->next_period + task->period;
    task->total_preempt_duration = 0.;
#ifdef DEBUG
    printf("T%d REACHES PERIOD AT %.2f \n", task->id, sched_time);
#endif
    // SCHEDULING POINT: CALL THE SCHEDULER
    call_scheduler();
    // THE TASK IS STOPPED; IT WILL BE RESTARTED BY THE SCHEDULER AT THE
    // APPROPRIATE TIME
    pthread_kill(*(task->pthread), SIGUSR1);
}
// ONTERMINATE
// THIS EVENT OCCURS WHEN A TASK TERMINATES (END OF ITS LAST EXECUTION)
void onTerminate(task *task) {
    int i;
    sched_time = get_time();
    task->state = unexisting;
#ifdef DEBUG
    printf("T%d IS TERMINATED AT %.4f and generated a slack_local[CPU%d]= %.4f\n",
        task->id, sched_time, task->cpu, slack_local[task->cpu]);
#endif
    // AT THIS POINT THE SIMULATION IS COMPLETE; FORCE ALL TASKS TO STOP
    for (i=0; i<NUM_THREADS; i++)
        pthread_cancel(*(T[i]->pthread));
}
// THE FIRST CALL TO THE SCHEDULER IS SPECIAL, BECAUSE IT CREATES AND
// INITIALIZES THE TASKS
// THIS SPECIFIC PROCESSING IS HANDLED BY THE START_SCHED FUNCTION,
// CORRESPONDING TO THE FIRST SCHEDULER INVOCATION
void start_sched() {
    int i, j, err, rc;
    pid_t pid;
    printf("\n*************Sched point:%.4f***************\n", sched_time);
    gettimeofday(&start_time, NULL);
    // ALL TASKS ARE INITIALLY ASSIGNED TO CPU0
    cpu_set_t mask;
    for (j=0; j<NUM_THREADS; j++) {
        CPU_ZERO(&mask);
        CPU_SET(0, &mask);
        /* PREVENT TASK FROM USING OTHER CPUs */
        // PREVENT THE CURRENT TASK FROM RUNNING ON ANY PROCESSOR
        // OTHER THAN CPU0
        for (i=1; i<CPU_size; i++)
            CPU_CLR(i, &mask);
        err = pthread_setaffinity_np(*(T[j]->pthread), sizeof(cpu_set_t), &mask);
    }
    // BUILD THE LIST OF READY TASKS (LIST_READY)
    list_ready_size = 0;
    for (i=0; i<NUM_THREADS; i++) {
        list_ready[i] = T[i];
        list_ready_size++;
    }
    // SORT THE READY LIST BY EARLIEST DEADLINE FIRST
    sort(list_ready, list_ready_size);
#ifdef DEBUG
    printf("Sorted Ready List\t");
    Display_list(list_ready, list_ready_size);
#endif
    // ALLOCATE READY TASKS TO FREE PROCESSORS
    for (i=0; (i<CPU_size) && (i<list_ready_size); i++) {
        task *job = list_ready[i];
        T_runningOn(job, i);
        job->begintime = sched_time;
        job->last_preempt_time = job->begintime;
        last_resume_time[job->id-1] = job->begintime;
    }
    // START READY TASKS ON THE ALLOCATED PROCESSORS
    for (i=0; (i<list_ready_size) && (i<CPU_size); i++) {
        pthread_mutex_lock(&list_ready[i]->mut_wait);
        pthread_cond_broadcast(&list_ready[i]->cond_resume);
        pthread_mutex_unlock(&list_ready[i]->mut_wait);
    }
}
// THIS FUNCTION IS CALLED BY THE SCHEDULER TO COMPUTE THE
// FREQUENCY CHANGE FROM THE SLOWDOWN FACTOR
// FOR A TASK ALLOCATED ON A PROCESSOR, COMPUTE THE FREQUENCY AT WHICH
// THE TASK CAN RUN, AND SWITCH THE PROCESSOR FREQUENCY TO THAT VALUE
void slow_down(task *job, cpu *P) {
    int i;
    int proc_id = P->cpu_id;
    char *setspeed_cpu_filename;
    FILE *setspeed_cpu;
    // GET THE CURRENT FREQUENCY OF THE CPU CONCERNED
    Fn = (double)CPU_state_manager_query_current_freq(proc_id);
#ifdef DEBUG
    printf("Frequency Adjustment on CPU%d before executing T%d:\n", proc_id, job->id);
#endif
    // COMPUTE THE SLOWDOWN FACTOR (SR_total and SR_local)
    estimated_WCET_Fn_1[proc_id] = wcet_Fn_1[job->id-1] * SR_total[proc_id];
    estimated_AET_Fn_1[proc_id] = aet_Fn_1[job->id-1] * SR_total[proc_id];
#ifdef DEBUG
    printf("slack_local[CPU%d]\t= %.4f\n", proc_id, slack_local[proc_id]);
    printf("SR_local[CPU%d]\t= %.4f\n", proc_id, SR_local[proc_id]);
    printf("SR_total[CPU%d]\t= %.4f\n", proc_id, SR_total[proc_id]);
    printf("estimated_WCET_Fn_1[CPU%d]= WCET[T%d]*SR_total[CPU%d]\t= %.4f*%.4f\t= %.4f\n",
        proc_id, job->id, proc_id, wcet_Fn_1[job->id-1], SR_total[proc_id], estimated_WCET_Fn_1[proc_id]);
    printf("estimated_AET_Fn_1[CPU%d]= AET[T%d]*SR_total[CPU%d]\t= %.4f*%.4f\t= %.4f\n",
        proc_id, job->id, proc_id, aet_Fn_1[job->id-1], SR_total[proc_id], estimated_AET_Fn_1[proc_id]);
#endif
    if (OTE == false) {
        t_available_Fn_1[proc_id] = wcet_Fn_1[job->id-1] + slack_local[proc_id];
#ifdef DEBUG
        printf("t_available_Fn_1[CPU%d]\t= WCET[T%d]+slack_local[CPU%d]\t= %.4f+%.4f\t= %.4f\n",
            proc_id, job->id, proc_id, wcet_Fn_1[job->id-1], slack_local[proc_id], t_available_Fn_1[proc_id]);
#endif
        if (job->preemption == false) {
            SR_local[proc_id] = (float)(t_available_Fn_1[proc_id] / estimated_WCET_Fn_1[proc_id]);
#ifdef DEBUG
            printf("NEW SR_local[CPU%d]\t= t_available_Fn_1[CPU%d]/estimated_WCET_Fn_1[CPU%d]\t= %.4f/%.4f\t= %.4f\n",
                proc_id, proc_id, proc_id, t_available_Fn_1[proc_id], estimated_WCET_Fn_1[proc_id], SR_local[proc_id]);
#endif
        }
        else if (job->preemption == true) {
            /* update slack and SR; CPU set to maximum frequency */
            slack_local[proc_id] = 0.0;
            SR_local[proc_id] = 1.0;
            SR_total[proc_id] = 1.0;
            Fn = (double)_CPU_STATE[proc_id][0].freq;
#ifdef DEBUG
            printf("PREEMPTION BY T%d OCCURED AT TIME=%.4f, slack_local=0.0, SR_local=1.0, SR_total=1.0, Fn=maxfreq\n",
                job->id, sched_time);
#endif
        }
    }
    if (SR_local[proc_id] >= 1.00) {
#ifdef DEBUG
        printf("NEW WCET[T%d]\t= WCET_Fn_1[T%d]*SR_local[CPU%d]\t= %.4f*%.4f\t= ",
            job->id, job->id, proc_id, wcet_Fn_1[job->id-1], SR_local[proc_id]);
#endif
        wcet_Fn_1[job->id-1] = (float)(wcet_Fn_1[job->id-1] * SR_local[proc_id]);
#ifdef DEBUG
        printf("%.4f\n", wcet_Fn_1[job->id-1]);
#endif
#ifdef DEBUG
        printf("NEW AET[T%d]\t= AET_Fn_1[T%d]*SR_local[CPU%d]\t= %.4f*%.4f\t= ",
            job->id, job->id, proc_id, aet_Fn_1[job->id-1], SR_local[proc_id]);
#endif
        aet_Fn_1[job->id-1] = (double)(aet_Fn_1[job->id-1] * SR_local[proc_id]);
#ifdef DEBUG
        printf("%.4f\n", aet_Fn_1[job->id-1]);
#endif
    }
    else if (SR_local[proc_id] < 1.00) {
#ifdef DEBUG
        printf("NEW WCET[T%d]\t= est_WCET_Fn_1[CPU%d]*SR_local[CPU%d]\t= %.4f*%.4f\t= ",
            job->id, proc_id, proc_id, estimated_WCET_Fn_1[proc_id], SR_local[proc_id]);
#endif
        wcet_Fn_1[job->id-1] = (estimated_WCET_Fn_1[proc_id] * SR_local[proc_id]);
#ifdef DEBUG
        printf("%.4f\n", wcet_Fn_1[job->id-1]);
#endif
#ifdef DEBUG
        printf("NEW AET[T%d]\t= est_AET_Fn_1[CPU%d]*SR_local[CPU%d]\t= %.4f*%.4f\t= ",
            job->id, proc_id, proc_id, estimated_AET_Fn_1[proc_id], SR_local[proc_id]);
#endif
        aet_Fn_1[job->id-1] = (estimated_AET_Fn_1[proc_id] * SR_local[proc_id]);
#ifdef DEBUG
        printf("%.4f\n", aet_Fn_1[job->id-1]);
#endif
    }
    /* COMPUTE THE NEW "THEORETICAL" FREQUENCY */
    double Fn_1 = (double)(Fn / SR_local[proc_id]);
    // FIND THE NEW EFFECTIVE FREQUENCY (PROCESSOR FREQUENCIES ARE
    // PREDEFINED, HENCE DIFFERENT FROM THE THEORETICAL FREQUENCY)
    Elementary_state State;
    State = _CPU_STATE[proc_id][0];
    for (i=0; i<NB_STATES_CPU-1; i++)
        if ( ((int)Fn_1 <= _CPU_STATE[proc_id][i].freq) && ((int)Fn_1 <= _CPU_STATE[proc_id][i+1].freq) )
            State = _CPU_STATE[proc_id][i+1];
#ifdef DEBUG
    printf("FREQUENCY TO SWITCH (%.0f): %d\t", Fn_1, (int)State.freq);
#endif
    // CALL THE FREQUENCY-CHANGE FUNCTION
    CPU_state_manager_apply_state( proc_id, &State );
    // THE SLOWDOWN FACTOR MUST BE RECOMPUTED TO ACCOUNT FOR THE
    // EFFECTIVE CHANGE OF PROCESSOR FREQUENCY
    SR_local[proc_id] = Fn / State.freq;
    SR_total[proc_id] = SR_total[proc_id] * SR_local[proc_id];
#ifdef DEBUG
    printf("NEW SR_local[CPU%d]\t= Fn / State.freq = %.0f / %d = %.4f\n",
        proc_id, Fn, (int)State.freq, SR_local[proc_id]);
    printf("NEW SR_total[CPU%d]\t= SR_total[CPU%d] * SR_local[CPU%d] = %.4f\n",
        proc_id, proc_id, proc_id, SR_total[proc_id]);
#endif
    // RESET THE SLACK TO ZERO, AS THE PREVIOUS SLACK HAS JUST BEEN
    // CONSUMED TO LOWER THE FREQUENCY
    slack_local[proc_id] = 0.0;
}

N_utils.c
#define _GNU_SOURCE
#include <stdio.h>
#include <unistd.h>
#include <sys/time.h>
#include <signal.h>
#include <sched.h>
#include <time.h>
#include <errno.h>
#include <pthread.h>
#include "N_utils.h"
#define TARGET_CPU_DEFINED
#include "../pm_drivers/intel_core_i5_m520.h"
#define DEBUG
// PLATFORM PROCESSOR INITIALIZATION FUNCTION
void init_CPUs()
{
    printf("\nInitializing CPUs...\n");
    int i;
    // LIST THE AVAILABLE PROCESSORS AND UPDATE THE CPU STATES
    for (i=0; i<CPU_size; i++) {
        CPUs[i].cpu_id = i;
        CPUs[i].state = IDLE;
    }
    printf("System has %d processor(s)\n", NB_CPU_MAX);
    printf("System uses %d processor(s)\n", CPU_size);
#ifdef DEBUG
    for (i=0; i<CPU_size; i++) {
        printf("CPU%d is ", CPUs[i].cpu_id);
        if (CPUs[i].state == RUNNING) printf("RUNNING\n");
        else if (CPUs[i].state == STAND_BY) printf("STAND BY\n");
        else if (CPUs[i].state == IDLE) printf("IDLE\n");
        else if (CPUs[i].state == SLEEP) printf("SLEEP\n");
        else if (CPUs[i].state == DEEP_SLEEP) printf("DEEP_SLEEP\n");
        else printf("UNDEFINED\n");
    }
#endif
    // OPEN THE FILES FOR FREQUENCY CHANGES (CPUfreq API)
    printf("Setting cpufreq\n");
    for (i=0; i<CPU_size; i++) {
        set_governor(CPUs[i].cpu_id, "userspace");
        open_cpufreq(i);
    }
    // START AT MAXIMUM FREQUENCY (SPECIFIC TO THE DSF ALGORITHM)
    printf("Setting CPUs to maximum frequency\n");
    for (i=0; i<CPU_size; i++)
        CPU_state_manager_apply_state( i, &_CPU_STATE[i][0] );
    printf("End CPUs initialization\n");
}
// PLATFORM PROCESSOR RELEASE FUNCTION
void exit_CPUs()
{
    int i;
    // CLOSE THE FILES FOR FREQUENCY CHANGES (CPUfreq API)
    for (i=0; i<CPU_size; i++)
        close_cpufreq(i);
}
// APPLICATION TASK INITIALIZATION FUNCTION
void init_TASKs(task task[NUM_THREADS], pthread_t thread[NUM_THREADS])
{
    int i, rc;
    struct sched_param my_sched_params;
    printf("\nInitializing tasks...\n");
    printf("Application has %d task(s)\n", NUM_THREADS);
    // SAVE THE WCET FOR THE CURRENT FREQUENCY
    for (i=0; i<NUM_THREADS; i++) wcet_Fn_1[i] = task[i].wcet;
    // INITIALIZE THE MUTEXES, USED AS THE PREEMPTION MECHANISM
    pthread_mutex_init(&sched_mut_wait, NULL);
    for (i=0; i<NUM_THREADS; i++) {
        pthread_mutex_init(&task[i].mut_wait, NULL);
        pthread_cond_init(&task[i].cond_resume, NULL);
    }
    // INITIALIZE TASK PARAMETERS
    for (i=0; i<NUM_THREADS; i++)
    {
        T[i] = &task[i];
        T[i]->pthread = &thread[i];
        nb_exec[i] = 0;
        total_eet[i] = 0.0;
        total_abs_eet[i] = 0.0;
    }
    // INITIALIZE THE SCHEDULER PARAMETERS
    for (i=0; i<CPU_size; i++)
    {
        slack_local[i] = 0.0;
        SR_local[i] = 1.0;
        SR_total[i] = 1.0;
        t_available_Fn_1[i] = 0.0;
        estimated_WCET_Fn_1[i] = 0.0;
        estimated_AET_Fn_1[i] = 0.0;
    }
    STAND_BY_to_RUNNING = 0;
    SLEEP_to_RUNNING = 0;
    DEEP_SLEEP_to_RUNNING = 0;
    IDLE_to_RUNNING = 0;
    Preemption_counter = 0;
    nb_sched_call = 0;
    rdv = 0;
    printf("End tasks initialization\n");
}
// FUNCTION TO PREEMPT A TASK
void T_preempt(task *task)
{
    int i, rc;
    struct sched_param my_sched_params;
    printf("T%d[CPU%d] preempted at %.4f with ABS_EET=%.4f \n",
        task->id, task->cpu, sched_time, abs_eet[task->id-1]);
    Preemption_counter++;
    task->last_preempt_time = sched_time;
    task->preemption = 1;
    task->preemptdelay = 0.;
    task->state = ready;
    CPUs[task->cpu].state = IDLE;
    task->cpu = -1;
    // THE PREEMPTION MECHANISM CONSISTS OF SENDING A SIGNAL TO THE TASK
    pthread_kill(*(task->pthread), SIGUSR1);
    // WHEN THE TASK RECEIVES IT, THE SIGNAL HANDLER sigusr1 EXECUTES
    // PTHREAD_COND_WAIT, WHICH SUSPENDS THE TASK
    // SEE sigusr1 IN APPLICATION.C
}
// ALLOCATES A TASK TO A PROCESSOR
void T_runningOn(task *task, int cpu_id)
{
    int i, j, err;
    cpu_set_t mask;
    CPU_ZERO(&mask);
    /* SET TASK TO CPU_ID */
    // Allocate task 'task' to processor number 'cpu_id'
    CPU_SET(cpu_id, &mask);
    /* PREVENT TASK FROM USING OTHER CPUs */
    // Ensure that task 'task' cannot run on any other processor
    for (i=0; i<CPU_size; i++)
        if (i != cpu_id) CPU_CLR(i, &mask);
    err = pthread_setaffinity_np(*(task->pthread), sizeof(cpu_set_t), &mask);
    if (err != 0) { printf("**** (1) PTHREAD_SETAFFINITY RETURNED: %d \n", err); exit(0); }
    task->cpu = cpu_id;
    task->state = running;
    CPUs[cpu_id].state = RUNNING;
    nb_exec[cpu_id]++;
    /* PREVENT ALL OTHER TASKS FROM USING CPU_ID */
    // Ensure that any task other than 'task' cannot run
    // on processor number cpu_id
    for (i=0; i<NUM_THREADS; i++)
        if (T[i]->id != task->id) {
            err = pthread_getaffinity_np(*(T[i]->pthread), sizeof(cpu_set_t), &mask);
            if (err != 0) { printf("**** (2) PTHREAD_SETAFFINITY RETURNED: %d \n", err); exit(0); }
            if ( (T[i]->id != task->id) )
                CPU_CLR(cpu_id, &mask);
            /* CHECK THAT EACH THREAD CANNOT USE
               MORE THAN ONE CPU (default: CPU0) */
            // Check that each task cannot use more than one CPU (by default, CPU0)
            short cpu_allocated = 0;
            for (j=0; j<CPU_size; j++)
                if ( CPU_ISSET(j, &mask) ) cpu_allocated++;
            if (cpu_allocated == 0) CPU_SET(0, &mask);
            else if (cpu_allocated > 1) printf("WARNING: T%d IS ALLOCATED MORE THAN ONE CPU\n", T[i]->id);
            err = pthread_setaffinity_np(*(T[i]->pthread), sizeof(cpu_set_t), &mask);
            if (err != 0) { printf("**** (3) PTHREAD_SETAFFINITY RETURNED: %d \n", err); exit(0); }
        }
}
// FUNCTION TO TEST WHETHER A PROCESSOR IS IN USE (1) OR NOT (0)
int P_isRunning(int cpu_id)
{
    int i; int cpu_isRunning = 0;
    if (CPUs[cpu_id].state == RUNNING) cpu_isRunning = 1;
    return cpu_isRunning;
}
// SORT A LIST OF TASKS BY INCREASING DEADLINE (EARLIEST FIRST)
void sort(task *list[], int list_size)
{
    task *temp;
    int i, j, min;
    for (i = 0; i < list_size-1; i++)
    {
        min = i;
        for (j=i+1; j < list_size; j++)
        {
            if (list[j]->next_deadline < list[min]->next_deadline)
            {
                min = j;
            }
        }
        if (i != min)
        {
            temp = list[i];
            list[i] = list[min];
            list[min] = temp;
        }
    }
}
// FUNCTION THAT RETURNS THE TIME ELAPSED SINCE THE START OF THE SIMULATION
double get_time()
{
    double time;
    struct timeval current_time;
    if (gettimeofday(&current_time, NULL) == -1)
    {
        puts("ERROR : gettimeofday");
        return -1;
    }
    time = current_time.tv_sec - start_time.tv_sec
         + 0.000001*(current_time.tv_usec - start_time.tv_usec);
    return time;
}
// APPLICATION TASKS:
// APPLICATION TASKS ARE DIVIDED INTO TWO PHASES:
// - AN EFFECTIVE EXECUTION PHASE (CORRESPONDING TO usertask_actualexec)
// - A WAITING PHASE (CORRESPONDING TO usertask_sleepexec). THE TASK MUST WAIT
//   UNTIL THE END OF ITS PERIOD BEFORE IT CAN BE READY FOR RE-EXECUTION
int usertask_actualexec(task *task)
{
    double duration;
    double data[262144];
    int i = 0;
    nbTicks++;
    task->begintime = sched_time;
#ifdef DEBUG
    printf("T%d starts execution on CPU%d with AET = %.4f (*%.4f=%.4f) at %.4f\n",
        task->id, task->cpu, task->ret, SR_total[task->cpu],
        task->ret*SR_total[task->cpu], get_time()/*task->begintime*/);
#endif
    // THE TASKS PERFORM A SIMPLE ARRAY ACCESS / INTEGER MULTIPLICATION
    // UNTIL THE AET IS REACHED
    // (TAKING POSSIBLE PREEMPTIONS AND FREQUENCY CHANGES INTO ACCOUNT)
    do
    {
        duration = get_time() - task->begintime;
        data[i%262144] = duration*i++;
    } while (duration < total_abs_eet[task->id-1] + task->total_preempt_duration
             + (task->ret * SR_total[task->cpu]) + offset_delay[task->id-1]);
    task_duration[task->id-1] = duration;
    return 0;
}

int usertask_sleepexec(task *task)
{
    double sleep = task->next_period - sched_time;
    if (sleep < 0)
        sleep = 0.0;
    usleep((unsigned long)(1000000*sleep));
    task->preemptdelay = 0;
    return 0;
}
// DISPLAY FUNCTIONS USED FOR DEBUGGING
void Display()
{
    int i;
    sched_time = get_time();
    printf("Scheduling point: %.4f\n", sched_time);
    printf("\nTask \t cpu \t thread_pid \t nxt ddlne \t prio \t starts at \t stops at \t idle until \t state\n");
    for (i=0; i<NUM_THREADS; i++)
        printf("T%d \t %d \t %d \t %.4f \t\t %d \t %.4f \t\t %.4f \t\t %.4f \t\t %d\n",
            T[i]->id, T[i]->cpu, T[i]->thread_pid, T[i]->next_deadline,
            T[i]->prio, T[i]->begintime, T[i]->endtime, sched_time, T[i]->state);
    printf("**************************************************\n");
}

void Display_list(task *list[], int list_size)
{
    int i;
    printf("\t Task \t Next_deadline\n");
    for (i = 0; i < list_size; i++)
        printf("\t\t\t T%d \t (%.4f) \n", list[i]->id, list[i]->next_deadline);
    printf("\n");
}

N_application.c
#define _GNU_SOURCE
#include <sched.h>
#include <time.h>
#include <stdio.h>
#include <stdlib.h>
#include <pthread.h>
#include <sys/time.h>
#include <signal.h>
#include <string.h>
#include <sched.h>
#include <sys/types.h>
#include <sys/syscall.h>
#include "N_utils.h"
// THE APPLICATION EXAMPLE USED HERE IS TAKEN FROM AN H264 ENCODER VIDEO APPLICATION
/*
4 TASKS / 2 CPUs
- T1 -> Motion Estimation #1    Motion estimation on half-image number 1
- T2 -> Motion Estimation #2    Motion estimation on half-image number 2
- T3 -> Inv. Prediction + Texture Encoding + Syntax Writing
- T4 -> Loop Filter             Deblocking filter
THIS VERSION REQUIRES 2 CPUs
*/
static task Task[NUM_THREADS] = {
/* Task[0] = */{  / * Task [0] = * / {
1 , // id  1, // id
20.63e-1 , // wcet  20.63e-1, // wcet
5.65e-1 , // bcet  5.65e-1, // bcet
10.40e-1 , // aet  10.40e-1, // aet
0, //ret  0, // ret
21 .0e-1 , // deadline  21 .0e-1, // deadline
0, // nex_deadline  0, // nex_deadline
-1 , // cpu  -1, // cpu
40.0e-1 , // period  40.0e-1, // period
0, // next_period  0, // next_period
-1 , // begintime -1, //endtime -1, // begintime -1, // endtime
0, // préemption  0, // preemption
0, // preemptdeiay  0, // preemptdeiay
0.0, // last_preemp_time 0.0, // last_preemp_time
0.0, // totaLpreempt_duration unexisting, // staie 0.0, // totaLpreempt_duration unexisting, // stale
PTHREAD_ UTEXJNITIALIZER,  PTHREAD_ UTEXJNITIALIZER,
PT H READ_CONDJ NIT!ALIZER,  PT H READ_CONDJ NIT! ALIZER,
NULL, // adresse du pthread associé NULL, // associated pthread address
0 // t read_pid 0 // t read_pid
}.  }.
/*Task[1] = 7{ / * Task [1] = 7 {
2, // id  2, // id
20.63e-1, //wcet 20.63e-1, // wcet
5.65e-1 , // bcet  5.65e-1, // bcet
10.50e-1, //aet  10.50e-1, // aet
0, /ret  0, / ret
21.0Θ-1, //deadline  21.0Θ-1, // deadline
0, // next dead!ine  0, // next dead! Ine
-1 , // cpu  -1, // cpu
40.0e-1, //period 40.0e-1, // period
0, // next_period  0, // next_period
-1, // beginttme  -1, // beginttme
-1, //endtime  -1, // endtime
0, // préemption  0, // preemption
0, // preemptdelay  0, // preemptdelay
0.0, // tast_preempt__time  0.0, // tast_preempt__time
0.0, // total_preempt_du ration unexisting, // state  0.0, // total_preempt_from the unexisting ration, // state
PTHREAD__MUTEX_!NITIALÎZER,  PTHREAD__MUTEX_! NITIALÎZER,
PTHREAD_CONDJN!TIALIZER,  PTHREAD_CONDJN! TIALIZER,
NULL, // adresse du pthread associé NULL, // associated pthread address
0 // threadjaid 0 // threadjaid
}.  }.
Γ Task[2] = { Γ Task [2] = {
3, // id 8.25e-1, //wcet 3, // id 8.25e-1, // wcet
3.38e-1, //bcet  3.38e-1, // bcet
5.96e-1, //aet  5.96e-1, // aet
0, //ret  0, // ret
31.0e-1, //deadline  31.0e-1, // deadline
0, // next deadline  0, // next deadline
-1 , // cpu  -1, // cpu
40.0e-1, //period 40.0e-1, // period
0, // next_period  0, // next_period
-1 , // begintime  -1, // begintime
-1, // endtime  -1, // endtime
0, // préemption  0, // preemption
0, // preemptdelay  0, // preemptdelay
0.0, // !ast_preempt_time 0.0, //! Ast_preempt_time
0.0, // total_preempt_duration unexisting, // state 0.0, // total_preempt_duration unexisting, // state
PTHREAD_MUTEX_iN!T)ALIZER,  PTHREAD_MUTEX_iN! T) ALIZER,
PTHREAD_CONDJNITIALIZER,  PTHREAD_CONDJNITIALIZER,
NULL, // adresse du pthread associé NULL, // associated pthread address
0 // ihread_pid 0 // ihread_pid
},  }
/* Task[3] = */{ / * Task [3] = * / {
4, // id  4, // id
5.78e-1, //wcet 5.78e-1, // wcet
1.81Θ-1, /bcet  1.81Θ-1, / bcet
3.27Θ-1, //aet  3.27Θ-1, // aet
0, //ret  0, // ret
40.0e-1, //deadiine  40.0e-1, // deadiine
0, // next deadline  0, // next deadline
-1 , // cpu  -1, // cpu
40.0e-1, //period 40.0e-1, // period
0, // rsext_period  0, // rsext_period
-1, // beg intime  -1, // beg intime
-1 , // endtime  -1, // endtime
0, // préemption  0, // preemption
0, // preemptdelay 0.0, // lastpreempt_time 0, // preemptdelay 0.0, // last " preempt_time
0.0, // total_preempt_duration  0.0, // total_preempt_duration
unexisting, // state  unexisting, // state
PTHREADJvlUTEXJNlTIALIZER,  PTHREADJvlUTEXJNlTIALIZER,
PTHREAD_COND_INITiAUZER,  PTHREAD_COND_INITiAUZER,
NULL, // adresse du pthread associé  NULL, // associated pthread address
0 // thread_pid  0 // thread_pid
} };  }};
// MECHANISM USED TO PREEMPT A TASK
void sigusr1(int dummy)
{
  int i;
  pid_t pid;
  // Get the pid of the current thread
  pid = (long)syscall(SYS_gettid);
  // Compare the pid of the current thread with that of every thread of the
  // application, in order to identify the thread to suspend. This exploits
  // the fact that the signal handler (sigusr1) runs with the pid of the
  // thread that received the signal.
  for (i = 0; i < NUM_THREADS; i++) {
    if (Task[i].thread_pid == pid) {
      Task[i].state = ready;
      pthread_cond_wait(&Task[i].cond_resume, &Task[i].mut_wait);
      // TO RESTART THE TASK: pthread_cond_broadcast(&Task[i].cond_resume)
      Task[i].state = running;
    }
  }
}
// DECLARATION OF THE APPLICATION TASKS, WITH THEIR TASK EVENTS:
// - ONACTIVATE: at the beginning of execution
// - ONOFFSET: not used
// - ONBLOCK: at the end of the effective execution of a task
// - ONUNBLOCK: when a task reaches its period
// - ONTERMINATE: when a task is completely finished
// EACH TASK CORRESPONDS TO A FUNCTION, WHICH WILL SUBSEQUENTLY BE
// ENCAPSULATED IN A POSIX THREAD:
void *task0(void *arg)
{
  int i;
  /* Task identifier required by the scheduler */
  Task[0].thread_pid = (long)syscall(SYS_gettid);
  onActivate(&Task[0]);
  do
  {
    if (usertask_actualexec(&Task[0]) == -1) {
      puts("ERROR : usertask_actualexec");
      pthread_exit(0);
    }
    onBlock(&Task[0]);
    if (usertask_sleepexec(&Task[0]) == -1) {
      puts("ERROR : usertask_sleepexec");
      pthread_exit(0);
    }
    onUnBlock(&Task[0]);
  } while (sched_time < simulation_time);
  onTerminate(&Task[0]);
  pthread_exit(0);
}

void *task1(void *arg)
{
  int i;
  /* Task identifier required by the scheduler */
  Task[1].thread_pid = (long)syscall(SYS_gettid);
  onActivate(&Task[1]);
  do
  {
    if (usertask_actualexec(&Task[1]) == -1) {
      puts("ERROR : usertask_actualexec");
      pthread_exit(0);
    }
    onBlock(&Task[1]);
    if (usertask_sleepexec(&Task[1]) == -1) {
      puts("ERROR : usertask_sleepexec");
      pthread_exit(0);
    }
    onUnBlock(&Task[1]);
  } while (sched_time < simulation_time);
  onTerminate(&Task[1]);
  pthread_exit(0);
}

void *task2(void *arg)
{
  int i;
  /* Task identifier required by the scheduler */
  Task[2].thread_pid = (long)syscall(SYS_gettid);
  onActivate(&Task[2]);
  do
  {
    if (usertask_actualexec(&Task[2]) == -1) {
      puts("ERROR : usertask_actualexec");
      pthread_exit(0);
    }
    onBlock(&Task[2]);
    if (usertask_sleepexec(&Task[2]) == -1) {
      puts("ERROR : usertask_sleepexec");
      pthread_exit(0);
    }
    onUnBlock(&Task[2]);
  } while (sched_time < simulation_time);
  onTerminate(&Task[2]);
  pthread_exit(0);
}

void *task3(void *arg)
{
  int i;
  /* Task identifier required by the scheduler */
  Task[3].thread_pid = (long)syscall(SYS_gettid);
  onActivate(&Task[3]);
  do
  {
    if (usertask_actualexec(&Task[3]) == -1) {
      puts("ERROR : usertask_actualexec");
      pthread_exit(0);
    }
    onBlock(&Task[3]);
    if (usertask_sleepexec(&Task[3]) == -1) {
      puts("ERROR : usertask_sleepexec");
      pthread_exit(0);
    }
    onUnBlock(&Task[3]);
  } while (sched_time < simulation_time);
  onTerminate(&Task[3]);
  pthread_exit(0);
}
int main()
{
  int i, n;
  void *ret;
  pthread_t Thread[NUM_THREADS];
  simulation_time = 80e-1;
  // PROCESSOR-RELATED INITIALISATIONS
  init_CPUs();
  // APPLICATION-TASK-RELATED INITIALISATIONS
  init_TASKs(Task, Thread);
  // Initialisation of the sigusr1 signal handler
  if (signal(SIGUSR1, sigusr1) == SIG_ERR) {
    perror("signal");
    exit(1);
  }
  printf("\nTask \t WCET \t\t Deadline \t Period \t Offset\n");
  // CREATION OF THE POSIX THREADS
  if (pthread_create(&Thread[0], NULL, task0, "0") < 0)
  {
    printf("pthread_create: error creating thread 1\n");
    exit(1);
  }
  if (pthread_create(&Thread[1], NULL, task1, "1") < 0)
  {
    printf("pthread_create: error creating thread 2\n");
    exit(1);
  }
  if (pthread_create(&Thread[2], NULL, task2, "2") < 0)
  {
    printf("pthread_create: error creating thread 3\n");
    exit(1);
  }
  if (pthread_create(&Thread[3], NULL, task3, "3") < 0)
  {
    printf("pthread_create: error creating thread 4\n");
    exit(1);
  }
  /* SYNCHRONISATION: WAIT UNTIL ALL THE THREADS HAVE EFFECTIVELY BEEN
     CREATED BEFORE LAUNCHING START_SCHED */
  while (rdv != NUM_THREADS) { }
  start_sched();
  (void)pthread_join(Thread[0], &ret);
  (void)pthread_join(Thread[1], &ret);
  (void)pthread_join(Thread[2], &ret);
  (void)pthread_join(Thread[3], &ret);
  // CLOSE THE FREQUENCY-CHANGE FILES
  exit_CPUs();
  // MEASUREMENT OF THE SCHEDULER PROCESSING TIME (AVERAGE PER CALL, TOTAL)
  double average_sched_duration;
  double total_sched_duration = 0;
  printf("NUMBER OF SCHEDULER CALLS: %d\n", nb_sched_call);
  for (i = 0; i < nb_sched_call; i++) {
    total_sched_duration += sched_duration[i];
    printf("%f ", sched_duration[i]);
  }
  printf("\nTOTAL SCHEDULER CALL DURATION: %f\n", total_sched_duration);
  average_sched_duration = total_sched_duration / nb_sched_call;
  printf("AVERAGE SCHEDULER CALL DURATION: %f\n", average_sched_duration);
  return 0;
}
2. pm_drivers

intel_core_i5_m520.h

/*
 * intel_core_i5_m520.h
 */
#ifndef INTEL_CORE_I5_M520
#define INTEL_CORE_I5_M520
#include "pm_typedef.h"
#define NB_CPU_MAX 4
/* available frequencies */
/* IMPORTANT: list of the available frequencies, in descending order:
   2400000 2399000 2266000 2133000 1999000 1866000 1733000 1599000 1466000 1333000 1199000 */
/* definition of the possible states of CPU0 */
#define NB_STATES_CPU 11
#ifdef TARGET_CPU_DEFINED
Elementary_state CPU_STATE[NB_CPU_MAX][NB_STATES_CPU];
#else
Elementary_state CPU_STATE[NB_CPU_MAX][NB_STATES_CPU] = {
  { // frequency / number of the elementary state (frequency) concerned
    // / number of the CPU concerned (these fields are defined in pm_typedef.h)
    { 2400000,  0, 0 },   // CPU0[0]
    { 2399000,  1, 0 },   // CPU0[1]
    { 2266000,  2, 0 },   // CPU0[2]
    { 2133000,  3, 0 },   // CPU0[3]
    { 1999000,  4, 0 },   // CPU0[4]
    { 1866000,  5, 0 },   // CPU0[5]
    { 1733000,  6, 0 },   // CPU0[6]
    { 1599000,  7, 0 },   // CPU0[7]
    { 1466000,  8, 0 },   // CPU0[8]
    { 1333000,  9, 0 },   // CPU0[9]
    { 1199000, 10, 0 } }, // CPU0[10]
  { // frequency / number of the elementary state (frequency) concerned
    // / number of the CPU concerned
    { 2400000,  0, 1 },   // CPU1[0]
    { 2399000,  1, 1 },   // CPU1[1]
    { 2266000,  2, 1 },   // CPU1[2]
    { 2133000,  3, 1 },   // CPU1[3]
    { 1999000,  4, 1 },   // CPU1[4]
    { 1866000,  5, 1 },   // CPU1[5]
    { 1733000,  6, 1 },   // CPU1[6]
    { 1599000,  7, 1 },   // CPU1[7]
    { 1466000,  8, 1 },   // CPU1[8]
    { 1333000,  9, 1 },   // CPU1[9]
    { 1199000, 10, 1 } }, // CPU1[10]
  { // frequency / number of the elementary state (frequency) concerned
    // / number of the CPU concerned
    { 2400000,  0, 2 },   // CPU2[0]
    { 2399000,  1, 2 },   // CPU2[1]
    { 2266000,  2, 2 },   // CPU2[2]
    { 2133000,  3, 2 },   // CPU2[3]
    { 1999000,  4, 2 },   // CPU2[4]
    { 1866000,  5, 2 },   // CPU2[5]
    { 1733000,  6, 2 },   // CPU2[6]
    { 1599000,  7, 2 },   // CPU2[7]
    { 1466000,  8, 2 },   // CPU2[8]
    { 1333000,  9, 2 },   // CPU2[9]
    { 1199000, 10, 2 } }, // CPU2[10]
  { // frequency / number of the elementary state (frequency) concerned
    // / number of the CPU concerned
    { 2400000,  0, 3 },   // CPU3[0]
    { 2399000,  1, 3 },   // CPU3[1]
    { 2266000,  2, 3 },   // CPU3[2]
    { 2133000,  3, 3 },   // CPU3[3]
    { 1999000,  4, 3 },   // CPU3[4]
    { 1866000,  5, 3 },   // CPU3[5]
    { 1733000,  6, 3 },   // CPU3[6]
    { 1599000,  7, 3 },   // CPU3[7]
    { 1466000,  8, 3 },   // CPU3[8]
    { 1333000,  9, 3 },   // CPU3[9]
    { 1199000, 10, 3 } }  // CPU3[10]
};
#endif
#endif
pm_typedef.h

#ifndef PM_TYPEDEF
#define PM_TYPEDEF

/* -- Data structures -- */
typedef unsigned long long Core_freq;

typedef struct Elementary_state {
  Core_freq freq;
  unsigned long elementary_state_id;
  unsigned long core_id;
} Elementary_state;

typedef struct Global_state {
  Elementary_state *state;
  unsigned int nb_values;
  float curr;
  float volt;
} Global_state;

typedef struct Global_state_and_transition {
  float delay;
  float energy;
  Global_state target;
} Global_state_and_transition;

typedef struct Global_states_and_transitions {
  Global_state_and_transition *values;
  unsigned long nb_values;
} Global_states_and_transitions;

typedef struct Core_ids {
  unsigned long *values;
  unsigned long nb_values;
} Core_ids;

/* -- Interface operations -- */
/*
Core_ids* CPU_state_manager_query_core_ids();
Global_states_and_transitions* CPU_state_manager_query_next_possible_states();
void CPU_state_manager_apply_state( Global_state* s );
*/
#endif
pm_cpufreq.c

/*
 * pm_cpufreq.c
 * functions to change the frequency of the processors using cpufreq
 */
#include <stdio.h>
#include <string.h>
#include <fcntl.h>
#include "pm_typedef.h"

#define DEBUG
#define CPUFREQPATH "/sys/devices/system/cpu/cpu"
#define GOVPATH "/cpufreq/scaling_governor"
#define FREQPATH "/cpufreq/scaling_setspeed"

FILE *setspeed_cpu[4];

void itoa(int n, char *u) {
  int i = 0, j;
  char s[17];
  do {
    s[i++] = (char)(n % 10 + 48);
    n -= n % 10;
  } while ((n /= 10) > 0);
  for (j = 0; j < i; j++)
    u[i-1-j] = s[j];
  u[i] = '\0';
}

void set_governor(int cpu, char *gov) {
  FILE *cpu_governor;
  char cpu_gov_filename[60];
  // build the full file name (including the path) for the governor of core
  // number 'cpu'
  char cpu_gov_str[17];
  itoa(cpu, cpu_gov_str);
  strcpy(cpu_gov_filename, CPUFREQPATH);
  strcat(cpu_gov_filename, cpu_gov_str);
  strcat(cpu_gov_filename, GOVPATH);
  char strbuf01[20];
  char strbuf02[20];
  // DISPLAY THE CURRENT CPU GOVERNOR
  cpu_governor = fopen(cpu_gov_filename, "r");
  fscanf(cpu_governor, "%s", strbuf01);
#ifdef DEBUG
  printf("For CPU%d\t", cpu);
  printf("CPU%d current governor:%s\t", cpu, strbuf01);
#endif
  fclose(cpu_governor);
  // CHANGE THE CURRENT CPU GOVERNOR
  cpu_governor = fopen(cpu_gov_filename, "w");
  fprintf(cpu_governor, "%s", gov);
  fclose(cpu_governor);
  // DISPLAY THE NEW CURRENT CPU GOVERNOR (FOR VERIFICATION)
  cpu_governor = fopen(cpu_gov_filename, "r");
  fscanf(cpu_governor, "%s", strbuf02);
#ifdef DEBUG
  printf("->\tCPU%d new governor:%s\n", cpu, strbuf02);
#endif
  fclose(cpu_governor);
}
void open_cpufreq(int cpu) {
  char cpu_freq_filename[60];
#ifdef DEBUG
  printf("Opening\t\t\t\t\t/sys/devices/system/cpu/cpu%d/cpufreq/scaling_setspeed\n", cpu);
#endif
  // build the full file name (including the path) for the frequency
  // specification file (scaling_setspeed) of core number 'cpu'
  char cpu_freq_str[17];
  itoa(cpu, cpu_freq_str);
  strcpy(cpu_freq_filename, CPUFREQPATH);
  strcat(cpu_freq_filename, cpu_freq_str);
  strcat(cpu_freq_filename, FREQPATH);
  setspeed_cpu[cpu] = fopen(cpu_freq_filename, "r+");
}

void CPU_state_manager_apply_state(int cpu, Elementary_state *s) {
  int freq0, freq1;
  // DISPLAY THE CURRENT CPU FREQUENCY
  fscanf(setspeed_cpu[cpu], "%d", &freq0);
  fseek(setspeed_cpu[cpu], 0, SEEK_SET);
#ifdef DEBUG
  printf("CPU%d CURRENT FREQUENCY:%d\t", cpu, freq0);
#endif
  fprintf(setspeed_cpu[cpu], "%llu", s->freq);
  fseek(setspeed_cpu[cpu], 0, SEEK_SET);
  freq1 = -1;
  fscanf(setspeed_cpu[cpu], "%d", &freq1);
  fseek(setspeed_cpu[cpu], 0, SEEK_SET);
#ifdef DEBUG
  printf("->\tCPU%d NEW FREQUENCY:%d\n", cpu, freq1);
#endif
}

int CPU_state_manager_query_current_freq(int cpu) {
  int freq0;
  fscanf(setspeed_cpu[cpu], "%d", &freq0);
  fseek(setspeed_cpu[cpu], 0, SEEK_SET);
  return freq0;
}

void close_cpufreq(int cpu) {
#ifdef DEBUG
  printf("Closing\t\t\t\t/sys/devices/system/cpu/cpu%d/cpufreq/scaling_setspeed\n", cpu);
#endif
  fclose(setspeed_cpu[cpu]);
}

Claims

1. A method for scheduling tasks with deadline constraints, based on a model of independent periodic tasks and carried out in user space, wherein:

• each task to be scheduled is associated with a data structure, defined in user space and containing at least one item of temporal information and an item of information indicating an activity state of the task, said activity state being chosen from a list comprising at least:

- a state of task being executed;

- a state of task waiting for the end of its execution period; and

- a state of task ready to be executed, waiting for a resume condition;

• during its execution, each task modifies said information indicating its activity state and, where appropriate, depending on a predefined scheduling policy, calls a scheduler which is executed in user space;

• at each call, said scheduler:

- establishes a queue of the tasks that are ready to be executed, waiting for a resume condition;

- sorts said queue according to a predefined priority criterion;

- if necessary, preempts a task being executed by sending it a signal forcing it to pass into said state of task ready to be executed, waiting for a resume condition; and

- sends said resume condition at least to the task at the head of said queue.

2. The scheduling method according to claim 1, wherein said scheduling policy is a preemptive policy, such as EDF, RM, DM or LLF.

3. The scheduling method according to one of the preceding claims, implemented in a multiprocessor platform, wherein said data structure also comprises information relating to a processor to which the corresponding task is assigned, and wherein said scheduler assigns each task that is ready to be executed to a processor of the system.

4. The scheduling method according to one of the preceding claims, wherein said scheduler modifies the clock frequency and the supply voltage of the processor or of at least one processor according to a DVFS policy.

5. The scheduling method according to one of the preceding claims, comprising an initialization step during which:

- the tasks to be scheduled are created, assigned to a same processor and placed in a state of waiting for a resume condition, a global so-called rendezvous variable being incremented or decremented upon the creation of each said task;

- when said rendezvous variable takes a predefined value indicating that all the tasks have been created, said scheduler is executed for the first time.

6. The scheduling method according to one of the preceding claims, wherein said data structure also contains information indicating a thread associated with said task and its worst-case execution time.

7. The scheduling method according to one of the preceding claims, executed under an operating system compatible with a POSIX standard.

8. The scheduling method according to claim 7, wherein said operating system is a Linux system.

9. The scheduling method according to one of claims 7 or 8, wherein, at each call of the scheduler, a "pthread" is created to carry out its execution.

10. The scheduling method according to one of claims 7 to 9, comprising the use of a MUTEX to ensure that only one instance of the scheduler is executed at a time.

11. The scheduling method according to one of claims 9 or 10 when dependent on claims 3 and 8, wherein the assignment of a task to a processor is performed by means of the CPU Affinity API.

12. A computer program product for implementing a scheduling method according to one of the preceding claims.
EP13792761.2A 2012-11-06 2013-11-05 Method for scheduling with deadline constraints, in particular in linux, carried out in user space Ceased EP2917834A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
FR1260529A FR2997773B1 (en) 2012-11-06 2012-11-06 METHOD OF SCHEDULING WITH DELAY CONSTRAINTS, ESPECIALLY IN LINUX, REALIZED IN USER SPACE.
PCT/IB2013/059916 WO2014072904A1 (en) 2012-11-06 2013-11-05 Method for scheduling with deadline constraints, in particular in linux, carried out in user space

Publications (1)

Publication Number Publication Date
EP2917834A1 true EP2917834A1 (en) 2015-09-16

Family

ID=48128392

Family Applications (1)

Application Number Title Priority Date Filing Date
EP13792761.2A Ceased EP2917834A1 (en) 2012-11-06 2013-11-05 Method for scheduling with deadline constraints, in particular in linux, carried out in user space

Country Status (4)

Country Link
US (1) US9582325B2 (en)
EP (1) EP2917834A1 (en)
FR (1) FR2997773B1 (en)
WO (1) WO2014072904A1 (en)

Families Citing this family (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103870327A (en) * 2012-12-18 2014-06-18 华为技术有限公司 Real-time multitask scheduling method and device
WO2016064312A1 (en) * 2014-10-22 2016-04-28 Telefonaktiebolaget L M Ericsson (Publ) Coordinated scheduling between real-time processes
US10348621B2 (en) 2014-10-30 2019-07-09 AT&T Intellectual Property I. L. P. Universal customer premise equipment
KR102079499B1 (en) * 2015-10-20 2020-02-21 엘에스산전 주식회사 A method of independent control period allocation of axis in the PLC positioning system
GB2545435B (en) * 2015-12-15 2019-10-30 Advanced Risc Mach Ltd Data processing systems
DE102016200777A1 (en) * 2016-01-21 2017-07-27 Robert Bosch Gmbh Method and apparatus for monitoring and controlling quasi-parallel execution threads in an event-oriented operating system
US10210020B2 (en) * 2016-06-29 2019-02-19 International Business Machines Corporation Scheduling requests in an execution environment
US10289448B2 (en) 2016-09-06 2019-05-14 At&T Intellectual Property I, L.P. Background traffic management
US20200167191A1 (en) * 2018-11-26 2020-05-28 Advanced Micro Devices, Inc. Laxity-aware, dynamic priority variation at a processor
US11347544B1 (en) * 2019-09-26 2022-05-31 Facebook Technologies, Llc. Scheduling work items based on declarative constraints
CN114818570B (en) * 2022-03-11 2024-02-09 西北工业大学 Embedded system time sequence analysis method based on Monte Carlo simulation
CN114706602B (en) * 2022-04-01 2023-03-24 珠海读书郎软件科技有限公司 Android-based method for updating parameters of touch screen through app

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20020073129A1 (en) * 2000-12-04 2002-06-13 Yu-Chung Wang Integrated multi-component scheduler for operating systems
WO2009158220A2 (en) * 2008-06-27 2009-12-30 Microsoft Corporation Protected mode scheduling of operations

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP3938343B2 (en) * 2002-08-09 2007-06-27 インターナショナル・ビジネス・マシーンズ・コーポレーション Task management system, program, and control method
US8387052B2 (en) * 2005-03-14 2013-02-26 Qnx Software Systems Limited Adaptive partitioning for operating system
US20070074217A1 (en) * 2005-09-26 2007-03-29 Ryan Rakvic Scheduling optimizations for user-level threads
US8495662B2 (en) * 2008-08-11 2013-07-23 Hewlett-Packard Development Company, L.P. System and method for improving run-time performance of applications with multithreaded and single threaded routines
US8510744B2 (en) * 2009-02-24 2013-08-13 Siemens Product Lifecycle Management Software Inc. Using resource defining attributes to enhance thread scheduling in processors
WO2012005639A1 (en) * 2010-07-06 2012-01-12 Saab Ab Simulating and testing avionics
US9430281B2 (en) * 2010-12-16 2016-08-30 Advanced Micro Devices, Inc. Heterogeneous enqueuing and dequeuing mechanism for task scheduling
FR2969776B1 (en) * 2010-12-23 2013-01-11 Thales Sa METHOD FOR MANAGING THE ENERGY CONSUMPTION OF AN APPLICATION EXECUTABLE IN DIFFERENT ENVIRONMENTS AND SOFTWARE ARCHITECTURE USING SUCH A METHOD

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of WO2014072904A1 *

Also Published As

Publication number Publication date
WO2014072904A1 (en) 2014-05-15
US20150293787A1 (en) 2015-10-15
FR2997773B1 (en) 2016-02-05
FR2997773A1 (en) 2014-05-09
US9582325B2 (en) 2017-02-28

Similar Documents

Publication Publication Date Title
EP2917834A1 (en) Method for scheduling with deadline constraints, in particular in linux, carried out in user space
Biondi et al. Achieving predictable multicore execution of automotive applications using the LET paradigm
Axer et al. Response-time analysis of parallel fork-join workloads with real-time constraints
Mraidha et al. Optimum: a marte-based methodology for schedulability analysis at early design stages
EP0536010A1 (en) Method and apparatus for the real-time control of a system comprising at least one processor capable of managing several tasks
Patel et al. Analytical enhancements and practical insights for MPCP with self-suspensions
Elliott Real-time scheduling for GPUS with applications in advanced automotive systems
Suzuki et al. Real-time ros extension on transparent cpu/gpu coordination mechanism
Abeni et al. Hierarchical scheduling of real-time tasks over Linux-based virtual machines
Sigrist et al. Mixed-criticality runtime mechanisms and evaluation on multicores
Li et al. Application execution time prediction for effective cpu provisioning in virtualization environment
Medina et al. Directed acyclic graph scheduling for mixed-criticality systems
Beckert et al. Zero-time communication for automotive multi-core systems under SPP scheduling
Prenzel et al. Real-time dynamic reconfiguration for IEC 61499
Kumar et al. A systematic survey of multiprocessor real-time scheduling and synchronization protocol
Ruaro et al. Dynamic real-time scheduler for large-scale MPSoCs
Buttazzo et al. Ptask: An educational C library for programming real-time systems on Linux
Elliott et al. Building a real-time multi-GPU platform: Robust real-time interrupt handling despite closedsource drivers
Osborne et al. Work in progress: Combining real time and multithreading
Syed et al. Online admission of non-preemptive aperiodic mixed-critical tasks in hierarchic schedules
Gu et al. Synthesis of real-time implementations from component-based software models
Kreiliger et al. Experiments for predictable execution of GPU kernels
Chen Fundamentals of Real-Time Systems
Inam et al. Mode-change mechanisms support for hierarchical freertos implementation
Wang et al. Unleashing the Power of Preemptive Priority-based Scheduling for Real-Time GPU Tasks

Legal Events

Date Code Title Description
PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

17P Request for examination filed

Effective date: 20150527

AK Designated contracting states

Kind code of ref document: A1

Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR

AX Request for extension of the european patent

Extension state: BA ME

DAX Request for extension of the european patent (deleted)
STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: EXAMINATION IS IN PROGRESS

17Q First examination report despatched

Effective date: 20190531

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: EXAMINATION IS IN PROGRESS

REG Reference to a national code

Ref country code: DE

Ref legal event code: R003

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE APPLICATION HAS BEEN REFUSED

18R Application refused

Effective date: 20211225