EP3807757A1 - Method for accelerating the execution of a single-path program by parallel execution of conditionally concurrent sequences - Google Patents

Method for accelerating the execution of a single-path program by parallel execution of conditionally concurrent sequences

Info

Publication number
EP3807757A1
Authority
EP
European Patent Office
Prior art keywords
sequence
execution
resource
program
satisfied
Prior art date
Legal status
Withdrawn
Application number
EP19758795.9A
Other languages
English (en)
French (fr)
Inventor
Mathieu JAN
Current Assignee
Commissariat à l'Energie Atomique et aux Energies Alternatives (CEA)
Original Assignee
Commissariat à l'Energie Atomique (CEA)
Commissariat à l'Energie Atomique et aux Energies Alternatives (CEA)
Priority date
Filing date
Publication date
Application filed by Commissariat à l'Energie Atomique (CEA) and Commissariat à l'Energie Atomique et aux Energies Alternatives (CEA)
Publication of EP3807757A1
Current status: Withdrawn


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00: Arrangements for program control, e.g. control units
    • G06F9/06: Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/30: Arrangements for executing machine instructions, e.g. instruction decode
    • G06F9/38: Concurrent instruction execution, e.g. pipeline or look ahead
    • G06F9/3836: Instruction issuing, e.g. dynamic instruction scheduling or out of order instruction execution
    • G06F9/3851: Instruction issuing, e.g. dynamic instruction scheduling or out of order instruction execution from multiple instruction streams, e.g. multistreaming
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00: Arrangements for program control, e.g. control units
    • G06F9/06: Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/22: Microcontrol or microprogram arrangements
    • G06F9/28: Enhancement of operational speed, e.g. by using several microcontrol devices operating in parallel

Definitions

  • the field of the invention is that of real-time computer systems for which the execution time of tasks, and in particular the worst-case execution time (called WCET for "Worst Case Execution Time"), must be known to ensure validation and guarantee safety.
  • the invention aims more particularly to improve the accuracy of the estimation of the WCET of a program by making it possible to provide a guaranteed WCET without being too pessimistic.
  • Real-time systems must react reliably, which implies both being certain of the result produced by their programs but also knowing the time they take to execute.
  • the worst-case execution times are therefore fundamental data for the validation and safety of such real-time systems, and even more so in the context of autonomous real-time systems (robotics, autonomous cars, GPS) for which safety is essential.
  • the code transformation technique called "single-path" (in French, « chemin unique ») makes it possible to predict the execution time of a program and therefore to provide a reliable WCET.
  • the different code sequences that must be selectively executed according to the result of a conditional branch examining input data (one can therefore also speak of conditionally concurrent sequences, or sequences of an alternative, since they constitute the possible choices of an alternative) are merged into a single sequential code, relying on the ability of certain processors to associate predicates with their assembly instructions in order to preserve the original semantics of the program.
  • this "single-path" transformation technique therefore makes it possible to reduce the combinatorics of the possible execution paths of a program, leading to a single execution path.
  • the measurement of a single execution time of the program thus transformed is therefore sufficient to provide the WCET of the program.
  • the measurement process to determine the WCET is thereby simplified because the problem of the coverage rate of a program achieved by a measurement campaign is eliminated.
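  • By way of illustration only (this sketch is not part of the original description), the effect of the "single-path" transformation can be shown on a small C function: both sequences of the alternative are always evaluated, and a branch-free, predicated selection keeps only the result of the selected sequence, so that the executed instruction stream no longer depends on the input data.

      #include <stdio.h>

      /* Original, two-path version: the executed path depends on the input. */
      static int two_path(int x) {
          int y;
          if (x > 0)          /* conditional branch "CS?"  */
              y = x * 2;      /* satisfied sequence  (l2)  */
          else
              y = -x;         /* unsatisfied sequence (l3) */
          return y;           /* continuation        (l4)  */
      }

      /* Single-path version: both sequences are evaluated, and a predicated
       * (branch-free) selection keeps only the result of the selected one.
       * On processors with predication, a compiler emits predicated
       * instructions instead of the explicit selection written below.      */
      static int single_path(int x) {
          int p       = (x > 0);   /* predicate of the alternative          */
          int y_sat   = x * 2;     /* satisfied sequence, always executed   */
          int y_unsat = -x;        /* unsatisfied sequence, always executed */
          return p ? y_sat : y_unsat;  /* predicated selection              */
      }

      int main(void) {
          for (int x = -2; x <= 2; ++x)
              printf("x=%d two_path=%d single_path=%d\n",
                     x, two_path(x), single_path(x));
          return 0;
      }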
  • the invention aims to propose a technique for eliminating this increase in execution time, and therefore in WCET. To this end, it proposes a method of executing a program by a computer system having computing resources capable of executing sequences of instructions, the program comprising a conditional selection of a sequence of instructions from among a so-called satisfied sequence and at least one so-called unsatisfied sequence. This method includes the following steps:
  • the distribution of the execution of the satisfied sequence and of the unsatisfied sequence consists in having the unsatisfied sequence executed by the first computing resource and the satisfied sequence by the second computing resource;
  • a data item written in memory by one of the first and second computing resources is subject to a visibility restriction so as to be visible only to the one of the first and second computing resources that wrote the data to memory;
  • each of the first and second computing resources notifies the other of the termination of the execution of the one of the satisfied and unsatisfied sequences that it executes;
  • the continuation of the execution of the program is carried out by the computing resource having executed the sequence of instructions selected by the conditional selection during the parallel execution of the satisfied sequence and the unsatisfied sequence;
  • FIG. 1 illustrates a standard conditional branching structure of the if-then-else type
  • FIG. 2 illustrates the steps of the method according to the invention of distributing the sequences of an alternative and of their execution in parallel each by a different computing resource
  • FIG. 3 illustrates the steps of the method according to the invention for terminating the execution in parallel of the sequences of an alternative and for continuing the execution of the program by a computing resource.
  • the invention relates to a method for executing a program by a computer system, in particular a real-time system, having computing resources capable of executing sequences of instructions.
  • the computer system is for example a calculation processor, single-core or multi-core.
  • the program can in particular execute tasks, for example real-time tasks, programmed according to the “single-path” programming technique, the method according to the invention making it possible to accelerate the execution of this single-path program.
  • FIG. 1 shows the processing of a standard conditional branching structure present in a program P executed by a computing resource A.
  • This program consists of three sequences of instructions l1, l2 and l3.
  • The sequence of instructions l1 ends with a standard conditional branch instruction, the execution of which causes the evaluation "CS?" of the satisfaction of a branching condition and the selection, depending on the result of this evaluation, of a sequence of instructions to be executed from the two possible sequences l2 and l3:
  • the satisfied sequence l2, which is the sequence executed when the condition is satisfied ("O" in FIG. 1);
  • the unsatisfied sequence l3, which is the sequence executed when the condition is not satisfied ("N" in FIG. 1).
  • the invention proposes a new type of instruction, called the instruction for distributing conditionally concurrent sequences (or more simply the distribution instruction in the following), which, when executed, achieves, due to the presence of a conditional selection of one of the sequences, a distribution of the execution of these different sequences in parallel on different computing resources.
  • the conditional selection can be a selection of the if-then-else type, making it possible to select one of the two possible sequences of an alternative, a satisfied sequence and an unsatisfied sequence.
  • the invention extends to a conditional selection of the switch type, making it possible to select a sequence from a plurality of possible sequences (typically at least three possible sequences), namely a satisfied sequence and at least one unsatisfied sequence.
  • the program P is initially executed by a first computing resource A and the execution of the sequence of instructions l1 comprises a conditional selection of a sequence of instructions from among a satisfied sequence and at least one unsatisfied sequence.
  • This conditional selection may include the evaluation of the satisfaction of a branching condition and the selection, as a function of the result of this evaluation, of a sequence of instructions to be executed from two possible sequences.
  • the sequence of instructions l1 ends with a distribution instruction which, when executed by the computing resource A, causes the execution of the satisfied sequence and of the unsatisfied sequence to be distributed between the first computing resource A and a second computing resource B of the computer system, different from resource A.
  • in a first variant, the conditional selection results from the execution, prior to the distribution instruction, of an instruction testing the satisfaction of the condition.
  • The result of the execution of the test instruction is stored in a part of the status register of the micro-architecture, and the distribution instruction uses this information to determine the address at which the program continues, that is to say the address of the sequence selected by the conditional selection.
  • in a second variant, the conditional selection results from the execution of the distribution instruction itself.
  • In this case, the distribution instruction takes as parameters the registers on which the condition must be evaluated, and the result of this evaluation is used directly during the execution of the instruction to determine the address at which the program continues, i.e. the address of the sequence selected by the conditional selection.
  • the distribution instruction is a branch instruction enriched so as to designate the second computing resource B.
  • The branch instruction can thus take as argument the second computing resource B; in this case, it is during the construction of the binary that this information must be produced.
  • Alternatively, the branch instruction can take as argument a specific register (the usable_resources register in the example below) to identify the second computing resource B among a set of usable resources.
  • the distribution can consist in having the unsatisfied sequence l3 executed by the first computing resource A and the satisfied sequence l2 by the second computing resource B.
  • The choice to offload the satisfied sequence makes it possible, on the first computing resource A executing the unsatisfied sequence, to continue prefetching the program instructions sequentially and thus to avoid introducing any hazards into the execution of the program at the level of the micro-architecture's instruction execution pipeline.
  • the distribution includes a request RQ to offload the execution of one of the satisfied and unsatisfied sequences, this request being issued by the first computing resource A to the second computing resource B. When this offload request is accepted (ACK) by the second computing resource B, the program X which was being executed by the second computing resource B is suspended.
  • This suspension is considered as an interruption in the operation of the computing resource B, and the execution context of the program X is then saved.
  • a transfer TS, from resource A to resource B, of the state necessary to begin the execution of the one of the satisfied and unsatisfied sequences that resource B must execute is then carried out. This transfer concerns the values of the registers handled by the program P before the distribution instruction, the current stack structure of the program P, as well as the identification of the computing resource A.
  • the satisfied sequence l2 and the unsatisfied sequence l3 are then executed in parallel, each by one computing resource among the first resource A and the second resource B.
  • the program P comprises a fourth sequence of instructions l4 which must be executed once the parallel execution of the sequences l2 and l3 ends.
  • The invention proposes that the instruction sequences l2 and l3 each end with a parallelism termination instruction.
  • In the example of FIG. 3, the sequence of instructions l3 executed by the computing resource A is the first to terminate, and the execution of the parallelism termination instruction causes the computing resource A to notify (TR) the computing resource B of the termination of the sequence l3.
  • When the computing resource B in turn completes its sequence, the execution of the parallelism termination instruction causes the computing resource B to notify the computing resource A of this termination.
  • In this example, resource B has executed the sequence l2, which turns out to be the sequence selected by the conditional selection (the condition was satisfied in this case).
  • the execution of the program P is continued on the computing resource B by executing the instructions of the instruction block l4, after the resource B has requested (TE) from the resource A a transfer (NE) of the state of the notification registers in order to update them locally.
  • The computing resource A can then resume the execution of the program X which was executed on the computing resource B before the parallel execution of the satisfied and unsatisfied sequences l2 and l3, by restoring the execution context of this program from its backup.
  • each of the computing resources A and B calls this parallelism termination instruction in order, first of all, to wait for the termination of the other sequence so as to preserve the property of temporal predictability of the program, and then to determine at which instruction the execution of the program continues.
  • this parallelism termination instruction has the effect of selecting the computing resource on which the execution of the program will continue as well as of retaining only the data produced by the selected sequence.
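  • As a purely software analogy (an assumption made for illustration, not the hardware mechanism of the invention), the distribution/termination flow just described can be mimicked in C with POSIX threads: the main thread plays the role of computing resource A and executes the unsatisfied sequence, a worker thread plays resource B and executes the satisfied sequence, the join stands in for the termination notification, and the program continues with the data produced by the sequence selected by the condition.

      #include <pthread.h>
      #include <stdio.h>

      /* Shared state standing in for the data produced by each sequence. */
      static int result_satisfied;    /* data produced by l2 on "resource B" */
      static int result_unsatisfied;  /* data produced by l3 on "resource A" */

      static void *resource_B(void *arg) {
          int x = *(int *)arg;
          result_satisfied = x * 2;   /* satisfied sequence l2 */
          return NULL;                /* pthread_join() below stands in for
                                       * the termination notification (TR) */
      }

      int main(void) {
          int x = 5;
          int condition = (x > 0);    /* conditional selection "CS?" */

          /* Distribution: the satisfied sequence is offloaded to "resource B". */
          pthread_t B;
          pthread_create(&B, NULL, resource_B, &x);

          result_unsatisfied = -x;    /* unsatisfied sequence l3 on "resource A" */

          /* Parallelism termination: wait for the other sequence, then keep
           * only the data produced by the selected sequence.                */
          pthread_join(B, NULL);
          int y = condition ? result_satisfied : result_unsatisfied;

          printf("continuation l4 uses y = %d\n", y);   /* continuation l4 */
          return 0;
      }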
  • the distribution and termination instructions proposed by the invention can be generated in a conventional manner by a compiler during the construction of a binary of the program processed.
  • a first strategy can consist in continuing the execution of the program P on the computing resource which has executed the unsatisfied alternative, which can induce a transfer of data from the other computing resource if the sequence selected is the satisfied sequence.
  • another strategy can consist in continuing the execution of the program P on the resource which has executed the selected sequence in order to avoid this transfer of data.
  • each write access creates a new copy of a data item, and an identifier of the computing resource owning this data item is then added to the meta-information associated with the data item. This identifier is thus used to determine whether a computing resource can access this data.
  • This mechanism for restricting the visibility of the data handled by a computing resource makes it possible to privatize a level of the memory hierarchy shared between the computing resources.
  • this piece of data must not be made visible to other programs or, via inputs/outputs (I/O), to the external environment of the system executing the programs.
  • Data having among its meta-information the identifier of a computing resource used to implement the parallel execution of a sequence of an alternative cannot then be updated in a memory for this purpose or to an I/O. This restriction forces the program developer to implement communications to other programs or to I/O outside the parallel execution of the sequences of an alternative.
  • so as not to introduce this mechanism for restricting the visibility of data into the standard operation of a central memory, for example of the DRAM type, the use of this mechanism is limited to the memory hierarchy between the computing resources and the main memory.
  • a data item written into memory by one of the first computing resource A and the second computing resource B is subject to a visibility restriction so as to be visible only to the one of the first and second computing resources that wrote the data to memory.
  • The method comprises the termination of the restriction of visibility of the data written in memory by the one of the first and second computing resources which executed, during the parallel execution of the satisfied sequence and the unsatisfied sequence, the selected sequence of instructions.
  • These data, those of the sequence l2 executed by the resource B in the example of FIG. 3, are thus made visible to all of the computing resources.
  • The method also comprises, during the continuation of the execution of the program, the invalidation of the data written in memory by the one of the first and second computing resources which did not execute, during the parallel execution of the satisfied sequence and the unsatisfied sequence, the selected sequence of instructions.
  • the data of the sequence l3 executed by the resource A are thus rendered invalid. It is possible to specify, for a given program, a maximum authorized number of simultaneous parallel executions of sequences of an alternative. This maximum authorized number cannot be greater than the number of computing resources of the hardware architecture considered.
  • the “single-path” code transformation technique can be applied to alternatives which cannot be the subject of the distribution according to the invention in order to maintain the construction of a single execution path.
  • the sequences selected and not selected by the conditional selection are executed one after the other by the first calculation resource.
  • the method according to the invention makes it possible not to solicit the conventional branch prediction units, since by construction the two sequences of an alternative are executed. No backward transmission for updating the instruction counter at the instruction fetch stage of a micro-architecture is therefore necessary. However, an exploration of the choices of the computing resource to be used to continue the execution of the program after termination of the parallel execution of the sequences of an alternative can be performed in order, for example, to reduce the WCET of the program.
  • a table is associated with each calculation resource and each entry in the table contains a current program identifier P, a maximum authorized number of simultaneous parallel executions EPSmax, an EPSact counter of simultaneous parallel executions (initialized to 0) for this program P and two sets of size equal to the number of computing resources of the hardware architecture.
  • the first set, noted usable_resources, indicates the computing resources usable for a parallel execution of the sequences of the alternatives of the program P
  • the second set, noted used_resources, indicates the computing resources currently used by this same program P.
  • The initialization of usable_resources is the responsibility of a binary construction phase, while used_resources initially contains the computing resource used to start the execution of the program P.
  • An execution without parallelism results in a size of one element for the set used_resources, while an execution with parallelism requires that the size of this same set be greater than 1.
  • Two sets of notification registers also make it possible to indicate, per computing resource, 1) the first failure of a request to offload a sequence of an alternative, the value of the instruction counter upon failure of this request (initially 0) and the occurrence of subsequent failures, via a field called the additional failure notification field, for example a bit (initially invalidated), and 2) the first attempt to exceed the maximum authorized number EPSmax, the value of the instruction counter during this attempt (initially 0) and the occurrence of subsequent attempts, via a field called the additional exceed-attempt notification field (initially invalidated).
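  • A minimal C sketch of the bookkeeping just described could look as follows (field and type names are assumptions made for illustration; the patent does not prescribe a layout), with one table entry per computing resource and the two sets represented as bit masks over the computing resources:

      #include <stdint.h>
      #include <stdbool.h>

      #define MAX_RESOURCES 32   /* assumed upper bound on computing resources */

      /* Notification registers of one computing resource (the two sets of
       * registers described above).                                          */
      struct notification_regs {
          uintptr_t offload_fail_pc;      /* PC of first failed offload request (0 = none) */
          bool      offload_fail_more;    /* additional offload-failure notification field */
          uintptr_t exceed_attempt_pc;    /* PC of first attempt to exceed EPSmax (0 = none) */
          bool      exceed_attempt_more;  /* additional exceed-attempt notification field   */
      };

      /* One entry of the table associated with a computing resource. */
      struct resource_table_entry {
          int      program_id;            /* identifier of the current program P        */
          unsigned eps_max;               /* maximum authorized simultaneous executions */
          unsigned eps_act;               /* counter of current parallel executions     */
          uint32_t usable_resources;      /* bit i set: resource i usable by P          */
          uint32_t used_resources;        /* bit i set: resource i currently used by P  */
          struct notification_regs notif;
      };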
  • an interrupt mechanism can be used in order to notify a computing resource of the occurrence of such events. All of this information can be part of the program execution context, which must be saved/restored each time the program is preempted/resumed.
  • the meta-information is supplemented by information indicating whether the data is globally visible to all of the resources (noted global, valid by default), and an identification of the computing resource owning the data (noted owner, invalidated by default).
  • the distribution instruction is treated as a conventional conditional branch instruction and the present method is not implemented.
  • all the registers for notification of an attempt to exceed the maximum authorized number are updated with, if it is the first attempt to exceed (identifiable by an instruction counter value of 0), the address of the distribution instruction. If this is not the first attempt to exceed, only the additional exceed-attempt notification field is validated. The method then waits for a new distribution instruction to resume step A-1.
  • If these values are not identical, a computing resource B usable by this program but not yet used is identified by the difference between the sets usable_resources and used_resources. The method then continues at step A-2.
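  • For illustration, the check performed when the distribution instruction is processed (step A-1) can be sketched in C on top of the hypothetical structures above: if EPSact equals EPSmax the distribution falls back to a conventional branch, otherwise a usable-but-unused resource is looked for by set difference.

      #include <stdint.h>

      /* Returns the index of a computing resource that is usable but not yet
       * used (difference usable_resources \ used_resources), or -1 if the
       * parallel distribution cannot take place. The bit-mask representation
       * is an illustrative assumption, not a layout imposed by the patent.   */
      static int find_offload_target(unsigned eps_act, unsigned eps_max,
                                     uint32_t usable_resources,
                                     uint32_t used_resources) {
          if (eps_act == eps_max)
              return -1;                       /* attempt to exceed EPSmax: the
                                                * distribution instruction is treated
                                                * as a conventional conditional branch */
          uint32_t candidates = usable_resources & ~used_resources;
          for (int i = 0; i < 32; ++i)
              if (candidates & (1u << i))
                  return i;                    /* computing resource B for step A-2 */
          return -1;                           /* no usable resource left            */
      }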
  • the computing resource A notifies the computing resource B of a request to offload the execution of one of the sequences among the satisfied sequence and the unsatisfied sequence, and waits for a response from the latter.
  • An execution offload request consists of a pair comprising the identifier of the program P and the identifier of the computing resource A.
  • the identifier of the computing resource A issuing the request is verified as being part of the computing resources capable of issuing such a request, namely whether the computing resource A belongs to the used_resources set associated with this program.
  • The computing resource B notifies the rejection of the offload request to the computing resource A issuing the request.
  • All the registers for notification of failure of a request to offload a sequence of an alternative are then updated on the computing resource A with, if it is the first offload request failure (identifiable by an instruction counter value of 0), the address of the distribution instruction. If this is not a first offload attempt, only the additional offload-request failure notification field is validated. The method continues at step A-4.
  • If the offload request is accepted, the EPSact counter is incremented. Then, the following information is transmitted by the computing resource A to the computing resource B: all of the volatile and non-volatile registers handled by the program, the stack pointer, the current stack structure, the identifier of the first computing resource, the value of the EPSact counter, the branch address specified by the distribution instruction of the alternative, as well as the condition value governing the selection of one of the sequences of the alternative.
  • The method then continues at step A-6.
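  • The information exchanged at this point can be summarized, purely as an illustrative C sketch (names and types are assumptions), by two structures: the offload request itself (program identifier, identifier of the issuing resource) and the state transferred once the request is accepted, as listed above.

      #include <stdint.h>
      #include <stddef.h>

      #define NUM_REGS 32   /* assumed number of program-visible registers */

      /* Offload request RQ sent by computing resource A to computing resource B. */
      struct offload_request {
          int program_id;       /* identifier of the program P                  */
          int requester_id;     /* identifier of the issuing computing resource */
      };

      /* State transferred from A to B when the request is accepted (ACK). */
      struct offload_transfer {
          uintptr_t regs[NUM_REGS];   /* volatile and non-volatile registers        */
          uintptr_t stack_pointer;    /* stack pointer of the program P             */
          void     *stack_copy;       /* current stack structure of the program P   */
          size_t    stack_size;
          int       requester_id;     /* identifier of the first computing resource */
          unsigned  eps_act;          /* current value of the EPSact counter        */
          uintptr_t branch_address;   /* address specified by the distribution instruction   */
          int       condition_value;  /* condition selecting one sequence of the alternative */
      };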
  • If the offload request is not accepted by resource B, resource B is removed from the set of usable resources associated with the computing resource A, another computing resource that is usable but not yet used is identified (in the same way as in step A-1 of the method), and the method then continues at step A-2 on the computing resource A.
  • Resource B can possibly be added back later to the set of usable resources associated with the computing resource A when conditions are met, for example when the application load of resource B is lower or when the system configuration changes and resource B can be used again.
  • The distribution instruction is then treated as a conventional conditional branch instruction. All the registers for notification of failure of an offload request are however updated with, if it is the first offload attempt for this program (identifiable by an instruction counter value of 0), the address of the distribution instruction of the alternative. If this is not a first attempt, only the additional offload-request failure notification field is validated. The method then waits for a new distribution instruction to resume step A-1.
  • At step A-6, the value of the instruction counter of the computing resource A, issuing the offload request, is positioned at the next instruction, and the execution of the unsatisfied sequence of the alternative continues on the computing resource B, receiving the offload request, at the instruction specified in the distribution instruction.
  • The set of usable resources is updated to include the computing resource B, having accepted the offload request.
  • On a write access, a new copy of the modified data is inserted into the memory hierarchy, and the global and owner fields of its meta-information are respectively invalidated and positioned at the identifier of the computing resource A or B.
  • The updating strategy (whether immediate or deferred) only concerns these caches and therefore excludes any impact on the main memory or on the I/O, in order to avoid making data that is inconsistent for other programs or for the external environment available.
  • The mechanism for updating a cache of the last level of the memory hierarchy is deactivated when the global and owner fields are respectively invalidated and positioned at the identifier of a computing resource. This rule for write accesses can apply only to accesses to the first shared level of a memory hierarchy, if no hardware coherence is ensured between the private levels of the memory hierarchy.
  • the request is transmitted to the first level of the memory hierarchy of the other computing resource B used in this level of parallel execution of the sequences of an alternative.
  • A variant is to carry out this transmission in parallel with the transmission of the request to the first level of the memory hierarchy of the computing resource A; however, this increases the worst-case latency of a memory request from a computing resource.
  • A compromise can be explored to allow such simultaneity during a certain number of hardware cycles and thus reduce the latency of these memory accesses made by the second computing resource.
  • Another variant is to transmit the requests in parallel to all of the first levels of the memory hierarchy of the computing resources used by the program (those identified by the set used_resources).
  • the memory request of a computing resource A can only consult the data for which the global and owner fields of its meta-information are respectively valid and invalidated (data not modified by any other sequence of an alternative) or respectively invalidated and equal to the identifier of the computing resource A (data having been previously modified by the sequence executing on the computing resource A).
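  • As an illustration of the visibility rules described above (a sketch under assumed names, not the patent's implementation), the global and owner meta-information can be modelled in C together with the write rule (new private copy) and the read rule (globally visible data without an owner, or data owned by the requesting resource):

      #include <stdbool.h>

      #define NO_OWNER (-1)

      /* Meta-information attached to a copy of a data item in the shared level
       * of the memory hierarchy.                                               */
      struct meta_info {
          bool global;   /* valid: data visible to all computing resources      */
          int  owner;    /* identifier of the owning resource, NO_OWNER if none */
      };

      /* Write rule: a write performed during the parallel execution of the
       * sequences of an alternative creates a private copy of the data.        */
      static void on_write(struct meta_info *copy, int writer_id) {
          copy->global = false;     /* global field invalidated                 */
          copy->owner  = writer_id; /* owner field set to the writing resource  */
      }

      /* Read rule: a memory request from resource `reader_id` may only consult
       * data that is globally visible and unowned, or data that it owns itself. */
      static bool is_visible(const struct meta_info *copy, int reader_id) {
          return (copy->global && copy->owner == NO_OWNER)
              || (!copy->global && copy->owner == reader_id);
      }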
  • the process steps are as follows for processing the parallelism termination instruction.
  • the termination information for this alternative is transmitted to the computing resource B involved in the parallel execution. If the computing resource B has not yet completed the execution of the sequence which was assigned to it, the computing resource A then awaits the notification of termination by the computing resource B.
  • The computing resource A then inspects the value of the condition evaluation (for example calculated during the execution of the distribution instruction, or else by relying on the status register of the computing resource) to determine whether the sequence that it has just executed corresponds to the sequence selected by the conditional selection.
  • If so, the computing resource A propagates a request to the memory hierarchy to make valid the global field present in the meta-information associated with each data item modified during the parallel execution, identifiable by the fact that the owner field is set to the identifier of the computing resource A.
  • The owner field is also invalidated during the processing of this request.
  • Otherwise, the computing resource A propagates a request to invalidate in a conventional manner the data modified during the parallel execution, identifiable by the fact that the owner field is positioned at the identifier of the computing resource A. This last field is also invalidated and the global field is reset. Furthermore, the pipeline is flushed and the memory area used by the stack of resource A is invalidated.
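  • Continuing the same illustrative model (assumed names; an additional valid flag is assumed here to stand for the conventional cache invalidation), the two outcomes of the parallelism termination instruction described above can be sketched as follows: the resource that executed the selected sequence makes its private copies globally visible, whereas the other resource invalidates its private copies.

      #include <stdbool.h>
      #include <stddef.h>

      #define NO_OWNER (-1)

      struct meta_info {           /* same meta-information as in the sketch above */
          bool global;
          bool valid;              /* assumed flag: copy usable in the hierarchy   */
          int  owner;
      };

      /* Executed by the resource whose sequence was selected: its private data
       * become visible to all computing resources.                              */
      static void publish_selected(struct meta_info *copies, size_t n, int my_id) {
          for (size_t i = 0; i < n; ++i)
              if (copies[i].owner == my_id) {
                  copies[i].global = true;      /* global field made valid   */
                  copies[i].owner  = NO_OWNER;  /* owner field invalidated   */
              }
      }

      /* Executed by the resource whose sequence was not selected: its private
       * data are invalidated.                                                   */
      static void invalidate_not_selected(struct meta_info *copies, size_t n, int my_id) {
          for (size_t i = 0; i < n; ++i)
              if (copies[i].owner == my_id) {
                  copies[i].valid  = false;     /* conventional invalidation          */
                  copies[i].owner  = NO_OWNER;  /* owner field also invalidated       */
                  copies[i].global = true;      /* global field reset to its default  */
              }
      }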
  • the EPSact counter is decremented and the computing resource which is not selected to continue the execution is removed from the set of used resources.
  • In the case where the execution continues on a computing resource other than the one having executed the selected sequence, the execution context of the selected sequence (all the volatile and non-volatile registers handled by the program, the stack pointer, the complete stack structure) must be transferred from the computing resource having executed this selected sequence. Furthermore, the data handled by the program and stored in the private levels of the memory hierarchy associated with the computing resource having executed the selected sequence must be propagated to the first level of the memory hierarchy shared between the two computing resources. Whatever the choice of computing resource, if the notification registers associated with the two computing resources used in the parallel execution which ends both notify first failures of an offload request or first attempts to exceed, only the additional notification fields associated with these events are validated on the computing resource selected to continue the program execution. Otherwise, each notification register associated with the computing resource selected to continue the execution of the program is updated with the information of the other computing resource used.
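  • The merging of notification registers described above can be sketched as follows (an illustrative interpretation using the structure assumed earlier): when both resources already recorded a first event, only the additional-notification fields are validated on the selected resource; otherwise the selected resource takes over the other resource's information.

      #include <stdbool.h>
      #include <stdint.h>

      struct notification_regs {              /* same fields as assumed earlier */
          uintptr_t offload_fail_pc;          /* 0 means: no first failure recorded */
          bool      offload_fail_more;
          uintptr_t exceed_attempt_pc;
          bool      exceed_attempt_more;
      };

      /* Merge the notifications of the non-selected resource (`other`) into those
       * of the resource selected to continue the program (`selected`).            */
      static void merge_notifications(struct notification_regs *selected,
                                      const struct notification_regs *other) {
          /* Offload-request failures. */
          if (selected->offload_fail_pc && other->offload_fail_pc)
              selected->offload_fail_more = true;      /* both had a first failure */
          else if (other->offload_fail_pc) {
              selected->offload_fail_pc    = other->offload_fail_pc;
              selected->offload_fail_more |= other->offload_fail_more;
          }
          /* Attempts to exceed the maximum authorized number EPSmax. */
          if (selected->exceed_attempt_pc && other->exceed_attempt_pc)
              selected->exceed_attempt_more = true;
          else if (other->exceed_attempt_pc) {
              selected->exceed_attempt_pc    = other->exceed_attempt_pc;
              selected->exceed_attempt_more |= other->exceed_attempt_more;
          }
      }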
  • the value of the instruction counter of the calculation resource retained to continue the execution of the program is positioned at the jump address specified in the parallelism termination instruction.
  • an end-of-parallel-execution interrupt is notified, in order, for example, to allow the resumption of the execution of other programs.
  • step C-3 can be anticipated at step C-2, to possibly reduce the additional cost of this notification by parallelizing its execution with the wait for the termination of the parallelism. To avoid any inconsistency in the values handled by the other sequences of the alternative, this anticipation must be carried out on data not used by these same sequences in the course of execution.
  • the method as previously described comprises a step of measuring the duration of execution of the program and a step of determining a WCET of the program.
  • the invention is not limited to the method as described above but also extends to a computer program product comprising program code instructions, in particular the previously described instructions for distributing sequences and for terminating parallelism, which, when the program is executed by a computer, lead the latter to implement this method.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Software Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Devices For Executing Special Programs (AREA)
  • Advance Control (AREA)
EP19758795.9A 2018-07-18 2019-07-15 Method for accelerating the execution of a single-path program by parallel execution of conditionally concurrent sequences Withdrawn EP3807757A1 (de)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
FR1856659A FR3084187B1 (fr) 2018-07-18 2018-07-18 Method for accelerating the execution of a single-path program by parallel execution of conditionally concurrent sequences
PCT/FR2019/051768 WO2020016511A1 (fr) 2018-07-18 2019-07-15 Method for accelerating the execution of a single-path program by parallel execution of conditionally concurrent sequences

Publications (1)

Publication Number Publication Date
EP3807757A1 (de) 2021-04-21

Family

ID=65031459

Family Applications (1)

Application Number Title Priority Date Filing Date
EP19758795.9A Withdrawn EP3807757A1 (de) 2018-07-18 2019-07-15 Verfahren zur beschleunigung der ausführung eines einzelpfadprogramms durch parallele ausführung von bedingt konkurrierenden sequenzen

Country Status (4)

Country Link
US (1) US20210271476A1 (de)
EP (1) EP3807757A1 (de)
FR (1) FR3084187B1 (de)
WO (1) WO2020016511A1 (de)

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9639371B2 (en) * 2013-01-29 2017-05-02 Advanced Micro Devices, Inc. Solution to divergent branches in a SIMD core using hardware pointers
FR3004274A1 (fr) * 2013-04-09 2014-10-10 Krono Safe Method for executing tasks in a critical real-time system
GB2505564B (en) * 2013-08-02 2015-01-28 Somnium Technologies Ltd Software development tool

Also Published As

Publication number Publication date
US20210271476A1 (en) 2021-09-02
WO2020016511A1 (fr) 2020-01-23
FR3084187A1 (fr) 2020-01-24
FR3084187B1 (fr) 2021-01-01

Similar Documents

Publication Publication Date Title
EP3129874B1 (de) System für verteiltes rechnen mit implementierung eines nichtspekulativen transaktionellen hardware-speichers und für zur verwendung davon für verteiltes rechnen
US8539486B2 (en) Transactional block conflict resolution based on the determination of executing threads in parallel or in serial mode
FR2792087A1 (fr) Procede d'amelioration des performances d'un systeme multiprocesseur comprenant une file d'attente de travaux et architecture de systeme pour la mise en oeuvre du procede
FR2881239A1 (fr) Procede de gestion d'acces a des ressources partagees dans un environnement multi-processeurs
FR2950714A1 (fr) Systeme et procede de gestion de l'execution entrelacee de fils d'instructions
FR2881242A1 (fr) Procede non intrusif de journalisation d'evements internes au sein d'un processus applicatif, et systeme mettant en oeuvre ce procede
FR2882449A1 (fr) Procede non intrusif de rejeu d'evenements internes au sein d'un processus applicatif, et systeme mettant en oeuvre ce procede
US20030135720A1 (en) Method and system using hardware assistance for instruction tracing with secondary set of interruption resources
TW201227520A (en) Virtual machine branching and parallel execution
EP1212678A2 (de) Verwaltungsprotokoll, verifikationsverfahren und transformierung eines ferngeladenen programmfragments und korrespondierende systeme
EP3286647A1 (de) Platzierung einer berechnungsaufgabe auf einem funktionell asymmetrischen prozessor
EP3295293B1 (de) Threadsichere sperrfreie gleichzeitige schreiboperation zur verwendung mit einer multithread-inline-protokollierung
EP0637798B1 (de) Verklemmungsanalyseverfahren in einem Betriebssystem
US9075726B2 (en) Conflict resolution of cache store and fetch requests
FR2881306A1 (fr) Procede de journalisation non intrusive d'evenements externes aupres d'un processus applicatif, et systeme mettant en oeuvre ce procede
CA2348069A1 (fr) Systeme et methode de gestion d'une architecture multi-ressources
FR2881308A1 (fr) Procede d'acceleration de la transmission de donnees de journalisation en environnement multi ordinateurs et systeme utilisant ce procede
EP2498184A1 (de) Vorrichtung zur Verbesserung der Fehlertoleranz eines Prozessors
US7644396B2 (en) Optimal program execution replay and breakpoints
EP3807757A1 (de) Verfahren zur beschleunigung der ausführung eines einzelpfadprogramms durch parallele ausführung von bedingt konkurrierenden sequenzen
FR2881309A1 (fr) Procede d'optimisation de la transmission de donnees de journalisation en environnement multi-ordinateurs et systeme mettant en oeuvre ce procede
FR2801693A1 (fr) Procedes et appareils pour detecter la presence eventuelle d'exceptions
WO2012038000A1 (fr) Procede de gestion de taches dans un microprocesseur ou un ensemble de microprocesseurs
FR2881244A1 (fr) Procede de comptage d'instructions pour journalisation et rejeu d'une sequence d'evenements deterministes
EP3131005A1 (de) In einem schienenfahrzeug einbaubare vorrichtung, die ein startprogramm mit einer oder mehreren startpartitionen umfasst, und assoziertes schienenfahrzeug und eisenbahnsystem

Legal Events

Date Code Title Description
STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: UNKNOWN

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE INTERNATIONAL PUBLICATION HAS BEEN MADE

PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: REQUEST FOR EXAMINATION WAS MADE

17P Request for examination filed

Effective date: 20210112

AK Designated contracting states

Kind code of ref document: A1

Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR

AX Request for extension of the european patent

Extension state: BA ME

DAV Request for validation of the european patent (deleted)
DAX Request for extension of the european patent (deleted)
STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: EXAMINATION IS IN PROGRESS

17Q First examination report despatched

Effective date: 20220530

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE APPLICATION IS DEEMED TO BE WITHDRAWN

18D Application deemed to be withdrawn

Effective date: 20240126