WO2022009741A1 - Electronic control device (電子制御装置) - Google Patents

Electronic control device (電子制御装置)

Info

Publication number
WO2022009741A1
Authority
WO
WIPO (PCT)
Prior art keywords
processing
memory area
control device
electronic control
memory
Prior art date
Application number
PCT/JP2021/024626
Other languages
English (en)
French (fr)
Japanese (ja)
Inventor
辰也 堀口
祐 石郷岡
敏史 大塚
一 芹沢
隆 村上
Original Assignee
日立Astemo株式会社
Priority date
Filing date
Publication date
Application filed by 日立Astemo株式会社
Publication of WO2022009741A1

Classifications

    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 12/00 Accessing, addressing or allocating within memory systems or architectures
    • G06F 12/02 Addressing or allocation; Relocation
    • G06F 15/00 Digital computers in general; Data processing equipment in general
    • G06F 15/16 Combinations of two or more digital computers each having at least an arithmetic unit, a program unit and a register, e.g. for a simultaneous processing of several programs
    • G06F 15/163 Interprocessor communication
    • G06F 15/167 Interprocessor communication using a common memory, e.g. mailbox
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46 Multiprogramming arrangements
    • G06F 9/52 Program synchronisation; Mutual exclusion, e.g. by means of semaphores
    • G06F 9/54 Interprogram communication

Definitions

  • the present invention relates to an electronic control device.
  • Parameters related to shared resources, such as bus width, memory size, and cache size, differ for each processor, so it is not simple to port development results from one processor to another, and the man-hours for design, development, and verification increase accordingly.
  • The LET (Logical Execution Time) paradigm, which is a time-synchronized approach to timing design.
  • AUTOSAR, which is a standard for in-vehicle software architecture.
  • This method is characterized by timing design centered on fixing memory access timing in the upstream design process; because each process keeps this timing at execution time, the memory state at the start and end of each process is constant. This prevents the growth of development and verification patterns caused by combinations of processor states and thus contributes to reducing man-hours.
  • The timing design result for one processor can be ported as-is to another processor, as long as that processor can satisfy the timing in terms of performance.
  • Prior Patent Document 1 shows a solution using a buffer as a method for reducing the communication overhead between cores in such a multi-core processor.
  • In that document, an implementation method for quantity management in units of a fixed packet size is defined on the premise of communication equipment; in an industrial control device, however, the amount of data differs for each data exchange between processes, and a high-speed data transfer method is required even for processing with such different memory sizes. Furthermore, in an industrial electronic control device, from the viewpoint of driving an actuator that operates in a fixed control cycle, each process or series of processes is assumed to operate in a plurality of different cycles, so the data transfer method must handle the case where the processes involved in data input/output operate in different cycles.
  • An object of the present invention is to provide an electronic control device capable of improving the efficiency of data transfer between applications in a time-synchronized design.
  • To achieve the above object, the present invention is an electronic control device comprising a processor including a plurality of cores that execute a plurality of processes operating at different cycles, and a memory including a plurality of memory areas accessible to each of the plurality of cores performing the plurality of processes, wherein the memory area accessible by a preceding process that writes data and a succeeding process that reads data is changed according to the progress of processing.
  • The processor searches for a writable memory area, i.e., one to which the latest processing result has not been written and which is not being read; if no writable memory area exists, the processor secures a new memory area and sets it as the write destination of the processing result, so that the result of the processing is passed from the preceding process to the succeeding process.
  • the cycle of processing B is shorter than the cycle of processing A.
  • A diagram illustrating the memory management table addition method immediately before time T_4. A flow diagram showing the middleware operation in the third embodiment of the present invention. A flow diagram showing the processing behavior at the time of an execution time constraint violation in the fourth embodiment of the present invention. A figure showing the method for continuing processing when the execution time constraint is violated in the fourth embodiment of the present invention. A flow diagram showing the middleware operation at the time of an execution time constraint violation in the fourth embodiment of the present invention.
  • FIG. 1 shows a configuration of a multi-core processor 1 mounted on an electronic control device for vehicle control and peripheral devices, which is the object of the present invention.
  • the multi-core processor 1 is a multi-core processor having a plurality of CPU cores 2.
  • a multi-stage cache configuration having a level 1 cache (L1 cache) 21 owned by each CPU core 2 and a level 2 cache (L2 cache) 22 shared by a plurality of cores is mainly used.
  • Processing efficiency here is a performance index measured by the time required to complete application processing. It therefore depends not only on the pre-specified execution order of the applications, but also on internal processor state (the status of the L2 cache 22 and the load on the internal bus 3) that changes dynamically with the operation of each application, as well as on the processing status and order of the applications and on changes in memory access timing.
  • When porting an application designed and verified on one multi-core processor 1 to another multi-core processor, the components of the multi-core processor 1 change: the operating speed and number of CPU cores 2, the operating speed and bus width of the bus 3, and the cache configuration (L1 cache 21, L2 cache 22, lower levels of the cache hierarchy such as an L3 cache) and its sizes. As a result, application processing efficiency, execution order, and memory access timing may change.
  • In the time-synchronized timing design, at the start of each processing section the values required for the operation are read from the shared memory area accessible from all processes into a local memory area accessible only by the process itself (described as Copy-in in the figure), the operation result is written back to the shared memory at the end of the processing section (Copy-out in the figure), and the operation is executed at an arbitrary timing between this read and this write (a minimal sketch follows below).
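  • As an illustrative sketch only (the buffer names, sizes, and data types below are assumptions and are not taken from this publication), the Copy-in / Copy-out pattern described above might be written in C roughly as follows.

```c
#include <string.h>

/* Illustrative local and shared buffers; sizes and names are assumptions. */
typedef struct { double input[4]; double output[4]; } LocalBuf;

static double shared_in[4];   /* shared memory area read by this process    */
static double shared_out[4];  /* shared memory area written by this process */

void process_section(void)
{
    LocalBuf local;

    /* Copy-in: read the required values from shared memory into local
       memory at the start of the processing section.                       */
    memcpy(local.input, shared_in, sizeof(local.input));

    /* Operation: executed at an arbitrary timing between the read and the
       write; a placeholder computation here.                               */
    for (int i = 0; i < 4; i++)
        local.output[i] = local.input[i] * 2.0;

    /* Copy-out: write the operation result back to shared memory at the
       end of the processing section.                                       */
    memcpy(shared_out, local.output, sizeof(shared_out));
}
```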
  • a time-synchronized timing design realization method using two buffers as shown in FIG. 3 is known.
  • The processing section in the figure indicates the span of time, in the time-synchronized timing design method, from the start of processing (reading data from the shared memory area into the local memory area) to its completion (writing data back in the opposite direction when the calculation is finished), and the period indicates the unit at which such a processing section repeats.
  • On the shared memory, the configuration consists of two memory areas: a memory area that process A accesses for writing and a memory area that process B accesses for reading.
  • A_i and B_i are symbols representing the i-th cycle executions of processes A and B, respectively.
  • the process may be preempted and restarted by a process (not shown) other than A and B.
  • Processes A_i and B_i each point to the buffer area on the opposite side of the one they used immediately before. Process A_i can therefore perform its own calculation using Res(B_{i-1}), the result of process B_{i-1}, and write its own calculation result Res(A_i) to that same area. Similarly, process B_i reads the result Res(A_{i-1}) of the previous cycle of process A and writes its own calculation result Res(B_i), so the calculation proceeds. Note that processes A_1 and B_1, executed for the first time in this figure, start their calculations without input data.
  • The operation result written by process A in the i-th cycle is referenced by process B in the next, (i+1)-th, cycle, and the calculation result written by process B in the i-th cycle is referenced by process A in the (i+1)-th cycle, so the calculation can continue without copying data.
  • Because these memory areas are held only between the two processes A and B, which have a data dependency and an order dependency, and the two processes always use buffer areas different from each other, each memory area can be regarded, during the processing sections of processes A and B, as a local memory area accessible only by that process.
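  • A minimal C sketch of the two-buffer exchange described above, under the assumption of a fixed area size and a simple cycle-parity rule for choosing the buffer; all names and the placeholder calculations are illustrative and not the publication's implementation.

```c
#define AREA_SIZE 16

static double area[2][AREA_SIZE];        /* shared memory areas 1 and 2     */

/* In cycle i, process A_i reads Res(B_{i-1}) from, and writes Res(A_i) to,
   one area, while B_i reads Res(A_{i-1}) from, and writes Res(B_i) to, the
   other area; each process switches to the opposite area every cycle.
   (For the very first cycle the buffers are undefined, matching the note
   that A_1 and B_1 start without input data.)                              */
void process_A(unsigned cycle)           /* executes A_i with i = cycle     */
{
    double *buf = area[cycle % 2];       /* holds Res(B_{i-1}) on entry     */

    for (int k = 0; k < AREA_SIZE; k++)
        buf[k] = buf[k] + 1.0;           /* placeholder: Res(A_i) overwrites */
}

void process_B(unsigned cycle)           /* executes B_i with i = cycle     */
{
    double *buf = area[(cycle + 1) % 2]; /* holds Res(A_{i-1}) on entry     */

    for (int k = 0; k < AREA_SIZE; k++)
        buf[k] = buf[k] * 0.5;           /* placeholder: Res(B_i) overwrites */
}
```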
  • the cycles of process A and process B are not always the same.
  • For example, the input of such processing is linked to a laser range sensor that operates every 50 ms and a vehicle speed sensor that operates every few milliseconds; processing is performed on these inputs and the mapping result is updated.
  • FIG. 4 shows an example of this operation when the cycles of process A and process B are different.
  • Here, the case where the processing section and the period match is described, omitting the description of swap timing; even if the two do not match, the arithmetic processing is realized by the same method.
  • the timing of memory access and the number of required buffer areas differ depending on the cycle of processes A and B.
  • the figure shows an example in which the period of A shown by the broken line (equal to the processing interval in this example) and the period of B shown by the alternate long and short dash line are different.
  • An example of proceeding with processing using the latest processing result is shown. That is, the processes A and B each detect the process of the other party that has ended before the start of its own cycle, and read the memory area.
  • This example shows an example in which the cycle of process A is longer than that of process B.
  • In this case, the result Res(A_i) of a certain process A_i may be read in the calculations of two or more executions of process B, i.e., of Res(B_j), Res(B_k), and so on.
  • In the example in the figure, the result Res(A_i) of a certain process A_i can be used in the calculations of up to two executions of process B.
  • The memory area to which the result Res(A_i) of a certain process A_i was written is destroyed by process B_j in the course of computing its operation result Res(B_j), so a separate memory area must be generated for process B_k to compute its own result. In this embodiment, it is therefore necessary to copy the calculation result to another memory area after the calculation of process A_i is completed.
  • the number of places to copy the calculation result is determined by the cycle of process A and process B.
  • For example, the result Res(A_2) of process A_2 in the figure can be used by process B_4 and process B_5.
  • The figure shows an example in which the copy process Cp(Res(A_2)), which copies Res(A_2), starts operating in the second half of process A_2, after the write to memory area 2 has started; for example, the copy destination memory area 4 may be secured at the same time that process A_2 secures memory area 2.
  • With this, the data copy overhead between processes A and B is reduced to a level similar to the case described above in which data is exchanged via two memory areas between processes A and B having the same cycle. This is because the data copy process Cp(Res(A_i)) is also performed during process A_i itself, so that, for example, when process B_5, a successor of process A_2, starts, the data is already present in its memory area.
  • As a means of realizing this memory management, a method of defining dedicated middleware can be considered.
  • With middleware, application developers can concentrate on application behavior without being aware of detailed memory management, and applications can also be freed from changing factors such as the OS and hardware in industrial control; this is most consistent with improving portability between different hardware (processors), which was the purpose of the time-synchronized timing design.
  • The middleware requires two functions: managing the data on the shared memory that is updated in each processing section of each process and notifying each process of the appropriate memory area at its start, and maintaining data transfer by creating a new memory area on the shared memory when the memory areas used for data transfer between processes run short. Each is described below.
  • FIG. 5 shows an example of a data management method on the shared memory.
  • This method has two requirements: preventing other processes from accessing a memory area in which one process is currently calculating, and handing each process, when it starts, a memory area in which appropriate input information has been written.
  • The appropriate input information here is the latest calculation result of the counterpart process that has completed by the start time of the process.
  • the former is managed, for example, as the occupancy status of the memory area for data management on the memory, and the latter is managed as the position of the latest calculation result stored in the memory area.
  • The latest calculation result is defined in both directions, i.e., from process A to process B and from process B to process A.
  • The middleware searches the latest-processing-result columns of the management table of FIG. 7 for the entry from process B to process A.
  • The latest processing result in this case is the calculation result of process B_1.
  • That area is the area to be used by process A_2. Therefore, a check mark (the dotted circle in FIG. 7) is placed in the memory area 2 entry of the occupancy-status column for process A in the management table, and the address of that memory area is passed to process A to start process A_2.
  • The check mark for memory area 3 is removed from the occupancy-status column for process B, and instead a check mark is placed in the area 3 entry of the latest-value column from process B to process A.
  • The memory management table is updated as shown in the figure. By updating the management table in this way, if another execution of process A is started immediately after time T_2, memory area 3 can be used to perform an operation using the latest processing result of process B (a sketch of such a table follows below).
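  • For illustration, the management table and the start-of-process lookup described above could be represented roughly as follows in C; the field names, fixed table size, and process identifiers are assumptions made for this sketch only.

```c
#include <stddef.h>
#include <stdbool.h>

#define MAX_AREAS 8

/* One row of the memory management table: who occupies the area and whether
   it holds the latest result for a given transfer direction.               */
typedef struct {
    void *addr;          /* start address of the memory area                */
    int   occupied_by;   /* -1 = free, otherwise id of the occupying process */
    bool  latest_b_to_a; /* holds the latest result of B (input for A)      */
    bool  latest_a_to_b; /* holds the latest result of A (input for B)      */
} AreaEntry;

static AreaEntry table[MAX_AREAS];

/* At the start of a process A execution: find the area holding the latest
   result of B, mark it occupied by A, and return its address.              */
void *acquire_input_for_A(int proc_id_A)
{
    for (size_t i = 0; i < MAX_AREAS; i++) {
        if (table[i].latest_b_to_a && table[i].occupied_by < 0) {
            table[i].occupied_by = proc_id_A;  /* check the occupancy column */
            return table[i].addr;
        }
    }
    return NULL;                               /* no suitable area found     */
}

/* At the end of a process B execution: release the occupancy and mark the
   area as holding the latest result from B to A.                           */
void release_output_of_B(size_t area_index)
{
    table[area_index].occupied_by   = -1;
    table[area_index].latest_b_to_a = true;
}
```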
  • The time T_0 shown in FIG. 10 is the timing at which process A starts writing its calculation result.
  • Here process A is defined to have a longer cycle than process B, and it is necessary to provide a plurality of memory areas for data transfer from process A to process B. Process A therefore needs to secure a plurality of memory areas, but since the only memory area secured at this timing is area 1, a new write destination must be secured through the middleware.
  • The middleware refers to the management table of FIG. 11; memory area 1 and memory area 3 are occupied by process A_1 and process B_2, respectively, and memory area 2 holds the latest value of process B.
  • Thus no memory area exists that can be newly allocated to process A_1. Therefore, a part of the shared memory (corresponding to the write size of process A_1) is newly secured as a memory area for data transfer between processes A and B, the occupancy state on the management table is set to A, and memory area 4 is created. By passing this memory area 4 to process A, the latest value of process A is written to both memory areas at the end of process A_1.
  • By providing the memory management method using middleware described above, the data copy overhead between processes A and B is reduced, just as in the case of data transfer using two memory areas between processes A and B having the same cycle.
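  • Continuing the table sketch above, the write-side path that secures a new memory area when no writable area exists (the creation of memory area 4 in this example) might look roughly like this; malloc stands in for carving a region out of the shared memory and is an assumption of the sketch.

```c
#include <stdlib.h>

/* Find a writable area (not occupied and not holding a needed latest value);
   if none exists, register a newly secured region, e.g. as memory area 4.  */
void *acquire_write_area(int writer_id, size_t write_size)
{
    for (size_t i = 0; i < MAX_AREAS; i++) {
        AreaEntry *e = &table[i];
        if (e->addr != NULL && e->occupied_by < 0 &&
            !e->latest_a_to_b && !e->latest_b_to_a) {
            e->occupied_by = writer_id;          /* reuse an existing area   */
            return e->addr;
        }
    }
    for (size_t i = 0; i < MAX_AREAS; i++) {     /* no writable area: add one */
        if (table[i].addr == NULL) {
            table[i].addr = malloc(write_size);  /* stands in for reserving a
                                                    region of shared memory  */
            table[i].occupied_by = writer_id;
            return table[i].addr;
        }
    }
    return NULL;                                  /* management table full    */
}
```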
  • There is an execution time constraint on each process, or on a series of processes, from the viewpoint of driving an actuator that operates on a fixed control cycle. If this constraint is violated, the actuator control command value may not be output, or a command value inappropriate for the sensor input value may be output. Handling of incomplete processing is therefore required for the actuator.
  • Examples include a control continuation method using another system in a redundant system configuration, a degenerate (fallback) control method, and the like.
  • In the middleware, since the scheduled end time of application processing is specified at design time, the above-mentioned incomplete-processing handling is executed when the processing has not completed by that time.
  • The middleware determines the necessity of executing the incomplete-processing handling, for example, by setting a flag when it starts the processing and clearing this flag when the processing completes; the middleware then checks the flag at the scheduled processing end time.
  • Since the middleware operates in a form aligned with the start time of each process as described above, it is started by, for example, a timer interrupt at the process start time; in other words, a timer interrupt is inserted at the start time of each of the plurality of processes. The middleware then searches the management table for the storage destination of the latest processing result (S100) and updates the occupancy-status entry of the management table to secure the area (S101).
  • the application process is started (S102).
  • After S102, when it is necessary to write data to a plurality of memory areas because of the length of the application processing cycle and such a write-destination area has not yet been secured, a new memory area is secured (S103).
  • Otherwise, S103 and S104 are skipped and S105 is executed. Then, at the end time of the application processing, a timer interrupt is inserted just as at the start time; in other words, a timer interrupt is inserted at the end time of each of the plurality of processes.
  • the latest value status of the memory management table is updated (S107), and the memory occupancy status is released (S108). This completes the middleware processing for one application processing.
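  • The per-process middleware sequence (S100 to S108) might be organized roughly as in the following C sketch; the helper functions are only declared and are hypothetical names, and the mapping of statements to step numbers is an approximation of the flow described above.

```c
#include <stdbool.h>

#define MAX_PROCS 8

/* Assumed helpers for this sketch; none of these names come from the
   publication.                                                             */
void *search_latest_result(int proc_id);                 /* S100            */
void  occupy_area(void *area, int proc_id);               /* S101            */
void  run_application(int proc_id, void *input);          /* S102            */
bool  needs_second_write_area(int proc_id);
void  secure_new_write_area(int proc_id);                 /* S103            */
void  update_latest_value(int proc_id);                   /* S107            */
void  release_occupancy(int proc_id);                     /* S108            */

static bool completed[MAX_PROCS];   /* completion flag checked at end time  */

/* Invoked by a timer interrupt at the scheduled start time of a process.   */
void middleware_on_start(int proc_id)
{
    void *in = search_latest_result(proc_id);             /* S100            */
    occupy_area(in, proc_id);                              /* S101            */
    completed[proc_id] = false;                            /* for deadline check */
    run_application(proc_id, in);                          /* S102            */
    if (needs_second_write_area(proc_id))                  /* long-cycle writer */
        secure_new_write_area(proc_id);                    /* S103; else skipped */
}

/* Invoked by a timer interrupt at the scheduled end time of a process.     */
void middleware_on_end(int proc_id)
{
    update_latest_value(proc_id);                          /* S107            */
    release_occupancy(proc_id);                            /* S108            */
    completed[proc_id] = true;
}
```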
  • As described above, the electronic control device of the present embodiment includes a processor (multi-core processor 1) including a plurality of cores (CPU cores 2) that execute a plurality of processes operating at different cycles, and a memory (shared memory) including a plurality of memory areas accessible to each of the cores (CPU cores 2) that execute the plurality of processes, and the memory area accessible by a preceding process that writes data and a succeeding process that reads data is changed according to the progress of processing.
  • The processor (multi-core processor 1) searches for a writable memory area to which the latest processing result has not been written and which is not being read, secures a new memory area when no writable memory area exists, and, by setting the new memory area as the write destination of the processing result, passes the processing result from the preceding process to the succeeding process.
  • the electronic control device includes a table that stores the occupancy status for each memory area and the presence / absence of the latest value of the processing result.
  • The processor determines that no writable memory area exists when, for every memory area in this table, either the occupancy status indicates that the area is occupied or the latest-value indicator shows that a latest value is stored.
  • Example 2 Hereinafter, the second embodiment according to the present invention will be described with reference to the drawings.
  • The assumed CPU configuration (FIG. 1) and the time-synchronized timing design method to be applied (FIG. 2) conform to the first embodiment, but this embodiment describes an application example of the present invention in the case where there is a one-way data dependency between processes A and B, that is, where process B executes using the execution result of process A but the reverse does not occur, focusing mainly on the differences from the first embodiment.
  • FIG. 13 shows a processing configuration as an example of the present invention.
  • the memory area is composed of three different memory areas (areas 1 to 3).
  • Since process B is configured to read the calculation result of process A, in the shared-memory-area description in the figure, process A performs only writes of its own calculation result and process B performs only reads of the operation result of process A.
  • The same memory configuration can be adopted even when the cycle of process A is much shorter than the cycle of process B. In the case where the cycle of process B is more than twice the cycle of process A, the processing result is overwritten into the memory area that does not hold the latest value in the memory management table shown in Example 1, which eliminates the need to add further shared memory areas.
  • Process B_2 only needs to read the operation result Res(A_2) of process A_2. Therefore, as shown in the figure, the operation result Res(A_3) of process A_3 is allowed to overwrite the operation result Res(A_1) of process A_1 in memory area 2.
  • That is, with the above configuration, only three shared memory areas are required, as in this embodiment.
  • FIG. 16 shows a processing configuration as an example of the present invention.
  • the process A is configured to perform only writing and the process B is configured to perform only reading, but a case where the processing section of A and the period of A do not match will be described as an example.
  • the processing interval and the period do not necessarily have to match, so that such a configuration can be adopted.
  • When process A_4 completes before the calculation result Res(A_3) of process A_3 has been read out, process B_4 can start its operation using the result Res(A_4) of process A_4.
  • On the other hand, the utilization rate of the shared memory area is lowered and memory efficiency deteriorates.
  • Instead, the minimum required two memory areas are configured on the shared memory (as in the same-cycle case), a third memory area is dynamically allocated from the shared memory as needed and released when it is determined to be unnecessary; the spare area can then be shared among the processing configurations of the entire control device, further improving memory efficiency.
  • In other words, the memory is composed of two different memory areas, and the processor (multi-core processor 1) secures a new memory area when no writable memory area exists. After that, under the condition that, for example, three or more memory addresses are managed, the processor (multi-core processor 1) releases a memory address when it determines that the address is neither the write destination of the latest processing result nor being read.
  • A method of realizing the above configuration by memory management using middleware is shown below.
  • the memory area management process is realized by the processor (multi-core processor 1) executing dedicated middleware.
  • the middleware processing described in the present embodiment may be performed inside the OS or each processing.
  • The memory state immediately after time T_3 is as shown in FIG. 19: memory area 2 is occupied by B and holds the latest processing result, and the spare area is blank. In this way, upon completion of process A_4, the spare area can be released when it is neither occupied nor holding the latest value. When the cycle of process A is longer than that of process B, the latest value is no longer held there, and the area can be released once process A no longer needs to occupy the spare area.
  • In the middleware processing, all cases can be handled by adopting a configuration that determines at the start of processing whether memory can be released (when the cycle of process A is shorter than that of process B, the overhead of the extra area persists until the next process A starts).
  • FIG. 20 shows such a middleware behavior flow.
  • The basic configuration is the same as that of the first embodiment, but since process A performs only the write processing and process B performs only the read processing, the flow differs slightly according to this distinction. Therefore, when a process starts, a conditional branch between read processing and write processing is entered; after that, the middleware processing of the first embodiment is followed. What is added in this example is the case where a writable area already exists on the write-processing side: at this time, if the number of managed memory areas is three or more and the spare area neither holds the latest value nor is being read, it can be released, and the memory area is deleted from the management table (S109).
  • this embodiment can be realized even with a configuration of two memory areas that are always allocated on the management table and one memory area that is dynamically allocated / released as needed.
  • The release condition for a managed memory area is that three or more addresses are managed and that the area neither holds the latest value nor is being read; when this holds, the address can be deleted from the management table.
  • In that case, the S109 process changes to searching for and deleting such a memory area (a sketch of the release test follows below).
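  • A small sketch of the release test added in S109, reusing the table layout assumed in the earlier sketch: an area is removed from the management table only when at least three addresses are managed and the candidate neither holds a latest value nor is occupied. The use of free() is an assumption standing in for returning the region to the shared memory.

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdlib.h>

/* Reuses the AreaEntry table sketched earlier. Returns true if the spare
   area at `idx` was released and removed from the management table.        */
bool try_release_spare(size_t idx, size_t managed_count)
{
    AreaEntry *e = &table[idx];

    if (managed_count < 3)                    /* keep the minimum two areas  */
        return false;
    if (e->occupied_by >= 0)                  /* currently being read/written */
        return false;
    if (e->latest_a_to_b || e->latest_b_to_a) /* still holds a latest value  */
        return false;

    free(e->addr);                            /* return region to shared memory */
    e->addr = NULL;                           /* S109: delete from the table */
    return true;
}
```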
  • the industrial electronic control device is configured to calculate the control command value based on the input of sensor information and control the actuator.
  • In order to correctly control an actuator that operates periodically, there is generally an execution time constraint between the sensor input and the output of the control command value.
  • As described above, software behavior across multiple cores can easily change according to the processor state at execution time, so that, for example, the execution time fluctuates.
  • Such fluctuations in execution time are dealt with by margin design at the time of design, but the completion of processing is not always guaranteed due to strict execution time restrictions and competition of various resources. Therefore, it is necessary for the middleware to confirm the completion of processing and to take measures when it is not completed.
  • FIG. 21 shows an example in which process A_2, the second execution of process A in the figure, could not complete within the specified processing section because of execution time fluctuation.
  • In this case, the middleware stops process A_2 and does not update the memory management table, so that process B_4 and process B_5, which would have used the operation result Res(A_2) of process A_2, are controlled so as not to use a value whose calculation has not completed.
  • These processes instead continue control using the operation result Res(A_1) of process A_1.
  • In other words, the processor that executes the middleware performs incompletion handling that forcibly stops a process when it detects, at the end time of each of the plurality of processes, that the process has not completed.
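  • A hedged sketch of this first countermeasure: a flag set at process start and cleared on completion is examined by a timer interrupt at the scheduled end time, and an unfinished process is stopped without publishing its half-written result. The cooperative abort flag shown is an assumption of the sketch; an actual ECU might cancel the task through the OS instead.

```c
#include <stdbool.h>

#define MAX_PROCS 8

static volatile bool in_progress[MAX_PROCS]; /* set at start, cleared at end */
static volatile bool abort_req[MAX_PROCS];   /* polled by the application    */

void on_process_start(int id) { in_progress[id] = true;  abort_req[id] = false; }
void on_process_done(int id)  { in_progress[id] = false; }

/* Timer interrupt at the scheduled end time of process `id`.               */
void on_deadline(int id)
{
    if (in_progress[id]) {
        abort_req[id] = true;      /* forcibly stop the unfinished process   */
        /* Do NOT update the memory management table: successors keep using
           the previous result, e.g. Res(A_1) instead of the unfinished
           Res(A_2).                                                         */
    }
}
```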
  • Another method in such a case is to extend the execution time of process A by using the design-time margin of the subsequent process that performs calculations using the result of the preceding process (process B with respect to process A), and then to pass the latest processing result to the subsequent process B.
  • In other words, when the processor that executes the middleware detects at the end time of one of the plurality of processes that the process has not completed, it delays the start time of the subsequent process that performs a calculation using that process's result by a grace time specified at design time, so that the subsequent process continues its calculation using the latest calculation result.
  • Both process B_4, which is the successor of process A_2, and process A_3, whose own processing time may be affected by the delay of process A_2 depending on the number of cores, have a margin time, i.e., a difference between the length of the processing section and the execution time of the process. This margin is normally used to absorb the execution time fluctuation on the multi-core described above, but since the worst-case execution time assumed at design time does not always occur, the same margin time can also be used here.
  • For example, the margin is designed as the worst-case execution time + 20%, and 10% of this margin can be applied to such a delay of the preceding process.
  • Such a time is called a delay absorption time (FIG. 22).
  • The affected processes are, for example, process B_4 and, in a two-core configuration, process A_3.
  • The middleware acquires the end time of the latest delay absorption time (time T_4 in the figure) from task information or the like and allows process A_2 to continue until that time. If process A_2 completes by that time, the same termination processing as the memory management described in Example 3 is performed; if process A_2 has not completed by that time, the same forced termination processing as in the first method described above is taken.
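  • Assuming the completion flag and the end of the delay absorption time (time T_4) are known from task information, the decision made at a deadline could be sketched as follows; the enum and all names are illustrative only.

```c
#include <stdbool.h>
#include <stdint.h>

typedef uint32_t time_us;

typedef enum { ACTION_PUBLISH, ACTION_WAIT_UNTIL, ACTION_FORCE_STOP } Action;

/* Decide what to do for a process when its deadline fires. `done` is the
   completion flag, `now` the current time, and `grace_end` the end of the
   delay absorption time (time T_4 in the figure, taken from task info).    */
Action on_deadline_with_grace(bool done, time_us now, time_us grace_end)
{
    if (done)
        return ACTION_PUBLISH;       /* normal end: update table, release    */
    if (now < grace_end)
        return ACTION_WAIT_UNTIL;    /* keep running until grace_end; the
                                        successor (e.g. B_4) starts late by
                                        at most the delay absorption time    */
    return ACTION_FORCE_STOP;        /* still unfinished at T_4: stop it as
                                        in the first countermeasure          */
}
```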
  • FIG. 23 shows a flow chart relating to the middleware behavior in this embodiment.
  • the present invention can comply with the real-time constraints required for industrial control devices.
  • With the second countermeasure shown above, even if the real-time constraint is violated, processing can be continued, or continued using the latest processing result, whenever the violation can be absorbed by the margin time available during execution; this makes it possible to continue the control calculation of the industrial control device and to improve its accuracy.
  • the present invention is not limited to the above-described embodiment, but includes various modifications.
  • the above-mentioned examples have been described in detail in order to explain the present invention in an easy-to-understand manner, and are not necessarily limited to those having all the described configurations.
  • it is possible to replace a part of the configuration of one embodiment with the configuration of another embodiment and it is also possible to add the configuration of another embodiment to the configuration of one embodiment.
  • each of the above configurations, functions, etc. may be realized by hardware, for example, by designing a part or all of them with an integrated circuit. Further, each of the above configurations, functions, and the like may be realized by software by the processor interpreting and executing a program that realizes each function. Information such as programs, tables, and files that realize each function can be placed in a memory, a recording device such as a hard disk or SSD (Solid State Drive), or a recording medium such as an IC card, SD card, or DVD.
  • the embodiment of the present invention may have the following aspects.
  • An electronic control device having a processor including a plurality of cores, wherein the processor executes a plurality of processes operating in different cycles, the device has a plurality of memory areas accessible to each of the plurality of cores performing the plurality of processes, and the memory area accessible by a preceding process that writes data and a succeeding process that reads data is changed according to the progress of processing; wherein the processor searches for a writable memory area to which the latest result of the processing has not been written and which is not being read, and, if no writable memory area exists, secures a new memory area and sets the new memory area as the write destination of the result of the processing, whereby the result of the processing is passed from the preceding process to the succeeding process.
  • 1: Multi-core processor, 2: CPU core, 21: L1 cache, 22: L2 cache, 3: Internal bus, 4: External memory, 5: Sensor

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Software Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Hardware Design (AREA)
  • Multi Processors (AREA)
  • Memory System (AREA)
PCT/JP2021/024626 2020-07-07 2021-06-29 Electronic control device WO2022009741A1 (ja)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2020117161A JP7425685B2 (ja) 2020-07-07 2020-07-07 Electronic control device
JP2020-117161 2020-07-07

Publications (1)

Publication Number Publication Date
WO2022009741A1 true WO2022009741A1 (ja) 2022-01-13

Family

ID=79553102

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2021/024626 WO2022009741A1 (ja) 2020-07-07 2021-06-29 Electronic control device

Country Status (2)

Country Link
JP (1) JP7425685B2
WO (1) WO2022009741A1


Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2023161698A (ja) * 2022-04-26 2023-11-08 日立Astemo株式会社 Electronic control device


Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2010244096A (ja) * 2009-04-01 2010-10-28 Seiko Epson Corp Data processing device, printing system, and program
WO2010119932A1 (ja) * 2009-04-17 2010-10-21 日本電気株式会社 Multiprocessor system, memory management method in multiprocessor system, and communication program

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115878549A (zh) * 2023-03-03 2023-03-31 上海聪链信息科技有限公司 Inter-core communication system
WO2025022521A1 (ja) * 2023-07-24 2025-01-30 日立Astemo株式会社 In-vehicle control device

Also Published As

Publication number Publication date
JP2022014679A (ja) 2022-01-20
JP7425685B2 (ja) 2024-01-31

Similar Documents

Publication Publication Date Title
WO2022009741A1 (ja) Electronic control device
USRE48736E1 (en) Memory system having high data transfer efficiency and host controller
US7890725B2 (en) Bufferless transactional memory with runahead execution
US7945741B2 (en) Reservation required transactions
CN105094084B (zh) Service and system for supporting coherent data access on a multi-core controller
GB2348306A (en) Batch processing of tasks in data processing systems
JPWO2010097925A1 (ja) Information processing device
CN118377637A (zh) Method, apparatus, device, and storage medium for reducing redundant cache coherence operations
US20060149940A1 (en) Implementation to save and restore processor registers on a context switch
CN113946445A (zh) ASIC-based multi-thread module and multi-thread control method
CN114741036A (zh) Method for log management under a heterogeneous multi-core processor
CN114780248A (zh) Resource access method, apparatus, computer device, and storage medium
JP5999216B2 (ja) Data processing device
US20030014558A1 (en) Batch interrupts handling device, virtual shared memory and multiple concurrent processing device
EP0550976B1 (en) Memory accessing device using address pipeline
JP2011138401A (ja) Processor system, processor system control method, and control circuit
JP4755232B2 (ja) Compiler
JP2004326633A (ja) Hierarchical memory system
JP2021060758A (ja) Vehicle control device
JP2022092692A (ja) Electronic control device and task execution control method
JP2003131893A (ja) Arithmetic processing system, task control method on a computer system, and storage medium
JP2025126606A (ja) Electronic control device, power supply management method, and power supply management program
KR0181487B1 (ko) Program driving apparatus and method using buffer RAM
CN119668646A (zh) Seamless OTA real-time flashing method, electronic control unit, computer device, and medium
JPS59163647A (ja) Task management method

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21838271

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 21838271

Country of ref document: EP

Kind code of ref document: A1