CN113835855A - Interrupt system-based multi-task access method, processor and task access system - Google Patents

Interrupt system-based multi-task access method, processor and task access system

Info

Publication number
CN113835855A
Authority
CN
China
Prior art keywords
task
interrupt
target
storage space
data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111047691.7A
Other languages
Chinese (zh)
Inventor
张冬
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Autel Intelligent Automobile Corp Ltd
Original Assignee
Autel Intelligent Automobile Corp Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Autel Intelligent Automobile Corp Ltd filed Critical Autel Intelligent Automobile Corp Ltd
Priority to CN202111047691.7A
Publication of CN113835855A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46 Multiprogramming arrangements
    • G06F 9/48 Program initiating; Program switching, e.g. by interrupt
    • G06F 9/4806 Task transfer initiation or dispatching
    • G06F 9/4812 Task transfer initiation or dispatching by interrupt, e.g. masked
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 5/00 Methods or arrangements for data conversion without changing the order or content of the data handled
    • G06F 5/06 Methods or arrangements for data conversion without changing the order or content of the data handled for changing the speed of data flow, i.e. speed regularising or timing, e.g. delay lines, FIFO buffers; over- or underrun control therefor
    • G06F 5/065 Partitioned buffers, e.g. allowing multiple independent queues, bidirectional FIFO's

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Software Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Bus Control (AREA)

Abstract

The invention relates to the field of computer technology and discloses an interrupt-system-based multitask access method, a processor, and a task access system. The method comprises the following steps: responding to an interrupt request issued by the interrupt system upon being triggered by a target task, with the interrupt enable flag of the processor kept in a non-off state; determining the interrupt priority of the target task according to the interrupt request; determining a target FIFO storage space according to that interrupt priority; and accessing the target FIFO storage space under the condition that a data loss prevention model is satisfied. On one hand, the embodiment can read/write the target FIFO storage space without turning off interrupts, avoiding frequent switching between the interrupt-on and interrupt-off modes, which reduces the difficulty of interrupt management and improves both the efficiency of accessing the FIFO storage space and the real-time performance of the system. On the other hand, because interrupts need not be turned off and no software blocking is used, data loss is avoided.

Description

Interrupt system-based multi-task access method, processor and task access system
Technical Field
The invention relates to the technical field of computers, in particular to a multitask access method based on an interrupt system, a processor and a task access system.
Background
With the development of autonomous driving technology, more and more application devices are used in automobiles, and it is increasingly common for a plurality of application devices to share one device. In addition, automobiles place ever higher real-time requirements on software. To satisfy both situations simultaneously, the full-interrupt system architecture is applied to the above scenario more and more frequently.
Generally, when multiple tasks are input to a vehicle system, the system must drive the full-interrupt architecture to switch frequently between the interrupt-on and interrupt-off modes. As a result, the CPU spends more time performing interrupt switching, which reduces the real-time performance of the processing system.
Disclosure of Invention
An object of the embodiments of the present invention is to provide an interrupt system-based multitask access method, a processor and a task access system, which are used to solve the technical defects existing in the prior art.
In a first aspect, an embodiment of the present invention provides a multitask access method based on an interrupt system, which is applied to a processor, and the method includes:
responding to an interrupt request sent by an interrupt system triggered by a target task, wherein an interrupt permission flag of the processor is kept in a non-off state;
determining the interrupt priority of the target task according to the interrupt request;
determining a target FIFO storage space according to the interrupt priority of the target task;
and accessing the target FIFO storage space under the condition of meeting the data loss prevention model.
Optionally, the accessing the target FIFO storage space under the condition that the data loss prevention model is satisfied includes:
determining a data throughput rate of the target FIFO storage space;
and under the condition of meeting the data loss prevention model, accessing the target FIFO storage space according to the data throughput rate.
Optionally, the accessing the target FIFO storage space according to the data throughput rate under the satisfaction of the data loss prevention model includes:
writing the data of the target task into the target FIFO memory space according to the data throughput rate under the condition of meeting the data loss prevention model, and/or,
and reading the data of the target task in the target FIFO storage space according to the data throughput rate under the condition of meeting the data loss prevention model.
Optionally, satisfying the data loss prevention model comprises:
β(m) ≥ η(m) + Σ_{a=1}^{n−m} λ(m+a)
wherein β(m) is the minimum interrupt time interval of the mth task, λ(i) is the time for the ith task to access the FIFO storage space with the maximum amount of data belonging to the ith task, η(m) is the system running time of the mth task, the interrupt priority of the mth task is lower than that of the (m+a)th task, and a is any integer in (0, n−m].
Optionally, the determining a target FIFO storage space according to the interrupt priority of the target task includes:
determining a target address space according to the interrupt priority of the target task;
and selecting the FIFO storage space mapped by the target address space as a target FIFO storage space.
Optionally, tasks of different interrupt priorities correspond to different FIFO memory spaces.
Optionally, tasks of the same interrupt priority share the same FIFO memory space.
Optionally, the determining, according to the interrupt request, the interrupt priority of the target task includes:
extracting an interrupt type code of the target task from the interrupt request;
and determining the interrupt priority corresponding to the interrupt type code of the target task.
In a second aspect, an embodiment of the present invention provides a processor, configured to execute the above-mentioned interrupt system-based multitask access method.
In a third aspect, an embodiment of the present invention provides a task access system, including:
an interrupt system;
the processor described above; and
a memory, wherein the memory, the processor and the interrupt system are connected through an address bus.
In the interrupt-system-based multitask access method provided by the embodiment of the invention, the processor responds to an interrupt request sent by the interrupt system upon being triggered by a target task, with the interrupt enable flag of the processor kept in a non-off state; determines the interrupt priority of the target task according to the interrupt request; determines a target FIFO storage space according to that interrupt priority; and accesses the target FIFO storage space under the condition that a data loss prevention model is satisfied. Therefore, on one hand, the embodiment can read/write the target FIFO storage space without turning off interrupts, avoiding frequent switching between the interrupt-on and interrupt-off modes, which reduces the difficulty of interrupt management and improves both the efficiency of accessing the FIFO storage space and the real-time performance of the system. On the other hand, because interrupts need not be turned off and no software blocking is used, data loss is avoided.
Drawings
One or more embodiments are illustrated by way of example in the accompanying drawings, in which like reference numerals denote similar elements; the figures are not to scale unless otherwise specified.
Fig. 1 is a schematic structural diagram of a task access system according to an embodiment of the present invention;
fig. 2 is a schematic diagram illustrating a state of a memory divided into a plurality of FIFO storage spaces according to an embodiment of the present invention;
FIG. 3 is a flowchart illustrating a multitask access method based on an interrupt system according to an embodiment of the present invention;
fig. 4 is a schematic flow chart of S33 shown in fig. 3;
fig. 5 is a schematic diagram illustrating a state of the memory divided into a plurality of FIFO storage spaces according to the embodiment of the present invention, wherein interrupt priorities of a 3 rd task, a 4 th task, and a 5 th task are the same;
fig. 6 is a schematic flow chart of S34 shown in fig. 3;
fig. 7 is a flowchart illustrating an interrupt operation mechanism according to an embodiment of the present invention;
fig. 8 is a flowchart illustrating an interrupt operation mechanism according to another embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
It should be noted that, provided they do not conflict, the various features of the embodiments of the invention may be combined with one another within the protection scope of the invention. Additionally, although functional modules are divided in the apparatus schematics and logical sequences are shown in the flowcharts, in some cases the steps shown or described may be performed in an order different from the module division in the apparatus or the sequence in the flowcharts. The terms "first", "second", "third", and the like used in the present invention do not limit data or execution order; they merely distinguish items that are identical or similar in function and effect.
The embodiment of the invention provides a task access system, wherein the task access system can be applied to any suitable application scene, such as an automatic driving vehicle-mounted system, a mobile phone, an unmanned aerial vehicle or aviation equipment.
Referring to fig. 1, the task access system includes an interrupt system 11, a processor 12 and a memory 13, and the memory 13, the processor 12 and the interrupt system 11 are connected by an address bus 14.
The interrupt system 11 is configured to be electrically connected to a plurality of external devices and can simultaneously receive and process the task requests they issue. As shown in fig. 1, the 0th external device 150, the 1st external device 151, the 2nd external device 152, the 3rd external device 153, the 4th external device 154, the 5th external device 155 and the 6th external device 156 can all send task requests to the interrupt system 11, and the interrupt priorities of the tasks issued by these devices increase in that order.
For example, the 2 nd external device 152 sends a task request to the interrupt system 11, and the interrupt system 11 latches the task request. Then, the interrupt system 11 determines whether the interrupt priority of the 2 nd task sent by the 2 nd external device 152 is higher than the interrupt priority of the current task according to the task request, if so, the interrupt system 11 sends an interrupt request to the processor 12, and the processor 12 executes corresponding logic according to the interrupt request. If not, the processor 12 continues to execute the current task. The current task is a task being executed by the processor 12, and may be a task of any one of the above external devices.
For another example, suppose the current task is the 2nd task sent by the 2nd external device 152, and the 3rd external device 153, the 4th external device 154 and the 5th external device 155 simultaneously send task requests to the interrupt system 11. The interrupt system 11 arbitrates among these requests and selects the one with the highest interrupt priority. Since the interrupt priority of the 5th task sent by the 5th external device 155 is the highest, the interrupt system 11 encapsulates the interrupt type code of the 5th task in an interrupt request and sends the interrupt request to the processor 12. The processor 12 interrupts the execution of the 2nd task and switches to the 5th task based on the interrupt type code of the interrupt request.
In some embodiments, the external device may be any type of electronic device, such as various sensors of an automobile, for example, a lidar, a camera, an infrared sensor, and the like.
The processor 12 is configured to write data of the task into the memory 13 or read data of the task from the memory 13 according to the task. During the interrupt process, the interrupt enable flag of the processor 12 is always kept in a non-off state, that is, in an interrupt enable state.
Generally, a plurality of external devices share the same memory. When several of them need to access the memory 13, for example to read data from or write data to it, the conventional method must ensure that the task of each external device can access the memory smoothly and that the data read/written are not confused. It therefore sets the interrupt enable flag of the processor 12 to the off state while executing the current task, or uses software blocking, so that the current task can read/write the memory without being interrupted by other tasks. After the current task finishes, the interrupt enable flag of the processor 12 is set back to the non-off state, and the task with the highest interrupt priority among the pending tasks is selected for execution.
However, when designers plan the bandwidth resources of the CPU, a fixed bandwidth is allocated to the memory; for example, of a total processor clock frequency of 1 GHz, a fixed 200 MHz is given to the memory. Furthermore, since the memory is accessible to a plurality of external devices, the designer must apportion that 200 MHz among them so that every external device can access the memory and the clock resource is not skewed excessively toward any particular device. For example, the data throughput rate of task 0 is 80 k/ms, that of task 1 is 40 k/ms, that of task 2 is 30 k/ms, that of task 3 is 20 k/ms, that of task 4 is 15 k/ms, that of task 5 is 10 k/ms, and that of task 6 is 5 k/ms.
On the premise that the task of each external device is configured with a given data throughput rate, if, following the conventional method, the interrupt enable flag of the processor 12 is set to the off state during execution of each current task to avoid interruption by a task with a higher interrupt priority, or such a task is held off by software blocking, data loss easily occurs; this is especially prominent when the data volume of a certain task is large.
In addition, as mentioned above, in order to respond to the next task, the conventional method needs to switch frequently between the interrupt-on and interrupt-off modes, which also increases the difficulty of interrupt management. A solution is therefore proposed herein and described in detail below.
The memory 13 is used for providing data writing or reading, and each external device can write data into the memory 13 or read data from the memory 13 under the control of the processor 12.
Referring to fig. 2, the memory 13 may be divided into a data buffer area, and the data buffer area is divided into a plurality of FIFO storage spaces, for example one per external device. As shown in fig. 2, the data buffer area includes the 0th through 6th FIFO storage spaces, each corresponding to an address space:
the 0th FIFO storage space: 0000H-0010H;
the 1st FIFO storage space: 0011H-0100H;
the 2nd FIFO storage space: 0101H-0200H;
the 3rd FIFO storage space: 0201H-0300H;
the 4th FIFO storage space: 0301H-0400H;
the 5th FIFO storage space: 0401H-0500H;
the 6th FIFO storage space: 0501H-0600H.
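As an illustrative sketch only, the priority-to-address-space mapping can be held in a small lookup table. The hexadecimal ranges are taken from the description above; the C type and function names are hypothetical, not from the patent:

```c
/* Address ranges copied from the description (hexadecimal, inclusive);
 * the struct and function names are illustrative. */
typedef struct {
    unsigned base, limit;
} region_t;

static const region_t fifo_region[7] = {
    {0x0000, 0x0010},  /* 0th FIFO storage space */
    {0x0011, 0x0100},  /* 1st */
    {0x0101, 0x0200},  /* 2nd */
    {0x0201, 0x0300},  /* 3rd */
    {0x0301, 0x0400},  /* 4th */
    {0x0401, 0x0500},  /* 5th */
    {0x0501, 0x0600},  /* 6th */
};

/* The nth interrupt priority selects the nth address space, and the
 * FIFO mapped by that address space is the target FIFO. */
static int region_for_priority(int prio, unsigned *base, unsigned *limit) {
    if (prio < 0 || prio > 6)
        return -1;
    *base = fifo_region[prio].base;
    *limit = fifo_region[prio].limit;
    return 0;
}
```

A table keeps the priority-to-FIFO binding constant-time and makes it trivial to reject an out-of-range priority.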
The 0 th task of the 0 th external device corresponds to the 0 th FIFO storage space, and subsequently, the data of the 0 th task can be written into the 0 th FIFO storage space or the data of the 0 th task can be read from the 0 th FIFO storage space.
Similarly, the 1 st task of the 1 st external device corresponds to the 1 st FIFO storage space, and subsequently, the data of the 1 st task may be written into the 1 st FIFO storage space, or the data of the 1 st task may be read from the 1 st FIFO storage space.
The 2 nd task of the 2 nd external device corresponds to the 2 nd FIFO storage space, and the data of the 2 nd task can be written into the 2 nd FIFO storage space subsequently, or the data of the 2 nd task can be read from the 2 nd FIFO storage space. The 3 rd task of the 3 rd external device corresponds to the 3 rd FIFO storage space, and subsequently, the data of the 3 rd task can be written into the 3 rd FIFO storage space, or the data of the 3 rd task can be read from the 3 rd FIFO storage space, and so on.
As shown in fig. 2, all FIFO storage spaces share the same data output port, output.
It can be understood that, since the interrupt enable flag of the processor is maintained in a non-off state, even if the current task is the 1 st task and is interrupted by the 2 nd task as the visiting task, the data of the 2 nd task is stored in the 2 nd FIFO storage space, but not stored at the data interruption position of the 1 st task (i.e. the 1 st FIFO storage space), thereby ensuring that the data storage is not disordered and the data is easy to manage.
As another aspect of the embodiments of the present invention, an embodiment of the present invention provides a multitask access method based on an interrupt system. Referring to fig. 3, the interrupt system based multitask access method S300 includes:
s31, responding to an interrupt request sent by an interrupt system triggered by a target task, wherein an interrupt permission mark of the processor is kept in a non-off state;
by way of example and not limitation, the target task is a task having an interrupt priority that is the highest among interrupt priorities of the respective visiting tasks and is also higher than the interrupt priority of the current task, and the visiting task is a task that issues a task request to the interrupt system, for example, referring to fig. 1, the processor is executing the current task, wherein the current task is the 2 nd task issued by the 2 nd external device. At this time, the 0 th external device, the 1 st external device and the 3 rd external device simultaneously send task requests to the interrupt system, wherein the 0 th task, the 1 st task and the 3 rd task are visiting tasks.
The interrupt system performs priority arbitration according to each task request, and can arbitrate that the interrupt priority of the 3 rd external device is not only the highest among the interrupt priorities of the 0 th external device and the 1 st external device, but also that of the 3 rd external device is higher than that of the 2 nd external device, and therefore, the 3 rd task of the 3 rd external device is a target task. Task 3 then triggers the interrupt system to issue an interrupt request to the processor.
It is to be understood that, since the interrupt enable flag of the processor is maintained in a non-off state while the processor is executing the current task, the processor side can respond to the interrupt request of the 3 rd task. In addition, even after responding to the interrupt request of the 3 rd task, the interrupt enable flag of the processor is always kept in a non-off state. Subsequently, when executing the 3 rd task, assuming that the 4 th task as the visiting task triggers the interrupt system to send out the interrupt request again, the processor responds to the interrupt request, interrupts the execution of the 3 rd task, and shifts to the execution of the 4 th task.
S32, determining the interrupt priority of the target task according to the interrupt request;
In this embodiment, a task with a high interrupt priority may cause the processor to interrupt a task with a lower interrupt priority, and each interrupt priority may be represented by an interrupt type code. For example, the interrupt type code of the 0th task is "00H", corresponding to the 0th interrupt priority; that of the 1st task is "01H", corresponding to the 1st interrupt priority; that of the 2nd task is "02H", corresponding to the 2nd interrupt priority; that of the 3rd task is "03H", corresponding to the 3rd interrupt priority; that of the 4th task is "04H", corresponding to the 4th interrupt priority; that of the 5th task is "05H", corresponding to the 5th interrupt priority; and that of the 6th task is "06H", corresponding to the 6th interrupt priority.
In some embodiments, the processor may extract the interrupt type code of the target task from the interrupt request, determine the interrupt priority corresponding to the interrupt type code of the target task, for example, the interrupt request is encapsulated with the interrupt type code "03H", the processor extracts the interrupt type code "03H" from the interrupt request, and determines the 3 rd interrupt priority according to the interrupt type code "03H".
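The extract-and-look-up step can be sketched as below. The encoding is an assumption inferred from the examples above (the type code occupies the low byte of the request word, and codes 00H-06H map directly to priorities 0-6); the patent does not fix a binary layout, and the function names are illustrative:

```c
/* Assumed layout: the interrupt type code is the low byte of the
 * interrupt request word. */
static int extract_type_code(unsigned interrupt_request) {
    return (int)(interrupt_request & 0xFFu);
}

/* Codes 00H-06H map one-to-one onto interrupt priorities 0-6;
 * -1 marks an unknown code. */
static int priority_of(unsigned interrupt_request) {
    int code = extract_type_code(interrupt_request);
    return (code >= 0x00 && code <= 0x06) ? code : -1;
}
```

With this encoding, a request carrying "03H" resolves to the 3rd interrupt priority, matching the example in the text.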
S33, determining a target FIFO storage space according to the interrupt priority of the target task;
each interrupt priority corresponds to each FIFO storage space, for example, the 0 th interrupt priority corresponds to the 0 th FIFO storage space, the 1 st interrupt priority corresponds to the 1 st FIFO storage space, the 2 nd interrupt priority corresponds to the 2 nd FIFO storage space, the 3 rd interrupt priority corresponds to the 3 rd FIFO storage space, the 4 th interrupt priority corresponds to the 4 th FIFO storage space, the 5 th interrupt priority corresponds to the 5 th FIFO storage space, the 6 th interrupt priority corresponds to the 6 th FIFO storage space.
As previously described, since the interrupt priority of the target task is the 3 rd interrupt priority, the processor may determine that the 3 rd FIFO storage space is the target FIFO storage space.
And S34, under the condition of meeting the data loss prevention model, accessing the target FIFO storage space.
By way of example and not limitation, the data loss prevention model is a model that controls the FIFO storage space so that data is not lost while the interrupt enable flag remains in a non-off state.
According to this embodiment, data are written into or read from the target FIFO storage space under the condition that the data loss prevention model is satisfied. On one hand, the target FIFO storage space can be read/written without turning off interrupts, avoiding frequent switching between the interrupt-on and interrupt-off modes, which reduces the difficulty of interrupt management and improves both the efficiency of accessing the FIFO storage space and the real-time performance of the system. On the other hand, because interrupts need not be turned off and no software blocking is used, data loss is avoided.
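A minimal sketch of steps S31-S34 as a single handler, assuming one ring buffer per interrupt priority. All identifiers, the buffer size, and the low-bits priority encoding are illustrative assumptions, and the simple full-buffer check merely stands in for the data loss prevention model:

```c
#include <stddef.h>

#define NUM_PRIORITIES 7

/* One FIFO per interrupt priority (sizes are illustrative). */
typedef struct {
    unsigned char buf[64];
    size_t head, tail;
} fifo_t;

static fifo_t fifos[NUM_PRIORITIES];

/* S32: assumed encoding -- the low bits of the interrupt type code
 * give the priority (e.g. code 03H -> 3rd interrupt priority). */
static int priority_from_request(unsigned type_code) {
    return (int)(type_code & 0x0Fu);
}

/* S33: tasks of different interrupt priorities use different FIFOs. */
static fifo_t *fifo_for_priority(int prio) {
    return (prio >= 0 && prio < NUM_PRIORITIES) ? &fifos[prio] : NULL;
}

/* S31-S34 in one handler.  Interrupts stay enabled throughout, so a
 * higher-priority task may preempt at any point; it will write into
 * its own FIFO, never into this one. */
static int handle_interrupt(unsigned type_code, unsigned char data) {
    int prio = priority_from_request(type_code);      /* S32 */
    fifo_t *f = fifo_for_priority(prio);              /* S33 */
    if (f == NULL)
        return -1;
    size_t next = (f->head + 1) % sizeof f->buf;
    if (next == f->tail)                              /* would lose data */
        return -1;
    f->buf[f->head] = data;                           /* S34: access FIFO */
    f->head = next;
    return prio;
}
```

Because each priority owns its own buffer, a preempting handler never touches the preempted task's write position, which is the property the paragraph above relies on.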
In some embodiments, when determining the target FIFO memory space, referring to fig. 4, S33 includes:
s331, determining a target address space according to the interrupt priority of the target task;
s332, selecting the FIFO storage space mapped by the target address space as the target FIFO storage space.
By way of example and not limitation, the address space is used to map the FIFO memory space, and the interrupt priority is mapped to the address space, so that the processor can determine the address space by the interrupt priority. For example, as mentioned above, the address space of the 0 th FIFO memory space is 0000H-0010H, which corresponds to the 0 th interrupt priority, so that the processor can determine the address space to be 0000H-0010H according to the 0 th interrupt priority, and then determine the FIFO memory space mapped by the address space "0000H-0010H".
Assuming the 3rd task is the target task and the address space corresponding to the 3rd interrupt priority is 0201H-0300H, the address space 0201H-0300H serves as the target address space, and the 3rd FIFO storage space mapped by the target address space "0201H-0300H" serves as the target FIFO storage space.
In some embodiments, the tasks with different interrupt priorities correspond to different FIFO storage spaces, and as shown in fig. 2, the interrupt priorities of the 0 th task to the 6 th task are different and sequentially increased, wherein the 0 th task corresponds to the 0 th FIFO storage space, the 1 st task corresponds to the 1 st FIFO storage space, and so on.
As for accessing each target FIFO storage space: continuing the example above, assume the current task is the 2nd task. When the 3rd task arrives, the interrupt system takes the 3rd task as the target task, responds to its task request, and issues an interrupt request to the processor. Since the interrupt enable flag of the processor is always kept in a non-off state, the processor responds to the interrupt request and takes the FIFO storage space mapped by the target address space "0201H-0300H", that is, the 3rd FIFO storage space, as the target FIFO storage space; the processor then writes the data of the 3rd task into the 3rd FIFO storage space.
Then, when the 4th task arrives, the interrupt system takes the 4th task as the target task, responds to its task request, and issues an interrupt request to the processor. Because the interrupt enable flag of the processor is always kept in a non-off state, the processor responds to the interrupt request, suspends writing the data of the 3rd task into the 3rd FIFO storage space, and switches to the FIFO storage space mapped by the target address space "0301H-0400H", that is, the 4th FIFO storage space, as the target FIFO storage space; the processor then writes the data of the 4th task into the 4th FIFO storage space.
Therefore, by adopting the method, the embodiment can read/write the target FIFO storage space under the condition of not turning off the interrupt, not only can reduce the difficulty of interrupt management, but also can efficiently manage the data of each task.
In some embodiments, tasks of the same interrupt priority share the same FIFO memory space, see fig. 5, with interrupt priorities of task 3, task 4, and task 5 being the same, and the three sharing FIFO memory space mapped by address space "0201H-0330H".
As for accessing the target FIFO storage space in this case: as before, assume the current task is the 2nd task and the 3rd, 4th and 5th tasks arrive. Because the interrupt priorities of the 3rd, 4th and 5th tasks are the same, the interrupt system determines the task with the earliest request time as the target task. Assuming the request time of the 4th task is the earliest, the interrupt system takes the 4th task as the target task and sends an interrupt request to the processor. Since the interrupt enable flag of the processor is always kept in a non-off state, the processor responds to the interrupt request and takes the shared FIFO storage space mapped by the address space "0201H-0330H" as the target FIFO storage space; the processor then writes the data of the 4th task into that shared FIFO storage space.
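An illustrative model of the same-priority arbitration described above: among pending requests of equal interrupt priority, the one with the earliest request time wins. The struct and function names are hypothetical:

```c
typedef struct {
    int task_id;
    unsigned long request_time;  /* e.g. a timestamp or sequence number */
} request_t;

/* Returns the task_id of the earliest request, or -1 if none pending. */
static int arbitrate_same_priority(const request_t *reqs, int n) {
    int winner = -1;
    unsigned long earliest = (unsigned long)-1;
    for (int i = 0; i < n; i++) {
        if (reqs[i].request_time < earliest) {
            earliest = reqs[i].request_time;
            winner = reqs[i].task_id;
        }
    }
    return winner;
}
```

With requests from tasks 3, 4 and 5 at times 120, 100 and 110, task 4 wins, matching the example in the text.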
In some embodiments, as to accessing the target FIFO storage space under the condition that the data loss prevention model is satisfied, referring to fig. 6, S34 includes:
s341, determining the data throughput rate of the target FIFO storage space;
and S342, under the condition of meeting the data loss prevention model, accessing the target FIFO storage space according to the data throughput rate.
By way of example and not limitation, the data throughput rate is the rate at which a task reads data from, or writes data into, the corresponding FIFO storage space. As described above, designers allocate a corresponding share of CPU bandwidth resources to each external device according to its service, so the tasks of each external device correspond to a certain data throughput rate. Since each task can only access a specified FIFO storage space, each FIFO storage space corresponds to its own data throughput rate; that is, each task accesses its FIFO storage space at a given data throughput rate. For example, the 0th task writes data into the 0th FIFO storage space at a data throughput rate of 80k/ms, and the 1st task writes data into the 1st FIFO storage space at a data throughput rate of 40k/ms.
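One way to enforce a per-FIFO data throughput rate is a simple byte budget that is refilled each service window and drawn down by writes. This is a minimal sketch under assumed names (`fifo_rate_limiter`, `window_start`, `rate_limited_write` are not from the patent), and it treats the rate generically as bytes per millisecond:

```c
#include <stdint.h>

/* Hypothetical rate limiter: each FIFO space is served at its own data
 * throughput rate. `budget` tracks how many bytes the current service
 * window still allows, so a task cannot exceed the CPU-bandwidth share
 * the designer allocated to its external device. */
typedef struct {
    uint32_t rate;      /* throughput rate, bytes per millisecond */
    uint32_t budget;    /* bytes still allowed in this window     */
} fifo_rate_limiter;

static void window_start(fifo_rate_limiter *rl, uint32_t window_ms)
{
    rl->budget = rl->rate * window_ms;     /* refill the byte budget */
}

/* Returns the number of bytes actually accepted for writing;
 * a request is clipped to the remaining budget. */
static uint32_t rate_limited_write(fifo_rate_limiter *rl, uint32_t want)
{
    uint32_t granted = want < rl->budget ? want : rl->budget;
    rl->budget -= granted;
    return granted;
}
```

For instance, with a rate of 80 bytes/ms and a 1 ms window, a 50-byte write is granted in full, a second 50-byte write is clipped to the remaining 30 bytes, and further writes are refused until the next window.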
In some embodiments, in step S342, the processor writes the data of the target task into the target FIFO storage space according to the data throughput rate while satisfying the data loss prevention model, and/or reads the data of the target task from the target FIFO storage space according to the data throughput rate while satisfying the data loss prevention model.
In some embodiments, satisfying the data loss prevention model comprises:
β(m) > λ(m) + λ(m+1) + … + λ(n) + η(m), that is, β(m) > Σ λ(i) + η(m), with the sum taken over i = m, …, n
wherein β(m) is the minimum interrupt time interval of the mth task, λ(i) is the time for the ith task to access its FIFO storage space with the maximum data amount belonging to the ith task, η(m) is the system running time of the mth task, the interrupt priority of the mth task is lower than that of the (m+a)th task, and a is any integer in (0, n−m].
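The constraint can be checked numerically as a suffix sum over λ. The function below is an illustrative assumption (its name and signature are not from the patent); it follows the formula above term by term: β(m) must strictly exceed λ(m) + … + λ(n) + η(m).

```c
#include <stdbool.h>
#include <stddef.h>

/* Sketch of the data-loss-prevention check: `lambda` holds lambda(0)
 * through lambda(n); the bound is the suffix sum lambda(m)+...+lambda(n)
 * plus eta(m), the mth task's own system running time. */
static bool loss_prevention_ok(double beta_m, const double *lambda,
                               size_t m, size_t n, double eta_m)
{
    double bound = eta_m;
    for (size_t i = m; i <= n; ++i)   /* suffix sum over i = m..n */
        bound += lambda[i];
    return beta_m > bound;            /* strict inequality, as in the text */
}
```

With λ = {1.0, 2.0, 3.0} and η(0) = 0.5, the bound for m = 0 is 6.5, so β(0) = 7.0 satisfies the model while β(0) = 6.5 does not.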
This will be described in detail below with reference to fig. 7. It should be understood that the interrupt working mechanism shown in fig. 7 is the mechanism when each task reads/writes its maximum single data amount to the memory, that is, fig. 7 shows the interrupt working mechanism in the situation where the memory is most congested.
For example, referring to fig. 7, when m = 0, β(0) is the minimum interrupt time interval of the 0th task, that is, the time difference between two adjacent single time points of the 0th task, where a single time point is the moment at which the task accesses the memory with its maximum single data amount. The 0th task can write a maximum of 100 bytes into the 0th FIFO storage space at a single time, and the time difference between the first single time point 71 and the second single time point 72 is the minimum interrupt time interval β(0).
The processor is interrupted by the 1st task after writing 50 bytes of data into the 0th FIFO storage space. The 1st task can write a maximum of 500 bytes into the 1st FIFO storage space at a single time. The processor then turns to execute the 1st task.
After 250 bytes of the 1st task's data are written into the 1st FIFO storage space, it is interrupted by the 2nd task. The 2nd task can write a maximum of 2000 bytes into the 2nd FIFO storage space at a single time. The processor then turns to execute the 2nd task.
After 1000 bytes of the 2nd task's data are written into the 2nd FIFO storage space, it is interrupted by the 3rd task. The 3rd task can write a maximum of 800 bytes into the 3rd FIFO storage space at a single time. The processor then turns to execute the 3rd task.
After 400 bytes of the 3rd task's data are written into the 3rd FIFO storage space, it is interrupted by the 4th task. The 4th task can write a maximum of 600 bytes into the 4th FIFO storage space at a single time. The processor then turns to execute the 4th task.
After 300 bytes of the 4th task's data are written into the 4th FIFO storage space, it is interrupted by the 5th task. The 5th task can write a maximum of 200 bytes into the 5th FIFO storage space at a single time. The processor then turns to execute the 5th task.
After 100 bytes of the 5th task's data are written into the 5th FIFO storage space, it is interrupted by the 6th task. The 6th task can write a maximum of 50 bytes into the 6th FIFO storage space at a single time. The processor then turns to execute the 6th task.
After the 50 bytes of the 6th task's data are all written into the 6th FIFO storage space, the 6th task is completed, and the processor turns back to the 5th task and writes the remaining data of the 5th task into the 5th FIFO storage space.
After the remaining data of the 5th task is all written into the 5th FIFO storage space, the processor turns to execute the 4th task and writes the remaining data of the 4th task into the 4th FIFO storage space.
After the remaining data of the 4th task is all written into the 4th FIFO storage space, the processor turns to execute the 3rd task and writes the remaining data of the 3rd task into the 3rd FIFO storage space.
After the remaining data of the 3rd task is all written into the 3rd FIFO storage space, the processor turns to execute the 2nd task and writes the remaining data of the 2nd task into the 2nd FIFO storage space.
After the remaining data of the 2nd task is all written into the 2nd FIFO storage space, the processor turns to execute the 1st task and writes the remaining data of the 1st task into the 1st FIFO storage space.
After the remaining data of the 1st task is all written into the 1st FIFO storage space, the processor turns to execute the 0th task and writes the remaining data of the 0th task into the 0th FIFO storage space.
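The nesting-then-unwinding sequence of fig. 7 can be re-enacted as a toy recursive simulation. This is an assumption-laden illustration, not the patent's code: it uniformly models every task (including the 6th, which the text writes in a single burst) as writing half of its maximum amount before preemption, which simplifies the narrative but yields the same per-FIFO totals.

```c
/* Toy re-enactment of the fig. 7 worst case: task m writes half of its
 * maximum single data amount, is preempted by task m+1, and writes the
 * remainder only after every higher-priority task has drained.
 * `written[]` records total bytes per FIFO space for checking. */
enum { NTASKS = 7 };
static const unsigned max_bytes[NTASKS] = {100, 500, 2000, 800, 600, 200, 50};
static unsigned written[NTASKS];

static void run_task(unsigned m)
{
    unsigned first = max_bytes[m] / 2;      /* half before preemption   */
    written[m] += first;
    if (m + 1 < NTASKS)
        run_task(m + 1);                    /* higher task interrupts   */
    written[m] += max_bytes[m] - first;     /* write the remainder on
                                               the unwind               */
}
```

Running `run_task(0)` drives the whole cascade; afterwards every FIFO space has received exactly its task's maximum single data amount.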
As can be seen from the above description: the time when the maximum data amount of the 0th task is written into the 0th FIFO storage space is denoted as t0; similarly, the times when the maximum data amounts of the 1st, 2nd, 3rd, 4th, 5th and 6th tasks are written into their corresponding FIFO storage spaces are denoted as t1, t2, t3, t4, t5 and t6, respectively; and the system running time of the 0th task is denoted as Δt0. The times t0 through t6 are known because the data throughput rate of each task is given and known.
Since m = 0 and the interrupt priority of the 0th task is lower than the interrupt priorities of the 1st to 6th tasks, the designer can design the minimum interrupt time interval β(0) according to the following constraint condition: β(0) > t0 + t1 + t2 + t3 + t4 + t5 + t6 + Δt0.
Similarly, when m = 1, the same constraint holds for the minimum interrupt time interval β(1) of the 1st task: β(1) > t1 + t2 + t3 + t4 + t5 + t6 + Δt1.
When m = 2, the minimum interrupt time interval β(2) of the 2nd task satisfies: β(2) > t2 + t3 + t4 + t5 + t6 + Δt2.
When m = 3, the minimum interrupt time interval β(3) of the 3rd task satisfies: β(3) > t3 + t4 + t5 + t6 + Δt3.
When m = 4, the minimum interrupt time interval β(4) of the 4th task satisfies: β(4) > t4 + t5 + t6 + Δt4.
When m = 5, the minimum interrupt time interval β(5) of the 5th task satisfies: β(5) > t5 + t6 + Δt5.
When m = 6, the minimum interrupt time interval β(6) of the 6th task satisfies: β(6) > t6 + Δt6.
Since this working mechanism describes the case where each task reads/writes its maximum single data amount to the memory, as long as the minimum interrupt time interval of each task meets its constraint condition, data can be reliably written into or read from the FIFO storage space without overflow or overwriting.
To further aid understanding of the embodiment of the present invention, details of the interrupt system-based multitask access method provided in the embodiment of the present invention are described with reference to fig. 8. For brevity, only the interrupt working mechanism when the 3rd task, the 4th task and the 5th task read/write their maximum single data amounts to the memory is described below. It should be understood that fig. 8 illustrates only one possible implementation and should not be construed as the only example.
Referring to fig. 8, when m = 3, β(3) is the minimum interrupt time interval of the 3rd task. For example, the 3rd task can write a maximum of 100 bytes into the 3rd FIFO storage space at a single time, and the time difference between the first single time point 81 and the second single time point 82 is the minimum interrupt time interval β(3).
Starting at the first single time point 81, the processor writes the 100 bytes of the 3rd task into the 3rd FIFO storage space, and is interrupted by the 4th task after writing 50 bytes. The 4th task can write a maximum of 2000 bytes into the 4th FIFO storage space at a single time.
The processor then proceeds to perform task 4. When writing 500 bytes of data of task 4 into the 4 th FIFO memory space, it is interrupted by task 5. Wherein, the 5 th task can write the maximum data amount of 10 bytes into the 5 th FIFO memory space at a time.
After 10 bytes of data of the 5 th task are written into the 5 th FIFO memory space, the processor turns to execute the 4 th task and writes the rest data of the 4 th task into the 4 th FIFO memory space. However, after writing 1000 bytes of data into the 4 th FIFO memory space, it is interrupted by the 5 th task.
After the 10 bytes of data of the second time of the 5 th task are all written into the 5 th FIFO memory space, the processor turns to execute the 4 th task and writes the rest data of the 4 th task into the 4 th FIFO memory space.
After the 500 bytes of data of the 4 th task are written into the 4 th FIFO memory space, the processor turns to execute the 3 rd task.
The remaining 50 bytes of the 3rd task's data are written into the 3rd FIFO storage space before the second single time point 82. At this point, the 3rd task has reliably written all of its data into the 3rd FIFO storage space on the premise that the data loss prevention model is satisfied.
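A quick arithmetic check (an illustration, not part of the patent) confirms that the fig. 8 interleaving accounts for every byte: the 3rd task's two bursts of 50 bytes total its 100-byte maximum, the 4th task's three bursts of 500, 1000 and 500 bytes total its 2000-byte maximum, and the 5th task fires twice with 10 bytes each.

```c
/* Burst sequences as narrated for fig. 8 (byte counts per burst). */
static const unsigned t3_bursts[] = {50, 50};
static const unsigned t4_bursts[] = {500, 1000, 500};
static const unsigned t5_bursts[] = {10, 10};

/* Sum a burst sequence to verify it matches the task's maximum
 * single data amount. */
static unsigned total(const unsigned *b, unsigned n)
{
    unsigned s = 0;
    for (unsigned i = 0; i < n; ++i)
        s += b[i];
    return s;
}
```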
Generally, the switching on/off of interrupts in the conventional technology brings a certain amount of interrupt blocking at different levels, and when the data volume is large, the conventional technology greatly increases message delay and message failure. In this embodiment, however, interrupts are not frequently switched on and off, so interrupt congestion is effectively reduced, and after the interrupt levels are reasonably designed, message failure and delay can be greatly reduced.
It should be noted that a certain order does not necessarily exist between the foregoing steps; those skilled in the art can understand from the description of the embodiments of the present invention that, in different embodiments, the foregoing steps may have different execution orders, that is, they may be executed in parallel, interchangeably, and so on.
As another aspect of the embodiments of the present invention, an embodiment of the present invention provides a processor. The processor is configured to execute the interrupt system-based multitask access method set forth in the above embodiments. The processor may be a general purpose processor, a digital signal processor (DSP), an application specific integrated circuit (ASIC), a field programmable gate array (FPGA), a single chip microcomputer, an ARM (Acorn RISC Machine) or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or any combination of these components. The processor may also be a microcontroller or a state machine, or may be implemented as a combination of computing devices, e.g., a combination of a DSP and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration.
As another aspect of the embodiments of the present invention, an embodiment of the present invention provides a storage medium, where the storage medium stores computer-executable instructions, and the computer-executable instructions are executed by one or more processors, so that the one or more processors may execute the interrupt system-based multitask access method in any of the above-mentioned method embodiments.
The storage medium, which is a non-volatile computer-readable storage medium, may be used to store non-volatile software programs, non-volatile computer-executable programs and modules, such as the program instructions/modules corresponding to the interrupt system-based multitask access method in the embodiments of the present invention. By running the non-volatile software programs, instructions and modules stored in the storage medium, the processor executes various functional applications and data processing, that is, implements the interrupt system-based multitask access method provided by the above method embodiments and the functions of each module or unit of the above device embodiments.
The storage medium includes high speed random access memory and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid state storage device. In some embodiments, the storage medium optionally includes memory located remotely from the processor, and such remote memory may be coupled to the processor via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The program instructions/modules are stored in the storage medium and, when executed by one or more processors, perform the interrupt system-based multitask access method in any of the method embodiments described above.
Embodiments of the present invention further provide a computer program product including a computer program stored on a non-volatile computer-readable storage medium, the computer program including program instructions that, when executed by a processor, cause the processor to perform the interrupt system-based multitask access method in any of the method embodiments described above.
The above-described embodiments of the apparatus or device are merely illustrative, wherein the unit modules described as separate parts may or may not be physically separate, and the parts displayed as module units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network module units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the present embodiment.
Through the above description of the embodiments, those skilled in the art will clearly understand that each embodiment can be implemented by software plus a general hardware platform, and certainly can also be implemented by hardware. Based on such understanding, the above technical solutions, in essence or the part contributing to the related art, may be embodied in the form of a software product, which may be stored in a computer-readable storage medium, such as ROM/RAM, a magnetic disk or an optical disk, and includes instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to execute the method according to each embodiment or some parts of the embodiments.
Finally, it should be noted that: the above examples are only intended to illustrate the technical solution of the present invention, but not to limit it; within the idea of the invention, also technical features in the above embodiments or in different embodiments may be combined, steps may be implemented in any order, and there are many other variations of the different aspects of the invention as described above, which are not provided in detail for the sake of brevity; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and the modifications or the substitutions do not make the essence of the corresponding technical solutions depart from the scope of the technical solutions of the embodiments of the present invention.

Claims (10)

1. A multitask access method based on an interrupt system is applied to a processor, and is characterized in that the method comprises the following steps:
responding to an interrupt request sent by an interrupt system triggered by a target task, wherein an interrupt permission flag of the processor is kept in a non-off state;
determining the interrupt priority of the target task according to the interrupt request;
determining a target FIFO storage space according to the interrupt priority of the target task;
and accessing the target FIFO storage space under the condition of meeting the data loss prevention model.
2. The method according to claim 1, wherein said accessing said target FIFO memory space while satisfying a data loss prevention model comprises:
determining a data throughput rate of the target FIFO storage space;
and under the condition of meeting the data loss prevention model, accessing the target FIFO storage space according to the data throughput rate.
3. The method of claim 2, wherein said accessing said target FIFO memory space according to said data throughput rate while satisfying a data loss prevention model comprises:
writing the data of the target task into the target FIFO memory space according to the data throughput rate under the condition of meeting the data loss prevention model, and/or,
and reading the data of the target task in the target FIFO storage space according to the data throughput rate under the condition of meeting the data loss prevention model.
4. The method of claim 1, wherein satisfying a data loss prevention model comprises:
β(m) > λ(m) + λ(m+1) + … + λ(n) + η(m), that is, β(m) > Σ λ(i) + η(m), with the sum taken over i = m, …, n
wherein β(m) is the minimum interrupt time interval of the mth task, λ(i) is the time for the ith task to access its FIFO storage space with the maximum data amount belonging to the ith task, η(m) is the system running time of the mth task, the interrupt priority of the mth task is lower than that of the (m+a)th task, and a is any integer in (0, n−m].
5. The method according to any of claims 1 to 4, wherein said determining a target FIFO storage space based on the interrupt priority of the target task comprises:
determining a target address space according to the interrupt priority of the target task;
and selecting the FIFO storage space mapped by the target address space as a target FIFO storage space.
6. The method of claim 5, wherein tasks of different interrupt priorities correspond to different FIFO storage spaces.
7. The method of claim 5, wherein tasks of the same interrupt priority share the same FIFO storage space.
8. The method of any of claims 1 to 4, wherein said determining an interrupt priority of the target task based on the interrupt request comprises:
extracting an interrupt type code of the target task from the interrupt request;
and determining the interrupt priority corresponding to the interrupt type code of the target task.
9. A processor arranged to perform the interrupt system based multitasking access method according to any one of claims 1 to 8.
10. A task access system, comprising:
interrupting the system;
the processor of claim 9; and
and the memory, the processor and the interrupt system are connected through an address bus.
CN202111047691.7A 2021-09-08 2021-09-08 Interrupt system-based multi-task access method, processor and task access system Pending CN113835855A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111047691.7A CN113835855A (en) 2021-09-08 2021-09-08 Interrupt system-based multi-task access method, processor and task access system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111047691.7A CN113835855A (en) 2021-09-08 2021-09-08 Interrupt system-based multi-task access method, processor and task access system

Publications (1)

Publication Number Publication Date
CN113835855A true CN113835855A (en) 2021-12-24

Family

ID=78958676

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111047691.7A Pending CN113835855A (en) 2021-09-08 2021-09-08 Interrupt system-based multi-task access method, processor and task access system

Country Status (1)

Country Link
CN (1) CN113835855A (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CA2769899A1 (en) * 2011-03-02 2012-09-02 Research In Motion Limited Enhanced prioritising and unifying interrupt controller
CN103049323A (en) * 2012-12-31 2013-04-17 西安奇维科技股份有限公司 Multi-interrupt balance management method implemented in FPGA (field programmable gate array)
CN106681948A (en) * 2016-12-26 2017-05-17 深圳先进技术研究院 Logic control method and device of programmable logic device
CN111078605A (en) * 2019-12-10 2020-04-28 上海航天控制技术研究所 Comprehensive processing system for multi-communication interface interruption
CN112100090A (en) * 2020-09-16 2020-12-18 浪潮(北京)电子信息产业有限公司 Data access request processing method, device, medium and memory mapping controller
CN112749106A (en) * 2019-10-29 2021-05-04 西安奇维科技有限公司 FPGA-based interrupt management method

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CA2769899A1 (en) * 2011-03-02 2012-09-02 Research In Motion Limited Enhanced prioritising and unifying interrupt controller
CN103049323A (en) * 2012-12-31 2013-04-17 西安奇维科技股份有限公司 Multi-interrupt balance management method implemented in FPGA (field programmable gate array)
CN106681948A (en) * 2016-12-26 2017-05-17 深圳先进技术研究院 Logic control method and device of programmable logic device
CN112749106A (en) * 2019-10-29 2021-05-04 西安奇维科技有限公司 FPGA-based interrupt management method
CN111078605A (en) * 2019-12-10 2020-04-28 上海航天控制技术研究所 Comprehensive processing system for multi-communication interface interruption
CN112100090A (en) * 2020-09-16 2020-12-18 浪潮(北京)电子信息产业有限公司 Data access request processing method, device, medium and memory mapping controller

Similar Documents

Publication Publication Date Title
US8935510B2 (en) System structuring method in multiprocessor system and switching execution environment by separating from or rejoining the primary execution environment
US20150134912A1 (en) Scheduler, multi-core processor system, and scheduling method
EP2377026B1 (en) Resolving contention between data bursts
US9063794B2 (en) Multi-threaded processor context switching with multi-level cache
CN109308220B (en) Shared resource allocation method and device
CN115033184A (en) Memory access processing device and method, processor, chip, board card and electronic equipment
US20120151103A1 (en) High Speed Memory Access in an Embedded System
CN113472690A (en) Service message processing method and device
CN117807000B (en) Channel bus arbitration circuit, acceleration device, method, system, device and medium
CN101504567B (en) CPU, CPU instruction system and method for reducing CPU power consumption
KR101915944B1 (en) A Method for processing client requests in a cluster system, a Method and an Apparatus for processing I/O according to the client requests
CN115562838A (en) Resource scheduling method and device, computer equipment and storage medium
US10705985B1 (en) Integrated circuit with rate limiting
US10169260B2 (en) Multiprocessor cache buffer management
CN112612728B (en) Cache management method, device and equipment
CN113835855A (en) Interrupt system-based multi-task access method, processor and task access system
US10740032B2 (en) Resource allocation for atomic data access requests
CN104052831A (en) Data transmission method and device based on queues and communication system
US9081630B2 (en) Hardware-implemented semaphore for resource access based on presence of a memory buffer in a memory pool
CN102473149B (en) Signal processing system, integrated circuit comprising buffer control logic and method therefor
EP3696674A1 (en) Triggered operations for collective communication
CN115004163A (en) Apparatus and method for managing packet transfers across a memory fabric physical layer interface
CN118012788B (en) Data processor, data processing method, electronic device, and storage medium
CN117331510B (en) Data migration method, device and equipment applied to NVMe controller
CN111314936B (en) Base station traffic prediction method and equipment

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: 518000 401, Building B1, Nanshan Zhiyuan, No. 1001, Xueyuan Avenue, Changyuan Community, Taoyuan Street, Nanshan District, Shenzhen, Guangdong

Applicant after: Shenzhen Saifang Technology Co.,Ltd.

Address before: 518000 room 701, building B1, Nanshan wisdom garden, 1001 Xueyuan Avenue, Changyuan community, Taoyuan Street, Nanshan District, Shenzhen City, Guangdong Province

Applicant before: Shenzhen Daotong Intelligent Automobile Co.,Ltd.

CB02 Change of applicant information