CN114780215A - Task scheduling method, device, equipment and storage medium - Google Patents

Task scheduling method, device, equipment and storage medium

Info

Publication number
CN114780215A
Authority
CN
China
Prior art keywords
task, target, tasks, subtasks, data
Legal status
Pending
Application number
CN202210388036.6A
Other languages
Chinese (zh)
Inventor
吴盼望
尹磊祖
费悦牧
Current Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority to CN202210388036.6A
Publication of CN114780215A

Classifications

    • G06F9/4881 Scheduling strategies for dispatcher, e.g. round robin, multi-level priority queues
    • G06F13/28 Handling requests for interconnection or transfer for access to input/output bus using burst mode transfer, e.g. direct memory access DMA, cycle steal
    • G06F2213/28 DMA (indexing scheme relating to interconnection or transfer)

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Software Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Bus Control (AREA)

Abstract

The application discloses a task scheduling method, apparatus, device, and storage medium, applied to a direct memory access (DMA) device. The method includes: determining that at least two target tasks with the same priority exist among a plurality of tasks to be scheduled; dividing each target task of the at least two target tasks into a plurality of subtasks; interleaving and sequencing the plurality of subtasks of the at least two target tasks; and performing interleaved scheduling on the sequenced subtasks through a target data channel. Thus, when at least two target tasks with the same priority are encountered, they are divided into a plurality of subtasks that are executed in parallel; that is, parallel execution of the at least two target tasks is realized, and their overall processing efficiency is improved. Meanwhile, compared with the prior art, in which the next task starts only after the current task finishes, the next task can be started earlier, shortening the overall processing delay of the at least two target tasks.

Description

Task scheduling method, device, equipment and storage medium
Technical Field
The present application relates to computer technologies, and in particular, to a method, an apparatus, a device, and a storage medium for task scheduling.
Background
Direct Memory Access (DMA) is a fast data-exchange mode that can complete data transmission (also called task scheduling) between, for example, an external device and a memory without the involvement or intervention of the Central Processing Unit (CPU).
In the prior art, tasks often have the same priority. In that case, only one of them can be selected for execution at a time, so a subsequent task runs only after the current task has finished moving all of its data. If the amount of data moved by the DMA task is large, the subsequent tasks must wait a long time, which reduces the overall processing efficiency of the tasks and increases their overall processing delay.
Disclosure of Invention
In order to solve the foregoing technical problems, embodiments of the present application provide a task scheduling method, apparatus, device, and storage medium.
The technical scheme of the application is realized as follows:
In a first aspect, a task scheduling method is provided, applied to a direct memory access DMA device, the method including:
determining that at least two target tasks with the same priority exist in a plurality of tasks to be scheduled;
dividing a target task of the at least two target tasks into a plurality of subtasks;
performing interleaving sequencing on a plurality of subtasks of the at least two target tasks;
performing interleaving scheduling on the sequenced subtasks through a target data channel; and the target data channel is a data channel corresponding to the priority of the target task.
In a second aspect, a task scheduling apparatus is provided, which is applied to a direct memory access DMA device, and includes:
the priority arbitration unit is used for determining that at least two target tasks with the same priority exist in a plurality of tasks to be scheduled;
the task segmentation unit is used for segmenting a target task in the at least two target tasks into a plurality of subtasks; performing interleaving sequencing on a plurality of subtasks of the at least two target tasks;
the task scheduling unit is used for performing interleaved scheduling on the sequenced subtasks through the target data channel; and the target data channel is a data channel corresponding to the priority of the target task.
In a third aspect, a DMA apparatus is provided, including: a processor and a memory configured to store a computer program operable on the processor, wherein the processor is configured to perform the steps of the aforementioned method when executing the computer program.
In a fourth aspect, a computer-readable storage medium is provided, having a computer program stored thereon, wherein the computer program, when executed by a processor, implements the steps of the aforementioned method.
The embodiments of the present application disclose a task scheduling method, apparatus, device, and storage medium. When at least two target tasks with the same priority are encountered, they are divided into a plurality of subtasks that are executed in parallel, improving the overall processing efficiency of the at least two target tasks. Meanwhile, compared with the prior art, in which the next task starts only after the current task finishes, the next task can be started earlier, shortening the overall processing delay of the at least two target tasks.
Drawings
Fig. 1 is a first flowchart of a task scheduling method according to an embodiment of the present application;
Fig. 2 is a second flowchart of a task scheduling method according to an embodiment of the present application;
Fig. 3 is a third flowchart of a task scheduling method according to an embodiment of the present application;
Fig. 4 is a block diagram of a DMA device according to an embodiment of the present application;
Fig. 5 is a schematic diagram of a task segmentation workflow according to an embodiment of the present application;
Fig. 6 is a schematic diagram of the structure of a task scheduling apparatus according to an embodiment of the present application;
Fig. 7 is a schematic diagram of the structure of a direct memory access device according to an embodiment of the present application.
Detailed Description
So that the manner in which the features and elements of the present embodiments can be understood in detail, a more particular description of the embodiments, briefly summarized above, may be had by reference to embodiments, some of which are illustrated in the appended drawings.
It should be noted that DMA is a fast data-exchange mode that can complete direct data transmission between an external device (e.g., a Hardware Accelerator (HWA)) and a memory (e.g., a shared Random Access Memory (SHRAM)) without routing the data through, or otherwise occupying, the CPU; the CPU can perform other tasks while the DMA transfer is in progress. When the CPU itself performs the transfer between the external device and the memory, the data must first be copied from the external device into a buffer and then written to the memory; that is, the data is not transmitted directly, and the CPU cannot execute other tasks during the transfer. Therefore, compared with indirect transmission performed by the CPU, the direct transmission mode of DMA improves data-transfer efficiency, and DMA-driven transfers also greatly reduce the occupancy of CPU resources and save system resources.
The task scheduling method provided by the embodiment of the application is applied to Direct Memory Access (DMA) equipment. The following is a detailed description of the task scheduling method.
Fig. 1 is a first schematic flow diagram of a task scheduling method in an embodiment of the present application. As shown in Fig. 1, the task scheduling method may specifically include:
step 101: determining that at least two target tasks with the same priority exist in the plurality of tasks to be scheduled.
Here, priorities are configured in advance for the plurality of tasks to be scheduled. When the task scheduling method is executed, the priority of each task is queried to determine whether at least two target tasks with the same priority exist among the tasks to be scheduled. If so, execution continues with step 102; if not, task scheduling is performed directly through the data channel corresponding to the priority of each task. Here, "at least two target tasks with the same priority" can be understood to mean that the task scheduling times of the at least two target tasks are the same.
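For illustration only, the following C sketch (not part of the disclosed embodiments; the structure and field names are assumptions) shows one way such a same-priority check could be implemented:

    #include <stddef.h>
    #include <stdint.h>

    /* Hypothetical descriptor of a task to be scheduled. */
    typedef struct {
        uint32_t id;
        uint8_t  priority;   /* configured in advance; lower value = higher priority (assumed) */
    } dma_task_t;

    /* Collect pointers to all tasks sharing the highest pending priority.
     * A returned count >= 2 means at least two target tasks with the same
     * priority exist, so step 102 should run. */
    static size_t find_target_tasks(const dma_task_t *tasks, size_t n,
                                    const dma_task_t **targets)
    {
        size_t count = 0;
        uint8_t best = 0xFF;
        for (size_t i = 0; i < n; i++) {
            if (tasks[i].priority < best) {   /* strictly higher priority found */
                best = tasks[i].priority;
                count = 0;                    /* restart the group */
            }
            if (tasks[i].priority == best)
                targets[count++] = &tasks[i];
        }
        return count;
    }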
Step 102: dividing each target task of the at least two target tasks into a plurality of subtasks.
When a DMA device encounters tasks with the same priority, it selects one task to execute, and the other tasks are executed only after the current one finishes. If the amount of data the DMA device moves for a task is large, the other tasks can be executed only after a long wait, which reduces system processing efficiency and increases system delay. Therefore, in the embodiments of the present application, a task segmentation module is added to the DMA device, and each of the at least two target tasks is divided by the task segmentation module into a plurality of subtasks, enabling the subtasks of the same priority to be executed in parallel.
Illustratively, in some embodiments, prior to performing step 102, the method further comprises: acquiring the blocking enabling information of each target task; and, when it is determined that the blocking enabling information of all target tasks indicates that task segmentation is allowed, dividing each of the at least two target tasks into a plurality of subtasks.
It should be noted that if any one of the at least two target tasks is not split, the data channel must execute that task continuously to completion before the remaining subtasks can run; that is, parallel execution of all target tasks cannot be achieved. Therefore, to ensure that all target tasks can be executed in parallel, it is necessary to confirm, before executing step 102, that every target task will be split. Specifically, whether a target task is to be split is judged from the blocking enabling information pre-configured for that task. When it is determined that all target tasks are to be split, each target task is divided into a plurality of subtasks.
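A minimal sketch of this all-or-nothing check, assuming the blocking enabling information is exposed as one boolean per task (the type and names are hypothetical):

    #include <stdbool.h>
    #include <stddef.h>

    /* Hypothetical view of the pre-configured blocking enabling information. */
    typedef struct {
        bool split_allowed;   /* true if this target task may be split */
    } task_block_cfg_t;

    /* Step 102 may run only if every target task allows splitting;
     * otherwise the method falls back to sequential scheduling. */
    static bool all_allow_splitting(const task_block_cfg_t *cfg, size_t n)
    {
        for (size_t i = 0; i < n; i++)
            if (!cfg[i].split_allowed)
                return false;
        return true;
    }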
When it is determined that at least one of the target tasks is not to be split, illustratively, in some embodiments, all of the target tasks are sorted, and the sorted target tasks are then scheduled in sequence through the target data channel.
Here, all the target tasks may be sorted according to their data amounts; other sorting criteria may also be used, and this is not specifically limited.
That is to say, when it is determined that at least one of the target tasks is not split, task scheduling is performed on each target task in sequence according to the sorted order.
Step 103: performing interleaving sequencing on the plurality of subtasks of the at least two target tasks.
Because tasks of different priorities correspond to different data channels, the at least two target tasks correspond to the same data channel, namely the target data channel. Therefore, by interleaving and sequencing the plurality of subtasks of all target tasks, the embodiments of the present application achieve parallel interleaved execution of all target tasks through the target data channel.
Illustratively, in some embodiments, the DMA device includes a task segmentation buffer, where the task segmentation buffer includes a plurality of sub-buffers and different sub-buffers correspond to different data channels. Step 103 may then specifically include: interleaving and storing the plurality of subtasks into the sub-buffer corresponding to the target data channel based on a preset interleaving order.
That is, the multiple subtasks are interleaved and stored in the sub buffer according to a preset interleaving order, so that the interleaving ordering of the multiple subtasks is realized.
As to the preset interleaving order: suppose, for example, that there are three target tasks with the same priority, namely target task 1, target task 2, and target task 3. Target task 1 is divided into subtask 10, subtask 11, and subtask 12; target task 2 into subtask 20, subtask 21, and subtask 22; and target task 3 into subtask 30, subtask 31, and subtask 32. Interleaving and sequencing the subtasks of target tasks 1, 2, and 3 then yields, for example: subtask 10, subtask 20, subtask 30, subtask 11, subtask 21, subtask 31, subtask 12, subtask 22, subtask 32. The interleaving result is not limited to this order; other interleaving results are also possible.
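The example order above is a round-robin (column-major) traversal of the subtasks. A minimal C sketch of that ordering, assuming every target task was split into the same number of subtasks as in the example:

    #include <stddef.h>

    #define MAX_SUBTASKS 8

    /* Emit subtask j of every task before subtask j+1 of any task.
     * With 3 tasks of 3 subtasks each this yields the order
     * 10, 20, 30, 11, 21, 31, 12, 22, 32 from the example above. */
    static size_t interleave_order(int sub[][MAX_SUBTASKS],
                                   size_t ntasks, size_t nsub, int *out)
    {
        size_t k = 0;
        for (size_t j = 0; j < nsub; j++)        /* position within each task */
            for (size_t i = 0; i < ntasks; i++)  /* round-robin over tasks */
                out[k++] = sub[i][j];
        return k;
    }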
Illustratively, in some embodiments, the subtasks are instead fetched directly in the preset interleaving order. That is, the subtasks are obtained one by one in the preset interleaving order without being stored first, which likewise realizes the interleaved sequencing of the subtasks.
Step 104: performing interleaving scheduling on the sequenced subtasks through a target data channel; and the target data channel is a data channel corresponding to the priority of the target task.
Based on the above example, task scheduling is first performed on subtask 10 of target task 1 through the target data channel, then on subtask 20 of target task 2, then on subtask 30 of target task 3, …, and finally the scheduling of subtask 32 of target task 3 is completed through the target data channel. That is, the sequenced subtasks are scheduled through the target data channel in a parallel, interleaved manner.
In some embodiments, step 104 may specifically include: acquiring subtasks from the sub-buffers based on the storage sequence of the subtasks in the sub-buffers; and performing task scheduling on the subtasks through the target data channel.
That is, the subtasks are obtained from the sub-buffer one by one according to their storage order; after the scheduling of the current subtask finishes, the next subtask is obtained and scheduled, until the scheduling of the last subtask is complete.
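For illustration, a minimal ring-buffer sketch of this dispatch loop; dma_channel_run() is a hypothetical stand-in for programming the target data channel and waiting for the current subtask to finish:

    #include <stddef.h>

    #define SUB_BUF_DEPTH 64

    /* Stand-in for one sub-buffer of the task segmentation buffer. */
    typedef struct {
        int    slot[SUB_BUF_DEPTH];
        size_t head, tail;    /* read and write indices, monotonically increasing */
    } sub_buffer_t;

    extern void dma_channel_run(int channel, int subtask);  /* hypothetical */

    static void drain_in_storage_order(sub_buffer_t *b, int channel)
    {
        while (b->head != b->tail) {
            int subtask = b->slot[b->head % SUB_BUF_DEPTH];
            b->head++;
            /* the next subtask is fetched only after this one completes */
            dma_channel_run(channel, subtask);
        }
    }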
Here, the execution subject of steps 101 to 104 may be a processor of the DMA apparatus.
By adopting the above technical solution, when at least two target tasks with the same priority are encountered, the at least two target tasks are divided into a plurality of subtasks and the subtasks are executed in parallel; that is, parallel execution of the at least two target tasks is realized, and their overall processing efficiency is improved. Meanwhile, compared with the prior art, in which the next task starts only after the current task finishes, the next task can be started earlier, shortening the overall processing delay of the at least two target tasks.
Based on the foregoing embodiments, the present application provides a task scheduling method. Fig. 2 is a second schematic flow diagram of the task scheduling method in the embodiments of the present application, applied to a DMA device.
As shown in fig. 2, the task scheduling method may specifically include:
step 201: determining that at least two target tasks with the same priority exist in the plurality of tasks to be scheduled.
Step 202: acquiring the blocking enabling information of each target task.
The blocking enabling information is used for indicating whether task segmentation is to be performed on the target task.
Step 203: judging, based on the blocking enabling information, whether task segmentation is to be performed on all target tasks. If it is determined that the blocking enabling information of all the target tasks indicates that task segmentation is allowed, execute step 204; if it is determined that the blocking enabling information of at least one target task indicates that task segmentation is not allowed, execute step 207.
Step 204: dividing each target task of the at least two target tasks into a plurality of subtasks.
Step 205: interleaving and storing the plurality of subtasks into the sub-buffer corresponding to the target data channel based on a preset interleaving order.
The target data channel is a data channel corresponding to the priority of the target task.
The DMA device comprises a task segmentation buffer, wherein the task segmentation buffer comprises a plurality of sub buffers, and different sub buffers correspond to different data channels.
Step 206: performing interleaved scheduling on the subtasks through the target data channel according to the storage order of the subtasks in the sub-buffer.
Step 207: sorting and storing all the target tasks into the sub-buffer corresponding to the target data channel based on a preset arrangement order.
Step 208: sequentially scheduling the target tasks through the target data channel according to the storage order of the target tasks in the sub-buffer.
Based on the foregoing embodiments, the embodiments of the present application further provide a task scheduling method that splits a target task. Fig. 3 is a third schematic flow diagram of the task scheduling method in the embodiments of the present application, applied to a DMA device.
As shown in fig. 3, the task scheduling method may specifically include:
step 301: determining that at least two target tasks with the same priority exist in a plurality of tasks to be scheduled.
Here, priorities are configured in advance for the plurality of tasks to be scheduled. When the task scheduling method is executed, the priority of each task is queried to determine whether at least two target tasks with the same priority exist among the tasks to be scheduled. If so, execution continues with step 302; if not, task scheduling is performed directly through the data channel corresponding to the priority of each task. Here, "at least two target tasks with the same priority" can be understood to mean that the task scheduling times of the at least two target tasks are the same.
Step 302: and acquiring the task segmentation length and the first task data length of the target task.
The task segmentation length is the length for segmenting the target task. The first task data length is the data length of the target task.
Step 303: and segmenting the first task data length of the target task according to the task segmentation length of the target task to obtain a plurality of sections of second task data lengths of the target task.
For example, if the first task data length of the target task is 90 and the task segmentation length is 30, the target task is split according to the task segmentation length; that is, the first task data length of the target task is divided into 3 segments.
Step 304: and generating a plurality of sub-tasks after the target task is segmented based on the plurality of sections of the second task data lengths of the target task.
In some embodiments, step 304 may specifically include: acquiring a source address and a destination address of a target task for task scheduling; and generating the subtasks based on the source address and the destination address of the target task and the data length of the second task.
Illustratively, the first task data length of target task 1 is split into 3 segments. Subtask 10 is generated from source address 1 and destination address 1 of target task 1 together with the first segment's second task data length; subtask 11 from source address 1, destination address 1, and the second segment's second task data length; and subtask 12 from source address 1, destination address 1, and the third segment's second task data length.
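A minimal sketch of this splitting step, assuming (as in the example) that every subtask keeps the parent task's source and destination addresses and only the segment length varies; all names are hypothetical:

    #include <stddef.h>
    #include <stdint.h>

    /* Hypothetical subtask descriptor. */
    typedef struct {
        uint32_t src, dst;    /* inherited from the parent target task */
        uint32_t seg_len;     /* one segment of second task data length */
    } subtask_t;

    /* Split a first task data length into segments of at most split_len.
     * With first_len = 90 and split_len = 30 this yields 3 subtasks. */
    static size_t split_task(uint32_t src, uint32_t dst,
                             uint32_t first_len, uint32_t split_len,
                             subtask_t *out, size_t max_out)
    {
        size_t n = 0;
        while (first_len > 0 && n < max_out) {
            uint32_t seg = (first_len < split_len) ? first_len : split_len;
            out[n++] = (subtask_t){ .src = src, .dst = dst, .seg_len = seg };
            first_len -= seg;
        }
        return n;
    }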
Step 305: interleaving and storing the plurality of subtasks into the sub-buffer corresponding to the target data channel based on a preset interleaving order.
The target data channel is a data channel corresponding to the priority of the target task.
The DMA device comprises a task segmentation buffer, wherein the task segmentation buffer comprises a plurality of sub buffers, and different sub buffers correspond to different data channels.
Step 306: performing interleaved scheduling on the subtasks through the target data channel according to the storage order of the subtasks in the sub-buffer.
In some embodiments, each subtask includes: the source address of the target task, the destination address of the target task, and a second task data length. Step 306 may specifically include: acquiring the subtasks according to their storage order in the sub-buffer; acquiring the subtask data corresponding to the second task data length from the first device corresponding to the source address of the target task; and transmitting the subtask data through the target data channel to the second device corresponding to the destination address of the target task.
Specifically, initial position information of the task data transmitted by the first device is obtained; based on that position information, the subtask data corresponding to the second task data length is obtained from the first device, and the end position of the subtask data is recorded. The next portion of subtask data is then obtained starting from the last recorded end position, and so on. Here, the first device may be a hardware accelerator HWA, and the second device may be a shared random access memory SHRAM.
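A sketch of this position bookkeeping, with hwa_read() and shram_write() as hypothetical stand-ins for the accesses to the first device (HWA) and the second device (SHRAM):

    #include <stdint.h>

    /* Per-task progress: the end position of the last chunk is recorded
     * so the next subtask of the same target task resumes there. */
    typedef struct {
        uint32_t offset;
    } task_pos_t;

    extern void hwa_read(uint32_t src, uint32_t off, uint8_t *buf, uint32_t len);
    extern void shram_write(uint32_t dst, uint32_t off, const uint8_t *buf, uint32_t len);

    static void move_chunk(task_pos_t *p, uint32_t src, uint32_t dst, uint32_t len)
    {
        uint8_t buf[256];   /* staging for this sketch only; assumes len <= 256 */
        hwa_read(src, p->offset, buf, len);     /* resume at the recorded position */
        shram_write(dst, p->offset, buf, len);
        p->offset += len;                       /* record the new end position */
    }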
By adopting the above technical solution, when at least two target tasks with the same priority are encountered, the at least two target tasks are divided into a plurality of subtasks and the subtasks are executed in parallel; that is, parallel execution of the at least two target tasks is realized, and their overall processing efficiency is improved. Meanwhile, compared with the prior art, in which the next task starts only after the current task finishes, the next task can be started earlier, shortening the overall processing delay of the at least two target tasks.
Based on the foregoing embodiments, an embodiment of the present application provides a DMA apparatus for performing task scheduling on tasks with the same priority. Fig. 4 is a schematic structural diagram of the DMA apparatus in the embodiment of the present application.
As shown in Fig. 4, the direct memory access DMA apparatus includes: a task queue buffer 40, a priority arbiter 41, a task segmentation module 42, and a data channel module 44, where the task segmentation module 42 includes a task segmentation buffer 43, and the data channel module 44 includes channel 1, channel 2, …, channel n.
Configuration information is configured in advance for each of the plurality of tasks to be scheduled; the configuration information of a task includes a source address, a destination address, a first task data length, blocking enabling information, a task segmentation length, and a priority (not shown in Fig. 4).
The plurality of tasks to be scheduled are buffered in the task queue buffer 40, that is, the configuration information of the plurality of tasks to be scheduled is buffered in the task queue buffer 40.
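For reference, one possible layout of a task queue buffer entry mirroring the configuration information listed above; the field names and widths are illustrative assumptions, not the patent's definitions:

    #include <stdbool.h>
    #include <stdint.h>

    typedef struct {
        uint32_t src_addr;       /* source address */
        uint32_t dst_addr;       /* destination address */
        uint32_t first_len;      /* first task data length */
        bool     split_enable;   /* blocking enabling information */
        uint32_t split_len;      /* task segmentation length */
        uint8_t  priority;
    } task_config_t;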
The priority arbiter 41 is configured to obtain priorities of the multiple tasks to be scheduled from the task queue buffer 40, determine whether at least two target tasks with the same priority exist in the multiple tasks to be scheduled according to the priority of each task, and transmit the at least two target tasks with the same priority to the task segmentation module 42 if the at least two target tasks with the same priority exist; if not, the tasks are transmitted to the data channel module 44, and task scheduling is performed through the data channel corresponding to the priority of each task.
The task segmentation module 42 is configured to: determine, according to the blocking enabling information of each target task, whether the blocking enabling information of all target tasks indicates that task segmentation is to be performed; if so, split the first task data length of each target task according to its task segmentation length to obtain multiple segments of second task data length and generate the subtasks of each target task; and interleave and cache the subtasks of all target tasks into the sub-buffer corresponding to the target data channel in the task segmentation buffer 43.
The data channel module 44 obtains the subtasks from the sub-buffer, obtains the subtask data corresponding to the second task data length from a first device (e.g., a hardware accelerator HWA) corresponding to the source address of the target task to which the subtask belongs, and transmits the subtask data through a target data channel (e.g., channel 2) to a second device (e.g., a shared random access memory SHRAM) corresponding to the destination address.
It should be noted that, in subsequent applications, the number of DMA channels can be extended at will, and the arbitration algorithm of the task segmentation module 42 and the size of the task segmentation buffer 43 can also be modified flexibly, so the scheme has good versatility.
Based on the foregoing embodiments, the workflow of the task segmentation module 42 is now described. In practical applications, the blocking enabling information and the task segmentation length configured for the plurality of tasks to be scheduled are carried in each task's configuration information (Config Message, CFM): a control bit-field signal split_valid is added to the CFM to indicate whether block transmission is to be performed, and if so, the block amount (i.e., the task segmentation length) is indicated by the data bit-field signal split_size in the CFM.
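A hypothetical packing of these two CFM fields; the patent names the signals split_valid and split_size but does not specify their bit widths, so the widths below are assumptions:

    #include <stdint.h>

    typedef struct {
        uint32_t split_valid : 1;   /* 1 = block transmission enabled */
        uint32_t split_size  : 16;  /* block amount, i.e. the task segmentation length */
        uint32_t reserved    : 15;
    } cfm_split_t;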
Fig. 5 is a schematic diagram of a task segmentation workflow in the embodiment of the present application, and as shown in fig. 5, the task segmentation workflow may specifically include:
step 501: and receiving at least two target tasks with the same priority, and reading configuration information of all the target tasks.
Step 502: judging whether the split_valid bit fields in the configuration information of all the target tasks are valid; if all are valid, execute step 503; if at least one split_valid bit field is not valid, execute step 506.
Step 503: splitting all the target tasks according to the split_size in their configuration information to obtain a plurality of subtasks.
Step 504: interleaving and storing the plurality of subtasks into the sub-buffer corresponding to the target data channel based on a preset interleaving order.
The target data channel is a data channel corresponding to the priority of the target task.
The DMA device comprises a task segmentation buffer, the task segmentation buffer comprises a plurality of sub-buffers, and different sub-buffers correspond to different data channels.
Step 505: performing interleaved scheduling on the subtasks through the target data channel according to the storage order of the subtasks in the sub-buffer.
Step 506: sorting and storing all the target tasks into the sub-buffer corresponding to the target data channel based on a preset arrangement order.
Step 507: sequentially scheduling the target tasks through the target data channel according to the storage order of the target tasks in the sub-buffer.
To implement the method of the embodiments of the present application, based on the same inventive concept, an embodiment of the present application further provides a task scheduling apparatus. Fig. 6 is a schematic diagram of the structure of the task scheduling apparatus in the embodiment of the present application, applied to a direct memory access DMA device. As shown in Fig. 6, the task scheduling apparatus 60 includes:
a priority arbitration unit 601, configured to determine that at least two target tasks with the same priority exist in multiple tasks to be scheduled;
a task segmentation unit 602, configured to segment a target task of the at least two target tasks into a plurality of subtasks; performing interleaving sequencing on a plurality of subtasks of the at least two target tasks;
a task scheduling unit 603, configured to perform interleaved scheduling on the ordered subtasks through the target data channel; and the target data channel is a data channel corresponding to the priority of the target task.
By adopting the above technical solution, when at least two target tasks with the same priority are encountered, the at least two target tasks are divided into a plurality of subtasks and the subtasks are executed in parallel; that is, parallel execution of the at least two target tasks is realized, and their overall processing efficiency is improved. Meanwhile, compared with the prior art, in which the next task starts only after the current task finishes, the next task can be started earlier, shortening the overall processing delay of the at least two target tasks.
In some embodiments, the task segmentation unit 602 is specifically configured to obtain the blocking enabling information of each target task, and to divide each of the at least two target tasks into a plurality of subtasks when it is determined that the blocking enabling information of all target tasks indicates that task segmentation is allowed.
In some embodiments, the task segmentation unit 602 is specifically configured to sort all target tasks when it is determined that the blocking enabling information of at least one target task indicates that task segmentation is not allowed; the task scheduling unit 603 is further configured to sequentially perform task scheduling on the sorted target tasks through the target data channel.
In some embodiments, the task segmentation unit 602 is specifically further configured to obtain a task segmentation length and a first task data length of the target task; segmenting the first task data length of the target task according to the task segmentation length of the target task to obtain a plurality of sections of second task data lengths of the target task; and generating a plurality of sub-tasks after the target task is segmented based on the plurality of sections of the second task data lengths of the target task.
In some embodiments, the task segmentation unit 602 is further configured to specifically obtain a source address and a destination address of the target task for task scheduling; and generating the subtasks based on the source address and the destination address of the target task and the data length of the second task.
In some embodiments, the DMA device includes a task segmentation buffer, where the task segmentation buffer includes a plurality of sub-buffers and different sub-buffers correspond to different data channels; the task segmentation unit 602 is specifically further configured to store the plurality of subtasks into the sub-buffer corresponding to the target data channel in an interleaved manner based on a preset interleaving order.
In some embodiments, each subtask includes: the source address of the target task, the destination address of the target task, and a second task data length; the task scheduling unit 603 is specifically further configured to acquire the subtask data corresponding to the second task data length from the first device corresponding to the source address of the target task, and to transmit the subtask data through the target data channel to the second device corresponding to the destination address of the target task.
Fig. 7 is a schematic diagram of a constituent structure of a direct memory access device in an embodiment of the present application, and as shown in fig. 7, the direct memory access device 70 includes: a processor 701 and a memory 702 configured to store a computer program capable of running on the processor;
wherein the processor 701 is configured to perform the method steps in the previous embodiments when running the computer program.
Of course, in actual practice, the various components of the direct memory access device are coupled together by a bus system 703, as shown in FIG. 7. It is understood that the bus system 703 is used to enable communications among the components. The bus system 703 includes a power bus, a control bus, and a status signal bus in addition to a data bus. For clarity of illustration, however, the various buses are identified in fig. 7 as bus system 703.
In practical applications, the processor may be at least one of an Application Specific Integrated Circuit (ASIC), a Digital Signal Processing Device (DSPD), a Programmable Logic Device (PLD), a Field Programmable Gate Array (FPGA), a controller, a microcontroller, and a microprocessor. It is understood that the electronic devices for implementing the above processor functions may be other devices, and the embodiments of the present application are not limited in particular.
The Memory may be a volatile Memory (volatile Memory), such as a Random-Access Memory (RAM); or a non-volatile Memory (non-volatile Memory), such as a Read-Only Memory (ROM), a flash Memory (flash Memory), a Hard Disk (HDD), or a Solid-State Drive (SSD); or a combination of the above types of memories and provides instructions and data to the processor.
In an exemplary embodiment, the present application further provides a computer-readable storage medium for storing a computer program.
Optionally, the computer-readable storage medium may be applied to any one of the methods in the embodiments of the present application, and the computer program enables a computer to execute a corresponding process implemented by a processor in each method in the embodiments of the present application, which is not described herein again for brevity.
In the several embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other manners. The above-described device embodiments are merely illustrative, for example, the division of the unit is only a logical functional division, and there may be other division ways in actual implementation, such as: multiple units or components may be combined, or may be integrated into another system, or some features may be omitted, or not implemented. In addition, the coupling, direct coupling or communication connection between the components shown or discussed may be through some interfaces, and the indirect coupling or communication connection between the devices or units may be electrical, mechanical or in other forms.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on multiple network units; some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, all functional units in the embodiments of the present invention may be integrated into one processing module, or each unit may be separately used as one unit, or two or more units may be integrated into one unit; the integrated unit may be implemented in the form of hardware, or in the form of hardware plus a software functional unit. Those of ordinary skill in the art will understand that: all or part of the steps for implementing the method embodiments may be implemented by hardware related to program instructions, and the program may be stored in a computer readable storage medium, and when executed, the program performs the steps including the method embodiments; and the aforementioned storage medium includes: a mobile storage device, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and other various media capable of storing program codes.
The methods disclosed in the several method embodiments provided in the present application may be combined arbitrarily without conflict to obtain new method embodiments.
Features disclosed in several of the product embodiments provided in the present application may be combined in any combination to yield new product embodiments without conflict.
The features disclosed in the several method or apparatus embodiments provided in the present application may be combined arbitrarily, without conflict, to arrive at new method embodiments or apparatus embodiments.
The above description is only for the specific embodiments of the present invention, but the scope of the present invention is not limited thereto, and any person skilled in the art can easily think of the changes or substitutions within the technical scope of the present invention, and shall cover the scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (10)

1. A task scheduling method applied to a Direct Memory Access (DMA) device is characterized by comprising the following steps:
determining that at least two target tasks with the same priority exist in a plurality of tasks to be scheduled;
dividing a target task of the at least two target tasks into a plurality of subtasks;
performing interleaving sequencing on a plurality of subtasks of the at least two target tasks;
performing interleaving scheduling on the sequenced subtasks through a target data channel; and the target data channel is a data channel corresponding to the priority of the target task.
2. The method of claim 1, wherein prior to said dividing a target task of said at least two target tasks into a plurality of subtasks, the method further comprises:
acquiring the blocking enabling information of each target task;
and when it is determined that the blocking enabling information of all the target tasks indicates that task segmentation is allowed, segmenting a target task of the at least two target tasks into a plurality of subtasks.
3. The method of claim 2, further comprising:
when it is determined that the blocking enabling information of at least one target task indicates that task segmentation is not allowed, sorting all target tasks;
and sequentially carrying out task scheduling on the sequenced target tasks through the target data channel.
4. The method of claim 1, wherein the dividing a target task of the at least two target tasks into a plurality of subtasks comprises:
acquiring a task segmentation length and a first task data length of the target task;
segmenting the first task data length of the target task according to the task segmentation length of the target task to obtain a plurality of sections of second task data lengths of the target task;
and generating a plurality of subtasks after the target task is segmented, based on the multiple sections of the second task data length of the target task.
5. The method according to claim 4, wherein the generating of the plurality of split subtasks of the target task based on the plurality of segments of the second task data length of the target task includes:
acquiring a source address and a destination address of the target task for task scheduling;
and generating the subtasks based on the source address and the destination address of the target task and the data length of the second task.
6. The method of claim 1, wherein the DMA device comprises a task segmentation buffer, wherein the task segmentation buffer comprises a plurality of sub-buffers, and wherein different sub-buffers correspond to different data channels;
the interleaving and sequencing the plurality of subtasks of the at least two target tasks comprises:
and interleaving and storing the plurality of subtasks into the sub buffers corresponding to the target data channels based on a preset interleaving sequence.
7. The method of claim 6, wherein the subtasks include: the source address of the target task, the destination address of the target task, and a second task data length; the interleaved scheduling of the sequenced subtasks through the target data channel includes:
acquiring the subtasks according to the storage order of the subtasks in the sub-buffer;
acquiring subtask data corresponding to the length of the second task data from first equipment corresponding to the source address of the target task;
and transmitting the subtask data to second equipment corresponding to the destination address of the target task through the target data channel.
8. A task scheduling apparatus applied to a direct memory access DMA device, the apparatus comprising:
the priority arbitration unit is used for determining that at least two target tasks with the same priority exist in a plurality of tasks to be scheduled;
the task segmentation unit is used for segmenting a target task in the at least two target tasks into a plurality of subtasks; performing interleaving sequencing on a plurality of subtasks of the at least two target tasks;
the task scheduling unit is used for performing interleaved scheduling on the sequenced subtasks through the target data channel; and the target data channel is a data channel corresponding to the priority of the target task.
9. A DMA device, characterized in that the DMA device comprises: a processor and a memory configured to store a computer program capable of running on the processor,
wherein the processor is configured to perform the steps of the method of any one of claims 1 to 7 when running the computer program.
10. A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out the steps of the method according to any one of claims 1 to 7.
CN202210388036.6A 2022-04-13 2022-04-13 Task scheduling method, device, equipment and storage medium Pending CN114780215A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210388036.6A CN114780215A (en) 2022-04-13 2022-04-13 Task scheduling method, device, equipment and storage medium

Publications (1)

Publication Number Publication Date
CN114780215A (en)

Family

ID=82429602

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210388036.6A Pending CN114780215A (en) 2022-04-13 2022-04-13 Task scheduling method, device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN114780215A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115964155A (en) * 2023-03-16 2023-04-14 燧原智能科技(成都)有限公司 On-chip data processing hardware, on-chip data processing method and AI platform
CN115964155B (en) * 2023-03-16 2023-05-30 燧原智能科技(成都)有限公司 On-chip data processing hardware, on-chip data processing method and AI platform


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination