CN116028180A - Central processing unit and task processing method - Google Patents

Central processing unit and task processing method

Info

Publication number
CN116028180A
Authority
CN
China
Prior art keywords
task
processing
memory
processing module
scheduler
Prior art date
Legal status
Pending
Application number
CN202211737962.6A
Other languages
Chinese (zh)
Inventor
葛蕾
刘瑞楷
Current Assignee
Shanghai Xinlianxin Intelligent Technology Co ltd
Original Assignee
Shanghai Xinlianxin Intelligent Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Shanghai Xinlianxin Intelligent Technology Co ltd filed Critical Shanghai Xinlianxin Intelligent Technology Co ltd
Priority to CN202211737962.6A priority Critical patent/CN116028180A/en
Publication of CN116028180A publication Critical patent/CN116028180A/en
Pending legal-status Critical Current

Classifications

    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00: Energy efficient computing, e.g. low power processors, power management or thermal management

Abstract

An embodiment of the invention relates to a central processing unit (CPU) and a task processing method. The CPU comprises a first control module, a scheduler, and a pipeline processing module. The first control module determines a preset processing duration for each task and sends a switching instruction to the scheduler after determining that the duration for which the pipeline processing module has processed a first task reaches the first task's preset processing duration. After receiving the switching instruction, the scheduler determines that the task following the first task is a second task and submits the second task's second processing condition to the pipeline processing module; the second processing condition is the processing condition of the second task obtained the last time the pipeline processing module processed it. The pipeline processing module then processes the second task according to the second processing condition. The method ensures orderly switching among multiple tasks, improves the orderliness of task processing, ensures that every task is processed in time, and improves task processing efficiency.

Description

Central processing unit and task processing method
Technical Field
The embodiment of the invention relates to the technical field of processors, in particular to a central processing unit, a task processing method, a task processing device, computing equipment and a computer readable storage medium.
Background
A central processing unit (Central Processing Unit, CPU) often needs to process multiple tasks. If the tasks are selected at random for processing, the processing becomes disordered and processing efficiency is low.
In summary, the embodiments of the present application provide a CPU for improving the efficiency of task processing.
Disclosure of Invention
An embodiment of the invention provides a central processing unit (CPU) for improving the efficiency of task processing.
In a first aspect, an embodiment of the present invention provides a central processing unit CPU, including: the system comprises a first control module, a scheduler and a pipeline processing module;
the first control module is used for determining the preset processing time length of each task, and sending a switching instruction to the scheduler after determining that the processing time length of the pipeline processing module for processing the first task accords with the preset processing time length of the first task;
the scheduler is used for determining that the next task of the first task is a second task after receiving the switching instruction, and submitting a second processing condition of the second task to the pipeline processing module; the second processing condition is a processing condition of the second task obtained after the pipeline processing module processes the second task last time;
and the pipeline processing module is used for processing the second task according to the second processing condition.
In this technical solution, a first control module and a scheduler are provided inside the CPU. The first control module determines the preset processing duration of each task, monitors how long the pipeline processing module has been processing each task, and sends a switching instruction to the scheduler after determining that the processing duration of the first task has reached the first task's preset processing duration. After receiving the switching instruction, the scheduler determines that the task following the first task is the second task and submits the second task's second processing condition to the pipeline processing module, so that the pipeline processing module can continue processing the second task on the basis of that condition. In this way, when the CPU needs to process multiple tasks, it can switch among them in an orderly manner, which improves the orderliness of task processing, ensures that every task is processed in time, and improves task processing efficiency.
In some embodiments, the second processing condition includes a second task state of the second task, and a second control module and a first memory are provided inside the scheduler;
The first memory is used for storing task states of all tasks;
the second control module is used for:
after receiving the switching instruction, determining the next task of the first task as the second task;
acquiring a second task state of the second task from the first memory;
submitting the second task state to the pipeline processing module;
the pipeline processing module is specifically used for:
and processing the second task according to the second task state.
In this technical solution, the scheduler contains a second control module and a first memory, and the first memory can store the task state of each task. After receiving the switching instruction, the second control module can determine that the task following the first task is the second task, obtain the second task's task state from the task states stored in the first memory, and submit it to the pipeline processing module, so that the pipeline processing module processes the second task based on that state. In this scheme, the scheduler is implemented as a piece of hardware inside the CPU, and the scheduling procedure is executed by this hardware scheduler; compared with scheduler software, this improves scheduling efficiency and hence task processing efficiency. Moreover, with a hardware scheduler the CPU no longer needs to read scheduler software to carry out scheduling, which saves the time and resources the CPU would otherwise waste running that software and reduces power consumption.
In some embodiments, the second processing condition further includes a second task instruction of the second task; the first memory is also used for storing task instructions of each task;
after determining that the next task of the first task is the second task, the second control module is further configured to:
acquiring a second task instruction of the second task from the first memory;
submitting the second task instruction to the pipeline processing module;
the pipeline processing module is specifically used for:
and processing the second task according to the second task instruction and the second task state.
In some embodiments, the second control module is further configured to, prior to submitting the second task state and the second task instruction to the pipeline processing module:
the method comprises the steps of obtaining a first task state of the first task which is being processed by the pipeline processing module and a first task instruction for processing the first task, and storing the first task state and the first task instruction in the first memory.
In some embodiments, the scheduler further comprises a second memory; the second memory is used for storing a task queue of each task which needs to be processed by the pipeline processing module;
The scheduler is specifically configured to:
and determining that the next task of the first task is a second task through the second memory.
In some embodiments, the second control module is specifically configured to:
updating the second task state to a first CPU register of the pipeline processing module;
and updating the address of the storage space storing the second task instruction into a second CPU register of the pipeline processing module.
In a second aspect, an embodiment of the present invention further provides a task processing method, including:
a first control module determines a preset processing duration for each task, and sends a switching instruction to a scheduler after determining that the duration for which a pipeline processing module has processed a first task reaches the first task's preset processing duration;
after receiving the switching instruction, the scheduler determines that the next task of the first task is a second task, and submits a second processing condition of the second task to the pipeline processing module; the second processing condition is a processing condition of the second task obtained after the pipeline processing module processes the second task last time;
and the pipeline processing module processes the second task according to the second processing condition.
In some embodiments, the second processing condition includes a second task state of the second task, and a second control module and a first memory are provided inside the scheduler;
the scheduler determining, after receiving the switching instruction, that the next task of the first task is a second task and submitting a second processing condition of the second task to the pipeline processing module includes:
after receiving the switching instruction, the second control module determines that the next task of the first task is the second task;
the second control module acquires a second task state of the second task from the first memory;
the second control module submits the second task state to the pipeline processing module;
the pipeline processing module processing the second task according to the second processing condition includes:
the pipeline processing module processes the second task according to the second task state.
In some embodiments, the second processing condition further includes a second task instruction of the second task; the first memory is also used for storing task instructions of each task;
after determining that the next task of the first task is the second task, the method further comprises:
The second control module acquires a second task instruction of the second task from the first memory;
the second control module submits the second task instruction to the pipeline processing module;
the pipeline processing module processing the second task according to the second processing condition includes:
and the pipeline processing module processes the second task according to the second task instruction and the second task state.
In some embodiments, before the second control module submits the second task state and the second task instruction to the pipeline processing module, the method further comprises:
the second control module acquires a first task state of the first task being processed by the pipeline processing module and a first task instruction for processing the first task, and stores the first task state and the first task instruction in the first memory.
In some embodiments, the scheduler further comprises a second memory; the second memory is used for storing a task queue of each task which needs to be processed by the pipeline processing module;
the scheduler determining that a next task to the first task is a second task, comprising:
The scheduler determines that a next task of the first task is a second task through the second memory.
In some embodiments, the second control module submitting the second task state to the pipeline processing module comprises:
updating the second task state to a first CPU register of the pipeline processing module;
the second control module submitting the second task instruction to the pipeline processing module, comprising:
the second control module updates an address of a memory space storing the second task instruction into a second CPU register of the pipeline processing module.
In a third aspect, embodiments of the present invention also provide a computing device, comprising:
a memory for storing a computer program;
and a processor, configured to call the computer program stored in the memory and execute, according to the obtained program, the task processing method described in any of the above manners.
In a fourth aspect, embodiments of the present invention further provide a computer-readable storage medium storing a computer-executable program for causing a computer to execute the task processing method set forth in any one of the above-described modes.
Drawings
To describe the technical solutions of the embodiments of the present invention more clearly, the drawings required for describing the embodiments are briefly introduced below. Apparently, the drawings described below show only some embodiments of the present invention, and a person of ordinary skill in the art may derive other drawings from them without inventive effort.
FIG. 1 is a schematic diagram of a system architecture of a CPU according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of a system architecture of a CPU according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of a system architecture of a CPU according to an embodiment of the present invention;
FIG. 4 is a schematic diagram of a system architecture of a CPU according to an embodiment of the present invention;
FIG. 5 is a schematic flow chart of a task processing method according to an embodiment of the present invention;
FIG. 6 is a schematic diagram of a task processing device according to an embodiment of the present invention;
fig. 7 is a schematic structural diagram of a computer device according to an embodiment of the present invention.
Detailed Description
To make the purposes, embodiments, and advantages of the present application clearer, the exemplary embodiments of the present application are described below clearly and completely with reference to the accompanying drawings. Apparently, the described exemplary embodiments are only some, rather than all, of the embodiments of the present application.
Based on the exemplary embodiments described herein, all other embodiments that a person of ordinary skill in the art can obtain without inventive effort fall within the scope of the appended claims. Furthermore, while the disclosure is presented in terms of one or more exemplary embodiments, it should be understood that individual aspects of the disclosure may each constitute a complete embodiment on their own.
It should be noted that the brief description of the terms in the present application is only for convenience in understanding the embodiments described below, and is not intended to limit the embodiments of the present application. Unless otherwise indicated, these terms should be construed in their ordinary and customary meaning.
The terms "first", "second", "third", and the like in the description, the claims, and the drawings above are used to distinguish similar objects or entities, and do not necessarily describe a particular order or sequence unless otherwise indicated. It should be understood that terms so used are interchangeable where appropriate, so that, for example, the embodiments of the application can be implemented in orders other than those illustrated or described herein.
Furthermore, the terms "comprise" and "have", and any variants thereof, are intended to cover a non-exclusive inclusion, so that a product or apparatus that comprises a list of elements is not necessarily limited to those elements and may include other elements not expressly listed or inherent to the product or apparatus.
The embodiments of the application provide a task processing method for a CPU. When one CPU needs to process multiple tasks, it divides time into slices, one per task. For example, the CPU processes task A for one second, task B for the next second, task C for the second after that, and so on. At each switch it must store the task state of the current task and obtain the task state of the next task, so that it can process the next task based on that state. Suppose the time for processing each task is 1 s. The CPU is currently processing task A; after 1 s of processing, task A is not yet finished, and the CPU switches to task B. The CPU must save task A's current task state, i.e. how far it has progressed, so that the next time task A is scheduled, processing can continue from that state. The CPU must also obtain task B's task state and then process task B on that basis, again for 1 s.
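The time-slicing behavior just described can be sketched as a simplified software model (a hypothetical illustration for intuition only; the structure and function names are invented and do not come from the patent):

```c
#include <assert.h>
#include <string.h>

#define NUM_TASKS 3

/* Hypothetical per-task record: `progress` plays the role of the saved
 * task state, `total_work` is the work the task needs in total. */
typedef struct {
    int progress;
    int total_work;
} task_state_t;

static task_state_t tasks[NUM_TASKS];

/* Run task `id` for up to `slice` work units, resuming from its saved
 * state; the updated progress is the state saved back afterwards. */
int run_slice(int id, int slice)
{
    task_state_t *t = &tasks[id];
    int remaining = t->total_work - t->progress;
    int step = remaining < slice ? remaining : slice;
    t->progress += step;   /* "process" the task for one slice */
    return t->progress;    /* this value is the saved task state */
}

/* Rotate through the tasks until all are finished; returns how many
 * slices were consumed in total. */
int round_robin(int slice)
{
    int slices = 0, done = 0, cur = 0;
    while (done < NUM_TASKS) {
        if (tasks[cur].progress < tasks[cur].total_work) {
            run_slice(cur, slice);
            slices++;
            if (tasks[cur].progress == tasks[cur].total_work)
                done++;
        }
        cur = (cur + 1) % NUM_TASKS;  /* switch to the next task */
    }
    return slices;
}
```

Here each task's saved `progress` stands in for the stored task state: it is written back when the slice expires and read again when the task's turn comes around.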
Fig. 1 is a schematic diagram of the system architecture of a CPU according to an embodiment of the present application; the CPU is used to implement the task processing method described above. The CPU includes a first control module, a scheduler, and a pipeline processing module.
The first control module is used for determining the preset processing time length of each task, and sending a switching instruction to the scheduler after determining that the processing time length of the pipeline processing module for processing the first task accords with the preset processing time length of the first task;
the scheduler is used for determining that the next task of the first task is a second task after receiving the switching instruction, and submitting a second processing condition of the second task to the pipeline processing module; the second processing condition is a processing condition of the second task obtained after the pipeline processing module processes the second task last time;
and the pipeline processing module is used for processing the second task according to the second processing condition.
In this technical solution, a first control module and a scheduler are provided inside the CPU. The first control module determines the preset processing duration of each task, monitors how long the pipeline processing module has been processing each task, and sends a switching instruction to the scheduler after determining that the processing duration of the first task has reached the first task's preset processing duration. After receiving the switching instruction, the scheduler determines that the task following the first task is the second task and submits the second task's second processing condition to the pipeline processing module, so that the pipeline processing module can continue processing the second task on the basis of that condition. In this way, when the CPU needs to process multiple tasks, it can switch among them in an orderly manner, which improves the orderliness of task processing, ensures that every task is processed in time, and improves task processing efficiency.
The first control module may determine the preset processing duration of each task in the following ways.
One possible way is to configure the preset processing duration of each task in advance. For example, the CPU processes 5 tasks in total, and the preset processing duration of each task is 0.1 s. When the first control module determines that the pipeline processing module has been processing the first task for the full 0.1 s preset for that task, it sends a switching instruction to the scheduler regardless of whether the task has finished. After receiving the switching instruction, the scheduler determines that the next task after that task is the second task, and submits the second task's previous processing condition to the pipeline processing module, so that the pipeline processing module processes the second task according to that condition instead of continuing with the first task. Task switching is thus achieved, and each module plays its own role to keep task processing orderly.
Of course, the preset processing durations of the tasks need not be equal. For example, task A's preset processing duration may be set to 0.1 s, task B's to 0.2 s, and those of the other 3 tasks to 0.3 s. These values are merely examples.
In another possible implementation, an input interface is provided so that the user can flexibly change the preset processing duration of each task. For example, if the user determines that task A's workload is large at a certain time, the user can lengthen task A's preset processing duration, for example from 0.1 s to 0.2 s.
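As a sketch, the preset durations could be kept in a small per-task table that the input interface updates at run time (a hypothetical model; the patent does not specify any data structure, and all names here are invented):

```c
#include <assert.h>

/* Hypothetical table of preset processing durations, one entry per
 * task, in microseconds. The durations need not be equal, and an input
 * interface may change them at run time, e.g. lengthening a slice when
 * a task's workload grows. */
enum { TASK_COUNT = 5 };

static unsigned preset_us[TASK_COUNT] = {
    100000, 200000, 300000, 300000, 300000  /* task A 0.1 s, task B 0.2 s, rest 0.3 s */
};

/* Called by the input interface to adjust one task's preset duration. */
void set_preset(int task, unsigned us) { preset_us[task] = us; }
unsigned get_preset(int task)          { return preset_us[task]; }
```

The first control module would compare each task's elapsed processing time against its entry in this table before issuing a switching instruction.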
In some embodiments, before submitting the second processing condition of the second task to the pipeline processing module, the scheduler may also obtain and store the first processing condition of the first task being processed by the pipeline processing module. In this way, the next time the pipeline processing module is to continue processing the first task, the first processing condition can be submitted to it, and the pipeline processing module can continue the first task according to that condition.
For example, the preset processing duration of each task is 0.1 s. When the first control module determines that the pipeline processing module has been processing task A for the full 0.1 s preset for task A, it sends a switching instruction to the scheduler regardless of whether task A has finished. After receiving the switching instruction, the scheduler determines that the next task after task A is task B and stores task A's first processing condition. For example, task A is to move the data items numbered 1 to 50, and its first processing condition is that items 1 to 30 have been moved while items 31 to 50 have not. The scheduler stores this condition so that the next time the pipeline processing module is to process task A, the condition can be submitted to it and the module can continue with items 31 to 50.
The scheduler also obtains task B's second processing condition from its own storage space and submits it to the pipeline processing module. For example, task B is to calculate the revenue for each month; the second processing condition, i.e. the condition after the last time task B was processed, is that the revenue for months 1 to 3 has been calculated while that for months 4 to 12 has not. Based on the second processing condition, the pipeline processing module can continue calculating the revenue for months 4 to 12 instead of continuing to move data for task A.
Therefore, the orderly processing of the tasks is realized, and the efficiency of the task processing is improved.
In the above manner of task processing by the CPU, if the scheduler is a piece of software, the CPU performs the switch from the current task to the next task by executing the code of that scheduler software.
In some application scenarios, although the CPU needs to process many tasks, each task is relatively simple and short-running, so the CPU is given few hardware resources, which are nevertheless sufficient to process the tasks themselves. To switch tasks, however, the CPU must also use scheduler software and read its code. If the scheduler software's code volume is larger than that of a task, the scheduler occupies a longer CPU time slice than a task does and consumes more CPU resources than a task does, which wastes resources severely; running the scheduler software also increases overall power consumption.
For example, a microcontroller's CPU has few hardware resources. Say the CPU needs to process 5 tasks, each occupying the CPU for 0.1 s, i.e. a task switch occurs every 0.1 s. At each switch the CPU calls the scheduler software, and reading the scheduler's code to perform the switch occupies a 1 s CPU time slice. In such a scenario the hardware resources are sufficient if the CPU only processes the 5 tasks; but each switch takes ten times as long as processing a task and consumes a great deal of hardware resources, so the CPU's resources are wasted on task switching and may no longer suffice to run the whole microcontroller. In addition, the scheduler software's large code volume raises the microcontroller's overall power consumption, and the long time spent running the scheduler software delays task processing.
Therefore, the scheduler can be implemented as hardware placed inside the CPU, so that the CPU no longer needs to devote a time slice to reading and running the scheduler software's code, which saves CPU power consumption, i.e. the microcontroller's power consumption. Moreover, hardware processes faster than software, so the task switching speed, and hence task processing efficiency, can be improved.
Fig. 2 shows a system architecture of a CPU in which the scheduler is hardware, with a second control module and a first memory built into the scheduler.
The processing condition described above may include a task state. The second processing condition may be the second task state of the second task.
The first memory stores the task state of each task. For example, the task states corresponding to task A and task B are stored in the first memory: task A is to move the data items numbered 1 to 50, and its task state is that items 1 to 30 have been moved while items 31 to 50 have not; task B is to calculate the revenue for each month, and its task state is that the revenue for months 1 to 3 has been calculated while that for months 4 to 12 has not.
For example, while the pipeline processing module is processing task A, the first control module sends a switching instruction to the second control module once it determines that the pipeline processing module's processing duration for the current task has reached that task's preset processing duration. After receiving the switching instruction, the second control module determines that the next task after the first task is the second task; obtains the first task state of the first task being processed by the pipeline processing module and stores it in the first memory; and obtains the second task state of the second task from the first memory and submits it to the pipeline processing module. Concretely, the second control module determines that the next task after task A is task B, then obtains task A's task state from the pipeline processing module, i.e. how far task A has progressed. Say task A's task state is now that items 31 to 40 have been moved while items 41 to 50 have not; the second control module updates this state into the first memory (the state previously stored there was that items 1 to 30 had been moved and items 31 to 50 had not). The second control module then obtains task B's state from the first memory, namely that the revenue for months 1 to 3 has been calculated and that for months 4 to 12 has not, and submits task B's task state to the pipeline processing module so that the module processes task B based on it.
In this technical solution, the scheduler contains a second control module and a first memory, and the first memory can store the task state of each task. After receiving the switching instruction, the second control module can determine that the task following the first task is the second task, obtain the second task's task state from the task states stored in the first memory, and submit it to the pipeline processing module, so that the pipeline processing module processes the second task based on that state. In this scheme, the scheduler is implemented as a piece of hardware inside the CPU, and the scheduling procedure is executed by this hardware scheduler; compared with scheduler software, this improves scheduling efficiency and hence task processing efficiency. Moreover, with a hardware scheduler the CPU no longer needs to read scheduler software to carry out scheduling, which saves the time and resources the CPU would otherwise waste running that software and reduces power consumption.
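The save-then-restore sequence in the example above can be modeled in a few lines (a software sketch of the described hardware behavior; the names and the round-robin next-task rule are assumptions, not details from the patent):

```c
#include <assert.h>

#define NTASK 2  /* task A = 0, task B = 1 */

static int first_memory[NTASK]; /* saved task state per task */
static int pipeline_state;      /* task state held by the pipeline processing module */
static int current_task;        /* task currently being processed */

/* Handle a switching instruction: write the running task's state back
 * into the first memory, pick the next task, and load that task's saved
 * state into the pipeline. Returns the id of the task now running. */
int on_switch(void)
{
    first_memory[current_task] = pipeline_state;  /* save e.g. task A's progress */
    current_task = (current_task + 1) % NTASK;    /* next task, e.g. task B */
    pipeline_state = first_memory[current_task];  /* restore task B's progress */
    return current_task;
}
```

Starting with task A running (`pipeline_state = 40`, i.e. items up to 40 moved) and task B's saved state at 3 (months 1 to 3 computed), a switch writes 40 back into the first memory and loads 3 into the pipeline.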
In another possible implementation, the processing condition described above may include both a task state and a task instruction. That is, the second processing condition may include a second task state and a second task instruction of the second task. The pipeline processing module executes a task according to its task instruction.
Therefore, the first memory in the scheduler configured as hardware stores not only the task state of each task but also the task instruction of each task.
For example, the first memory stores the task states and task instructions corresponding to task A and task B respectively. Task A is to carry the data numbered 1-50; its task state is that the data numbered 1-30 have been carried and the data numbered 31-50 have not; its task instruction record is that execution has reached instruction line 10. Task B is to calculate the income of each month; its task state is that the income for months 1-3 has been calculated and the income for months 4-12 has not; its task instruction record is that execution has reached instruction line 11.
For example, when the pipeline processing module is processing the task a, the first control module sends a switching instruction to the second control module after determining that the processing duration of the pipeline processing module for processing the current task has reached the preset processing duration of the task. After receiving the switching instruction, the second control module determines that the next task of the first task is the second task; acquiring a first task state of the first task being processed by the pipeline processing module and a first task instruction for processing the first task, and storing the first task state and the first task instruction in the first memory; and acquiring a second task state of the second task and a second task instruction for processing the second task from the first memory, and submitting the second task state and the second task instruction to the pipeline processing module.
For example, if the second control module determines that the next task after task A is task B, it obtains the task state and task instruction of task A from the pipeline processing module, that is, how far task A has been processed and which of its instructions is in use. Suppose the task state of task A is that the data numbered 31-40 have been carried and the data numbered 41-50 have not, and its processing instruction is the instruction at line 20; the second control module updates the task state and processing instruction of task A into the first memory. The second control module then acquires the task state and processing instruction of task B from the first memory: the income for months 1-3 has been calculated, the income for months 4-12 has not, and the processing instruction is the instruction at line 11. The second control module submits the task state and processing instruction of task B to the pipeline processing module so that the pipeline processing module processes task B on that basis. For example, the pipeline fetches the instruction at line 12 and uses it to calculate the income for month 4.
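When the processing condition carries a task instruction as well as a task state, the switch must save and restore them as a pair. The following is a hedged Python sketch of that extended handshake; the function name and the dict-based representations of the first memory and the pipeline are assumptions made for illustration, with the instruction position reduced to a line number as in the example above.

```python
def save_and_restore(first_memory, pipeline, current_task, next_task):
    """Save the (task state, instruction line) pair of the outgoing task and
    restore the incoming task's pair.

    `first_memory` maps task name -> (task_state, instruction_line);
    `pipeline` is a dict with "state" and "line" keys standing in for the
    pipeline processing module's view of the running task.
    """
    # Step 1: capture how far the current task got and which instruction it reached.
    first_memory[current_task] = (pipeline["state"], pipeline["line"])
    # Step 2: hand the next task's saved state and instruction position to the pipeline.
    pipeline["state"], pipeline["line"] = first_memory[next_task]
```

With the numbers from the example, switching away from task A at line 20 records `({data 31-40 carried}, 20)` and reloads task B at line 11, from which the pipeline continues to line 12.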
In some embodiments, FIG. 3 illustrates a system architecture diagram of one possible CPU. The scheduler is hardware, and a second control module, a first memory and a second memory are arranged in the scheduler.
The second memory is used for storing a task queue of the tasks that need to be processed by the pipeline processing module. The scheduler is specifically configured to determine, through the second memory, that the next task after the first task is the second task.
For example, if the current task queue in the second memory contains task A, task B and task C, the 3 tasks are processed cyclically in sequence until one of them is fully processed, at which point it is deleted from the task queue. When the second control module of the scheduler receives a switching instruction from the first control module while task A is being processed, it obtains from the second memory that the next task after task A is task B, and so determines that the second task is task B. Likewise, if the task being processed is task B, the next task obtained from the second memory is task C; and if the task being processed is task C, the next task is task A. The 3 tasks are thus processed in a loop until one is fully processed and removed from the queue. For example, task A is to carry the data numbered 1-50; when all the carrying is finished, task A may be deleted from the second memory, so that only task B and task C remain in the task queue to be processed cyclically.
When the first control module determines that a task is to be added, the new task is stored in the second memory of the scheduler. For example, when the first control module adds a task D, task D is stored in the second memory, so that task A, task B, task C and task D are then processed cyclically in sequence.
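The second memory's circular queue behavior — successor lookup with wrap-around, insertion of new tasks, and deletion of completed ones — can be sketched as a small Python model. The class and method names are illustrative assumptions; a `deque` stands in for the hardware queue storage.

```python
from collections import deque


class TaskQueueModel:
    """Software model of the second memory's circular task queue."""

    def __init__(self, tasks):
        self.queue = deque(tasks)

    def next_after(self, current):
        # The successor in the cycle, wrapping from the tail back to the head.
        i = self.queue.index(current)
        return self.queue[(i + 1) % len(self.queue)]

    def add(self, task):
        self.queue.append(task)   # a newly added task joins the cycle

    def remove(self, task):
        self.queue.remove(task)   # a fully processed task leaves the cycle
```

With the queue [task A, task B, task C], the successor of task C wraps to task A; after adding task D, task C's successor becomes task D; and once task A is fully processed and removed, the remaining tasks keep cycling.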
In order to better explain the embodiments of the present invention, the flow of the task processing will be described below in a specific implementation scenario. Fig. 4 shows a system architecture diagram of one possible CPU. The scheduler is hardware, and a second control module, a first memory and a second memory are arranged in the scheduler. The pipeline processing module includes a first CPU register and a second CPU register.
The first memory may be a process register cache or a shadow register. The embodiments of the present application are not limited in this regard. The first memory may store therein a task state of each task and an address of a memory space of task instructions for executing the task.
The pipeline processing module is used for acquiring the task state from the first CPU register, acquiring from the second CPU register the address of the storage space where the task instruction is located, fetching the corresponding task instruction from that address, and executing the task instruction on the basis of the task state. For example, the pipeline processing module reads address 1111 from the second CPU register and obtains the task instruction through this address, say the instruction at line 10 of the whole instruction sequence for processing task A; it obtains the task state from the first CPU register and executes the task instruction against that state to complete the data carrying.
The pipeline processing module is not aware of which task it is processing; it simply reads the task state from the first CPU register, fetches the task instruction via the address in the second CPU register, and executes it.
After the first control module determines that the processing duration of the pipeline processing module for the current task has reached that task's preset processing duration, it sends a switching instruction to the second control module. The switching instruction may be triggered by an interrupt or a timer. The embodiments of the present application are not limited in this regard.
And after receiving the switching instruction, the second control module determines the next task of the first task as the second task in a task queue stored in a second memory. For example, the second control module determines that the task being processed by the pipeline processing module is task a, and determines that the next task to task a is task B in the second memory.
The second control module obtains a first task state of a first task currently being processed by the pipeline processing module from a first CPU register, and obtains a first address of a storage space where a first task instruction of the first task currently being processed by the pipeline processing module is located from a second CPU register; and updating the first task state of the first task and the first address corresponding to the first task instruction into the first memory.
The second control module acquires a second task state of a second task and a second address of a storage space where a second task instruction is located from the first memory; and updating the second task state of the second task into the first CPU register, and updating the address of the storage space storing the second task instruction into the second CPU register.
The pipeline processing module does not know that the task is switched, reads the address of the task instruction from the second CPU register, and acquires the task instruction from the address; the task state is read from the first CPU register, and the task instruction is executed on the basis of the task state. Thus, the pipeline processing module is already processing the second task.
The above approach realizes orderly task switching. Adopting a hardware scheduler improves scheduling efficiency and therefore task processing efficiency. Moreover, with a hardware scheduler the CPU does not need to read scheduler software to execute the scheduling process, which reduces the time and resources consumed by running scheduler software and lowers power consumption.
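The full register-level switch described in this scenario can be sketched end to end: the second control module swaps the two CPU registers through the first memory and picks the successor from the task queue, while the pipeline only ever sees the register contents. The Python model below is an illustrative assumption (names like `PipelineRegisters` and `context_switch` are not from the patent); the instruction address is modeled as a plain integer.

```python
class PipelineRegisters:
    """The only interface the pipeline sees: a first CPU register holding the
    task state and a second CPU register holding the task instruction address."""

    def __init__(self, state_reg, addr_reg):
        self.state_reg = state_reg
        self.addr_reg = addr_reg


def context_switch(first_memory, task_queue, regs, current_task):
    """Model of the second control module's work after a switching instruction:
    swap the register pair through the first memory and pick the successor
    from the task queue; the switch is transparent to the pipeline."""
    i = task_queue.index(current_task)
    next_task = task_queue[(i + 1) % len(task_queue)]
    # Save the outgoing task's register pair into the first memory.
    first_memory[current_task] = (regs.state_reg, regs.addr_reg)
    # Load the incoming task's saved pair back into the two registers.
    regs.state_reg, regs.addr_reg = first_memory[next_task]
    return next_task
```

After the call, the pipeline keeps doing exactly what it always does — read the state from the first register, fetch the instruction at the address in the second register — and is thereby already processing the second task.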
Based on the same technical concept, fig. 5 exemplarily shows a task processing method provided by an embodiment of the present invention. As shown in fig. 5, the method includes:
Step 501, a first control module determines preset processing time length of each task, and after determining that the processing time length of a pipeline processing module for processing a first task accords with the preset processing time length of the first task, a switching instruction is sent to a scheduler;
step 502, after receiving the switching instruction, the scheduler determines that a next task of the first task is a second task, and submits a second processing condition of the second task to the pipeline processing module; the second processing condition is a processing condition of the second task obtained after the pipeline processing module processes the second task last time;
in step 503, the pipeline processing module processes the second task according to the second processing situation.
In some embodiments, the second processing condition includes a second task state of the second task; the scheduler is internally provided with a second control module and a first memory;
after receiving the switching instruction, the scheduler determines that the next task of the first task is a second task, submits a second processing condition of the second task to the pipeline processing module, and the method comprises the following steps:
after receiving the switching instruction, the second control module determines that the next task of the first task is the second task;
The second control module acquires a second task state of the second task from the first memory;
the second control module submits the second task state to the pipeline processing module;
the pipeline processing module processes the second task according to the second processing condition, and includes:
the pipeline processing module processes the second task according to the second task state.
In some embodiments, the second processing condition further includes a second task instruction of the second task; the first memory is also used for storing task instructions of each task;
after determining that the next task of the first task is the second task, the method further comprises:
the second control module acquires a second task instruction of the second task from the first memory;
the second control module submits the second task instruction to the pipeline processing module;
the pipeline processing module processes the second task according to the second processing condition, and includes:
and the pipeline processing module processes the second task according to the second task instruction and the second task state.
In some embodiments, before the second control module submits the second task state and the second task instruction to the pipeline processing module, further comprising:
The second control module acquires a first task state of the first task being processed by the pipeline processing module and a first task instruction for processing the first task, and stores the first task state and the first task instruction in the first memory.
In some embodiments, the scheduler further comprises a second memory; the second memory is used for storing a task queue of each task which needs to be processed by the pipeline processing module;
the scheduler determining that a next task to the first task is a second task, comprising:
the scheduler determines that a next task of the first task is a second task through the second memory.
In some embodiments, the second control module submitting the second task state to the pipeline processing module comprises:
updating the second task state to a first CPU register of the pipeline processing module;
the second control module submitting the second task instruction to the pipeline processing module, comprising:
the second control module updates an address of a memory space storing the second task instruction into a second CPU register of the pipeline processing module.
Based on the same technical concept, fig. 6 exemplarily shows a task processing device provided by an embodiment of the present invention. As shown in fig. 6, the device includes:
the first control module 601 is configured to determine a preset processing duration of each task, and send a switching instruction to the scheduler after determining that a processing duration of the pipeline processing module for processing a first task meets the preset processing duration of the first task;
the scheduler 602 is configured to determine, after receiving the switch instruction, that a task next to the first task is a second task, and submit a second processing condition of the second task to the pipeline processing module; the second processing condition is a processing condition of the second task obtained after the pipeline processing module processes the second task last time;
the pipeline processing module 603 is configured to process the second task according to the second processing situation.
In some embodiments, the second processing condition includes a second task state of the second task; the scheduler 602 has a second control module and a first memory;
the first memory is used for storing task states of all tasks;
the second control module is used for:
After receiving the switching instruction, determining the next task of the first task as the second task;
acquiring a second task state of the second task from the first memory;
submitting the second task state to the pipeline processing module 603;
the pipeline processing module 603 is specifically configured to:
and processing the second task according to the second task state.
In some embodiments, the second processing condition further includes a second task instruction of the second task; the first memory is also used for storing task instructions of each task;
after determining that the next task of the first task is the second task, the second control module is further configured to:
acquiring a second task instruction of the second task from the first memory;
submitting the second task instruction to the pipeline processing module 603;
the pipeline processing module 603 is specifically configured to:
and processing the second task according to the second task instruction and the second task state.
In some embodiments, the second control module is further configured to, prior to submitting the second task state and the second task instruction to the pipeline processing module 603:
A first task state of the first task being processed by the pipeline processing module 603 and a first task instruction for processing the first task are acquired, and the first task state and the first task instruction are stored in the first memory.
In some embodiments, the scheduler 602 further includes a second memory; the second memory is used for storing a task queue of each task to be processed by the pipeline processing module 603;
the scheduler 602 is specifically configured to:
and determining that the next task of the first task is a second task through the second memory.
In some embodiments, the second control module is specifically configured to:
updating the second task state into a first CPU register of the pipeline processing module 603;
the address of the memory space storing the second task instruction is updated into the second CPU register of the pipeline processing module 603.
Based on the same technical concept, the embodiment of the present application provides a computer device, as shown in fig. 7, including at least one processor 701 and a memory 702 connected to the at least one processor. The specific connection medium between the processor 701 and the memory 702 is not limited in the embodiment of the present application; in fig. 7, the processor 701 and the memory 702 are connected by a bus, for example. A bus may be divided into an address bus, a data bus, a control bus, and so on.
In the embodiment of the present application, the memory 702 stores instructions executable by the at least one processor 701, and the at least one processor 701 may perform the steps of the task processing method by executing the instructions stored in the memory 702.
The processor 701 is the control center of the computer device, and various interfaces and lines may be used to connect various parts of the computer device; it performs task processing by running or executing the instructions stored in the memory 702 and invoking the data stored in the memory 702. In some embodiments, the processor 701 may include one or more processing units, and may integrate an application processor and a modem processor, where the application processor primarily handles the operating system, user interface, application programs, and the like, and the modem processor primarily handles wireless communication. It will be appreciated that the modem processor may also not be integrated into the processor 701. In some embodiments, the processor 701 and the memory 702 may be implemented on the same chip, or they may be implemented separately on their own chips.
The processor 701 may be a general purpose processor such as a Central Processing Unit (CPU), digital signal processor, application specific integrated circuit (Application Specific Integrated Circuit, ASIC), field programmable gate array or other programmable logic device, discrete gate or transistor logic, discrete hardware components, and may implement or perform the methods, steps, and logic blocks disclosed in embodiments of the present application. The general purpose processor may be a microprocessor or any conventional processor or the like. The steps of a method disclosed in connection with the embodiments of the present application may be embodied directly in a hardware processor for execution, or in a combination of hardware and software modules in the processor for execution.
The memory 702, as a non-volatile computer-readable storage medium, may be used to store non-volatile software programs, non-volatile computer-executable programs, and modules. The memory 702 may include at least one type of storage medium, for example flash memory, hard disk, multimedia card, card memory, random access memory (Random Access Memory, RAM), static random access memory (Static Random Access Memory, SRAM), programmable read-only memory (Programmable Read Only Memory, PROM), read-only memory (Read-Only Memory, ROM), electrically erasable programmable read-only memory (Electrically Erasable Programmable Read-Only Memory, EEPROM), magnetic memory, magnetic disk, optical disk, and the like. The memory 702 may also be, but is not limited to, any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer. The memory 702 in the embodiments of the present application may also be a circuit or any other device capable of implementing a storage function, for storing program instructions and/or data.
Based on the same technical concept, the embodiment of the present invention also provides a computer-readable storage medium storing a computer-executable program for causing a computer to execute the method of task processing listed in any of the above modes.
It will be appreciated by those skilled in the art that embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to the application. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
It will be apparent to those skilled in the art that various modifications and variations can be made in the present application without departing from the spirit or scope of the application. Thus, if such modifications and variations of the present application fall within the scope of the claims and the equivalents thereof, the present application is intended to cover such modifications and variations.

Claims (10)

1. A central processing unit CPU, comprising: the system comprises a first control module, a scheduler and a pipeline processing module;
the first control module is used for determining the preset processing time length of each task, and sending a switching instruction to the scheduler after determining that the processing time length of the pipeline processing module for processing the first task accords with the preset processing time length of the first task;
the scheduler is used for determining that the next task of the first task is a second task after receiving the switching instruction, and submitting a second processing condition of the second task to the pipeline processing module; the second processing condition is a processing condition of the second task obtained after the pipeline processing module processes the second task last time;
and the pipeline processing module is used for processing the second task according to the second processing condition.
2. The CPU of claim 1 wherein the second processing condition includes a second task state of the second task; the scheduler is internally provided with a second control module and a first memory;
the first memory is used for storing task states of all tasks;
the second control module is used for:
After receiving the switching instruction, determining the next task of the first task as the second task;
acquiring a second task state of the second task from the first memory;
submitting the second task state to the pipeline processing module;
the pipeline processing module is specifically used for:
and processing the second task according to the second task state.
3. The CPU of claim 2 wherein the second processing condition further includes a second task instruction of the second task; the first memory is also used for storing task instructions of each task;
after determining that the next task of the first task is the second task, the second control module is further configured to:
acquiring a second task instruction of the second task from the first memory;
submitting the second task instruction to the pipeline processing module;
the pipeline processing module is specifically used for:
and processing the second task according to the second task instruction and the second task state.
4. The CPU of claim 3 wherein the second control module is further configured to, prior to submitting the second task state and the second task instruction to the pipeline processing module:
The method comprises the steps of obtaining a first task state of the first task which is being processed by the pipeline processing module and a first task instruction for processing the first task, and storing the first task state and the first task instruction in the first memory.
5. The CPU of claim 1 wherein the scheduler further comprises a second memory; the second memory is used for storing a task queue of each task which needs to be processed by the pipeline processing module;
the scheduler is specifically configured to:
and determining that the next task of the first task is a second task through the second memory.
6. The CPU of claim 1 wherein the second control module is specifically configured to:
updating the second task state to a first CPU register of the pipeline processing module;
and updating the address of the storage space storing the second task instruction into a second CPU register of the pipeline processing module.
7. A method of task processing, comprising:
the method comprises the steps that a first control module determines preset processing time length of each task, and after determining that the processing time length of a pipeline processing module for processing a first task accords with the preset processing time length of the first task, a switching instruction is sent to a scheduler;
After receiving the switching instruction, the scheduler determines that the next task of the first task is a second task, and submits a second processing condition of the second task to the pipeline processing module; the second processing condition is a processing condition of the second task obtained after the pipeline processing module processes the second task last time;
and the pipeline processing module processes the second task according to the second processing condition.
8. The method of claim 7, wherein the second processing instance comprises a second task state of the second task; the scheduler is internally provided with a second control module and a first memory;
after receiving the switching instruction, the scheduler determines that the next task of the first task is a second task, submits a second processing condition of the second task to the pipeline processing module, and the method comprises the following steps:
after receiving the switching instruction, the second control module determines that the next task of the first task is the second task;
the second control module acquires a second task state of the second task from the first memory;
The second control module submits the second task state to the pipeline processing module.
9. A computing device, comprising:
a memory for storing a computer program;
a processor for invoking a computer program stored in said memory, performing the method according to any of claims 7 to 8 in accordance with the obtained program.
10. A computer-readable storage medium, characterized in that the computer-readable storage medium stores a computer-executable program for causing a computer to execute the method of any one of claims 7 to 8.
CN202211737962.6A 2022-12-30 2022-12-30 Central processing unit and task processing method Pending CN116028180A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211737962.6A CN116028180A (en) 2022-12-30 2022-12-30 Central processing unit and task processing method


Publications (1)

Publication Number Publication Date
CN116028180A true CN116028180A (en) 2023-04-28

Family

ID=86070413

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211737962.6A Pending CN116028180A (en) 2022-12-30 2022-12-30 Central processing unit and task processing method

Country Status (1)

Country Link
CN (1) CN116028180A (en)

Similar Documents

Publication Publication Date Title
KR101660659B1 (en) Executing subroutines in a multi-threaded processing system
JP5611756B2 (en) Program flow control
CN104123304A (en) Data-driven parallel sorting system and method
US20230084523A1 (en) Data Processing Method and Device, and Storage Medium
CN111552614A (en) Statistical method and device for CPU utilization rate
CN111078394A (en) GPU thread load balancing method and device
CN115033352A (en) Task scheduling method, device and equipment for multi-core processor and storage medium
CN112416606A (en) Task scheduling method and device and electronic equipment
CN112181522A (en) Data processing method and device and electronic equipment
EP1502182B1 (en) Automatic task distribution in scalable processors
CN101873257B (en) Method and system for receiving messages
CN109408118B (en) MHP heterogeneous multi-pipeline processor
CN109388429B (en) Task distribution method for MHP heterogeneous multi-pipeline processor
CN116028180A (en) Central processing unit and task processing method
CN113296788B (en) Instruction scheduling method, device, equipment and storage medium
CN115391011A (en) Method, device, apparatus, medium, and program for scheduling timing task
CN115543317A (en) Front-end page development method and device
CN112860597B (en) Neural network operation system, method, device and storage medium
RU2010140853A (en) SYSTEM AND METHOD OF DISTRIBUTED CALCULATIONS
CN110515718B (en) Batch task breakpoint continuous method, device, equipment and medium
US10503541B2 (en) System and method for handling dependencies in dynamic thread spawning for a multi-threading processor
CN112231018A (en) Method, computing device, and computer-readable storage medium for offloading data
CN112463327B (en) Method and device for quickly switching logic threads, CPU chip and server
CN113806025B (en) Data processing method, system, electronic device and storage medium
KR100639146B1 (en) Data processing system having a cartesian controller

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination