CN113360263B - Task processing method, system and related equipment - Google Patents

Task processing method, system and related equipment

Info

Publication number
CN113360263B
Authority
CN
China
Prior art keywords
task
processed
tasks
wait
subtasks
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110635131.7A
Other languages
Chinese (zh)
Other versions
CN113360263A (en)
Inventor
刘芳
李文国
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Spreadtrum Communications Tianjin Co Ltd
Original Assignee
Spreadtrum Communications Tianjin Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Spreadtrum Communications Tianjin Co Ltd filed Critical Spreadtrum Communications Tianjin Co Ltd
Priority to CN202110635131.7A priority Critical patent/CN113360263B/en
Publication of CN113360263A publication Critical patent/CN113360263A/en
Application granted granted Critical
Publication of CN113360263B publication Critical patent/CN113360263B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/48Program initiating; Program switching, e.g. by interrupt
    • G06F9/4806Task transfer initiation or dispatching
    • G06F9/4843Task transfer initiation or dispatching by program, e.g. task dispatcher, supervisor, operating system
    • G06F9/4881Scheduling strategies for dispatcher, e.g. round robin, multi-level priority queues
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/50Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5005Allocation of resources, e.g. of the central processing unit [CPU] to service a request
    • G06F9/5027Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals
    • G06F9/5038Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals considering the execution order of a plurality of tasks, e.g. taking priority or time dependency constraints into consideration
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2209/00Indexing scheme relating to G06F9/00
    • G06F2209/50Indexing scheme relating to G06F9/50
    • G06F2209/5018Thread allocation

Abstract

The present invention relates to the field of data processing, and in particular to a task processing method, system and related device. The method includes: receiving a first task to be processed; splitting the first task to be processed into at least two subtasks, where the execution time of the first subtask, the one with the longest execution time among the at least two subtasks, is t1; determining a waiting time T_wait of the first task to be processed according to the number k of existing tasks to be processed in a task queue and the receiving interval T_interval between two adjacent tasks to be processed; if T_wait is greater than or equal to t1, discarding the first task to be processed; and if T_wait is less than t1, adding the at least two subtasks of the first task to be processed to the task queue. By discarding tasks whose T_wait is greater than or equal to t1, the waiting time of subsequently received tasks is reduced and the real-time performance of task processing is improved.

Description

Task processing method, system and related equipment
[ technical field ]
The present invention relates to the field of data processing, and in particular, to a method, a system, and a related device for processing a task.
[ background of the invention ]
A processor is the operation and control core of a computer system and the final execution unit for information processing and program execution. The processor generates a corresponding computing task for the content to be computed and then processes that task. As the real-time requirements on task processing increase, the traditional single-threaded processing method can no longer meet them, so a multi-threaded processing method is often used instead, in which multiple unrelated tasks are processed simultaneously by multiple threads. However, when the system is busy, tasks are often issued faster than they are processed, so a new task is issued before the previous one has finished and must wait for it to complete. As the number of tasks grows, the waiting time of each task gradually increases, task processing becomes slower, and real-time performance degrades.
[ summary of the invention ]
In order to solve the above problem, an embodiment of the present invention provides a task processing method: after a first task to be processed is received, its waiting time is determined from the number of existing tasks to be processed in the task queue and the receiving interval between two adjacent tasks to be processed, and whether to discard the first task is decided according to that waiting time.
In a first aspect, an embodiment of the present invention provides a task processing method, including:
receiving a first task to be processed;
splitting the first task to be processed into at least two subtasks, where the execution time of the first subtask, the one with the longest execution time among the at least two subtasks, is t1;
determining a waiting time T_wait of the first task to be processed according to the number k of existing tasks to be processed in a task queue and the receiving interval T_interval between two adjacent tasks to be processed;
if T_wait is greater than or equal to t1, discarding the first task to be processed;
if T_wait is less than t1, adding the at least two subtasks of the first task to be processed to the task queue.
In the embodiment of the invention, the first task to be processed is first split into several subtasks, its waiting time is determined from the number of existing tasks to be processed in the task queue and the receiving interval between two adjacent tasks to be processed, and when that waiting time exceeds the execution time of the longest subtask the first task is discarded. This avoids situations such as slow task response and system congestion caused by tasks piling up and waiting too long.
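As a minimal illustration of this decision logic, the Python sketch below shows how a newly received task might be split, its waiting time estimated, and the task either enqueued or discarded. The Subtask type, the split() callback, the 0.1-second receiving interval and all other names are assumptions of the sketch, not definitions taken from the patent itself.

```python
from collections import deque
from dataclasses import dataclass
from typing import Callable, List, Optional

@dataclass
class Subtask:
    name: str
    exec_time: float            # seconds

T_INTERVAL = 0.1                # fixed receiving interval between adjacent tasks (assumed)
task_queue: deque = deque()     # subtask lists of the tasks already accepted

def handle_incoming(task: object, split: Callable[[object], List[Subtask]]) -> Optional[List[Subtask]]:
    """Enqueue the new task's subtasks, or discard the task if it would wait too long."""
    subtasks = split(task)                      # split into at least two subtasks
    t1 = max(s.exec_time for s in subtasks)     # execution time of the longest (first) subtask
    k = len(task_queue)                         # number of tasks already waiting
    t_wait = k * (t1 - T_INTERVAL)              # waiting time, as derived in the description
    if t_wait >= t1:
        return None                             # discard the first task to be processed
    task_queue.append(subtasks)
    return subtasks
```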
In a possible implementation, apart from the first subtask, the sum t2 of the execution times of the remaining subtasks of the at least two subtasks of the first task to be processed is less than t1.
In a possible implementation, t1 is greater than T_interval.
In a possible implementation, the k existing tasks to be processed in the task queue have each been split into at least two subtasks, and the execution time of the subtask with the longest execution time of each task in the task queue is t1;
determining the waiting time T_wait of the first task to be processed according to the number k of existing tasks to be processed in the task queue and the receiving interval T_interval between two adjacent tasks to be processed includes:
determining the waiting time T_wait of the first task to be processed according to k, t1 and T_interval.
In one possible implementation, after adding the at least two subtasks of the first to-be-processed task to the task queue, the method further includes:
if k is 0, executing the first subtask, and after the first subtask is executed, executing the rest subtasks of the first task to be processed and the subtask with the longest execution time in the second task to be processed in parallel, wherein the receiving time of the second task to be processed is after the first task to be processed and is separated by T Spacer
If k is greater than 0, wait for T Wait for And then, executing the first subtask and the rest subtasks except the subtask with the longest execution time in the third task to be processed in parallel, wherein the receiving time of the third task to be processed is before the first task to be processed and is separated by T Spacer
In a possible implementation, before determining the waiting time T_wait of the first task to be processed, the method further includes:
determining whether k is greater than a first threshold;
if k is greater than the first threshold, discarding the first task to be processed and generating corresponding alarm information.
In one possible implementation, the method further includes:
monitoring interrupt flag bit information of the task queue, and discarding the existing tasks to be processed in the task queue when the interrupt flag bit information indicates that the task queue is abnormal.
In a second aspect, an embodiment of the present invention provides a task processing system, including:
a receiving module, configured to receive a first task to be processed;
a disassembling module, configured to split the first task to be processed into at least two subtasks, where the execution time of the first subtask, the one with the longest execution time among the at least two subtasks, is t1;
a processing module, configured to determine a waiting time T_wait of the first task to be processed according to the number k of existing tasks to be processed in a task queue and the receiving interval T_interval between two adjacent tasks to be processed;
a discarding module, configured to discard the first task to be processed if T_wait is greater than or equal to t1;
the processing module is further configured to add the at least two subtasks of the first task to be processed to the task queue if T_wait is less than t1.
In a third aspect, an embodiment of the present invention provides an electronic device, including:
At least one processor; and
at least one memory communicatively coupled to the processor, wherein:
the memory stores program instructions executable by the processor, and the processor can call the program instructions to perform the method of the first aspect.
In a fourth aspect, an embodiment of the present invention provides a computer-readable storage medium storing computer instructions for causing a computer to perform the method of the first aspect.
It should be understood that the second to fourth aspects of the embodiment of the present invention are consistent with the technical solution of the first aspect of the embodiment of the present invention, and the beneficial effects obtained by the aspects and the corresponding possible implementation manners are similar, and are not described again.
[ description of the drawings ]
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings needed to be used in the embodiments will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and it is obvious for those skilled in the art to obtain other drawings based on these drawings without creative efforts.
Fig. 1 is a flowchart of a task processing method according to an embodiment of the present invention;
Fig. 2 is a schematic diagram of a task processing method according to an embodiment of the present invention;
Fig. 3 is a schematic diagram of another task processing method according to an embodiment of the present invention;
Fig. 4 is a schematic diagram of another task processing method according to an embodiment of the present invention;
Fig. 5 is a schematic structural diagram of a task processing system according to an embodiment of the present invention;
Fig. 6 is a schematic structural diagram of an electronic device according to an embodiment of the present invention.
[ detailed description ]
For better understanding of the technical solutions in the present specification, the following detailed description of the embodiments of the present invention is provided with reference to the accompanying drawings.
It should be understood that the described embodiments are only a few embodiments of the present specification, and not all embodiments. All other embodiments obtained by a person skilled in the art based on the embodiments in the present specification without any inventive step are within the scope of the present invention.
The terminology used in the embodiments of the invention is for the purpose of describing particular embodiments only and is not intended to be limiting of the specification. As used in the description of the invention and the appended claims, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise.
In the embodiment of the invention, the real-time performance of subsequent task processing is ensured by discarding any task to be processed whose waiting time is not less than the execution time of the first subtask.
Fig. 1 is a flowchart of a task processing method according to an embodiment of the present invention. The task processing method can be applied to a system containing a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), a Micro Processor Unit (MPU) or another processing device. As shown in Fig. 1, the method includes:
step 101, receiving a first task to be processed. The first task to be processed may be any computing task. For example, in the video recording process, the processing of the image frame data of each frame can be regarded as a task to be processed. In some embodiments, the receiving interval between two adjacent tasks to be processed is a fixed value T Spacing(s) . For example, a task to be processed is received every 0.1 second.
Step 102, splitting the first task to be processed into at least two subtasks, where the execution time of the first subtask, the one with the longest execution time among the at least two subtasks, is t1. The first task to be processed may be split according to how it is processed. For example, in the first task to be processed, the CPU first performs a computation to obtain an intermediate result, and the GPU then performs a graphics computation on that intermediate result to obtain the final result of the first task to be processed. The first task to be processed may then be split into two subtasks: the part executed on the CPU becomes one subtask, and the part executed on the GPU becomes the other.
In some embodiments, the first task to be processed may also be split according to its processing steps. For example, splitting the first task to be processed yields a first subtask, a second subtask and a third subtask, which are associated with one another: the second subtask is blocked until the first subtask has finished executing, and only then does it start; likewise, the third subtask is blocked until the second subtask has finished, and only then does it start. The sum of the execution times of the second and third subtasks should be smaller than the execution time of the first subtask.
In a specific example, when the first task to be processed is split into only two subtasks, a first subtask and a second subtask, the two subtasks are associated: the first subtask is executed before the second subtask, and the execution time of the first subtask must be greater than that of the second subtask.
In a specific example, if each task to be processed is split into more than two subtasks, then apart from the first subtask, the sum t2 of the execution times of the remaining subtasks of the at least two subtasks of the first task to be processed must be less than t1.
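A hypothetical sketch of such a CPU/GPU split is shown below; it reuses the Subtask type assumed in the earlier sketch, and the helper name and timing checks are illustrative only.

```python
from typing import List

def split_cpu_gpu_task(cpu_time: float, gpu_time: float) -> List[Subtask]:
    """Split a task into its CPU part (executed first) and its GPU part."""
    subtasks = [Subtask("cpu_compute", cpu_time),   # executed first on the CPU
                Subtask("gpu_compute", gpu_time)]   # executed afterwards on the GPU
    t1 = subtasks[0].exec_time                      # the first subtask should be the longest
    t_rest = sum(s.exec_time for s in subtasks[1:])
    assert t1 == max(s.exec_time for s in subtasks), "the subtask executed first must be the longest"
    assert t_rest < t1, "the remaining subtasks must together be shorter than the first subtask"
    return subtasks
```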
Step 103, determining the waiting time T_wait of the first task to be processed according to the number k of existing tasks to be processed in the task queue and the receiving interval T_interval between two adjacent tasks to be processed. The task queue is a buffer space that records the tasks to be processed; it orders them by the time at which they were added, and processing threads are called according to that order to execute the tasks in the queue one after another. The receiving interval between two adjacent tasks to be processed should be less than the execution time of the first subtask, i.e. t1 > T_interval; otherwise the first subtask of the first task to be processed would finish before the next task was added to the queue, wasting processing resources and prolonging the total processing time of the system. Specifically, when the k existing tasks in the task queue have each been split into at least two subtasks, and the execution time of the longest subtask of each of those tasks is t1, the waiting time T_wait of the first task to be processed can be determined from k, t1 and T_interval.
In a specific example, the existing tasks in the task queue can be processed in parallel by multiple threads. For example, each task is split into a first subtask and a second subtask, where the execution time of the first subtask is t1, the execution time of the second subtask is t2, and t1 > T_interval > t2. If the first task to be processed (the i-th task) is the first task in the queue, its first subtask can be executed as soon as it is processed. Because t1 > T_interval, the (i+1)-th task is added to the queue before the first subtask of the i-th task finishes, so the (i+1)-th task has to wait t1 - T_interval. Once the first subtask of the i-th task has finished, the second subtask of the i-th task and the first subtask of the (i+1)-th task, which are not associated with each other, can be executed simultaneously, so the execution time of each task's second subtask is hidden. When the (i+2)-th task is added to the queue, it must wait (i+2-1)(t1 - T_interval) from the moment it is added until it starts executing; since i = 1, the waiting time of the (i+2)-th task is 2(t1 - T_interval). After the first subtask of the (i+1)-th task finishes, two threads process the second subtask of the (i+1)-th task and the first subtask of the (i+2)-th task in parallel. Therefore, the waiting time of the first task to be processed can be determined from the number k of existing tasks in the queue, the execution time t1 of the first subtask, and T_interval as k(t1 - T_interval).
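Under the same assumptions (each task splits into a long first subtask and a short second subtask, with t1 > T_interval > t2), the waiting-time formula can be checked with a small numeric example; the concrete values below are illustrative only.

```python
t1, t2, T_interval = 0.30, 0.05, 0.10    # seconds; illustrative values with t1 > T_interval > t2

def wait_time(k: int) -> float:
    """Waiting time of a task that finds k tasks already in the queue."""
    return k * (t1 - T_interval)

for k in range(4):
    print(f"k = {k}: the task waits {wait_time(k):.2f} s before its first subtask starts")
# prints 0.00 s, 0.20 s, 0.40 s and 0.60 s; the wait grows by (t1 - T_interval) per queued task
```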
In some embodiments, before the waiting time T_wait of the first task to be processed is determined, it can also be determined whether the number k of existing tasks to be processed in the task queue is greater than a first threshold. If k is greater than the first threshold, the task queue is full or a system fault has caused tasks to pile up, so the first task to be processed is discarded and corresponding alarm information is generated.
Step 104, if T_wait is greater than or equal to t1, discarding the first task to be processed. For example, during video recording each frame of image data corresponds to a task to be processed, and discarding the task corresponding to an occasional frame does not affect the user or the recording. Under the multithreaded parallel processing described above, the waiting time of each task in the queue grows with the number of queued tasks, so tasks whose T_wait is greater than or equal to t1 can be discarded to reduce the waiting time of tasks subsequently added to the queue. Because the waiting time of the first task to be processed is already greater than or equal to the execution time of the first subtask, discarding it reduces the waiting time of the next task from (k+1)(t1 - T_interval) to (t1 - T_interval).
Step 105, if T_wait is less than t1, adding the at least two subtasks of the first task to be processed to the task queue, and calling processing threads to execute the tasks in the task queue in order. After the at least two subtasks of the first task to be processed are added to the task queue, if there is no other task in the queue, k is 0. The first subtask of the first task to be processed can then be executed immediately, and once it has finished, the remaining subtasks of the first task to be processed are executed in parallel with the longest subtask of a second task to be processed, where the second task to be processed is received after the first task to be processed, separated by T_interval. As shown in Fig. 2, the i-th task is the first task to be processed and the (i+1)-th task is the second task to be processed; the remaining subtasks of the i-th task are executed in parallel with the first subtask of the (i+1)-th task, so the execution time of each task's subtasks other than the first subtask is hidden.
If there are tasks in the task queue, i.e. k is greater than 0, then after waiting for T_wait, the first subtask of the first task to be processed is executed in parallel with the remaining subtasks, other than the longest subtask, of a third task to be processed. The third task to be processed is received before the first task to be processed, separated by T_interval. As shown in Fig. 3, the i-th task is the first task to be processed and the (i-1)-th task is the third task to be processed, so the remaining subtasks of the (i-1)-th task are processed in parallel with the first subtask of the i-th task.
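The pipelined execution of Figs. 2 and 3 can be sketched, under the assumptions of the earlier sketches, with two worker threads: the first subtask of each task overlaps the remaining subtasks of the task before it. time.sleep stands in for the real computation, and the Subtask type is the illustrative one assumed earlier.

```python
import time
from concurrent.futures import ThreadPoolExecutor
from typing import List

def run(subtask: Subtask) -> None:
    time.sleep(subtask.exec_time)        # placeholder for the actual work
    print(f"finished {subtask.name}")

def run_pipeline(tasks: List[List[Subtask]]) -> None:
    """tasks[i] lists the subtasks of the i-th queued task, longest subtask first."""
    with ThreadPoolExecutor(max_workers=2) as pool:
        prev_rest: List[Subtask] = []
        for first, *rest in tasks:
            # the first subtask of this task overlaps the remaining subtasks
            # of the previous task, hiding their execution time
            futures = [pool.submit(run, s) for s in prev_rest]
            futures.append(pool.submit(run, first))
            for f in futures:
                f.result()
            prev_rest = rest
        for s in prev_rest:              # tail: remaining subtasks of the last task
            run(s)
```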
In some embodiments, when the task to be processed is split into more than two subtasks, the sum of the execution times of the remaining subtasks, other than the first subtask with the longest execution time, may exceed the execution time of the first subtask, but only when the remaining subtasks include subtasks that are not associated with one another and the individual execution time of each remaining subtask is less than the execution time of the first subtask. As shown in Fig. 4, since the second to fourth subtasks of the i-th task are not associated with the first subtask of the (i+1)-th task, once the first subtask of the i-th task has finished, the remaining second to fourth subtasks of the i-th task can be executed in parallel with the first subtask of the (i+1)-th task.
In some embodiments, if the first to-be-processed task is the last task in the task queue, after the first subtask of the first to-be-processed task is executed, the remaining subtasks of the first to-be-processed task are executed directly. After determining that the execution of the remaining subtasks of the first task to be processed is completed, the current processing procedure may be ended.
In some embodiments, an interrupt flag bit may also be set for the task queue to identify abnormal conditions of the queue. By monitoring the interrupt flag bit information of the task queue, the existing tasks to be processed in the queue can be discarded when the flag indicates that the queue is abnormal. For example, the tasks in the queue may be image-frame processing tasks of a video recording session; when the recording function crashes or the user manually exits it, the image-frame processing tasks already added to the queue can be discarded.
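A possible sketch of this interrupt-flag handling is shown below; using a threading.Event as the flag, and clearing the task_queue from the first sketch, are assumptions of the sketch rather than requirements of the patent.

```python
import threading

queue_abnormal = threading.Event()       # "interrupt flag bit" of the task queue

def on_producer_aborted() -> None:
    """Called when, for example, the recording feature crashes or the user exits it."""
    queue_abnormal.set()

def drain_if_abnormal() -> None:
    """Discard every task still waiting in the queue once the flag is raised."""
    if queue_abnormal.is_set():
        task_queue.clear()
        queue_abnormal.clear()
```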
Corresponding to the task processing method, an embodiment of the present invention provides a task processing system, as shown in fig. 5, where the system includes: a receiving module 501, a disassembling module 502, a processing module 503 and a discarding module 504.
The receiving module 501 is configured to receive a first task to be processed.
The disassembling module 502 is configured to split the first task to be processed into at least two subtasks, where the execution time of the first subtask, the one with the longest execution time among the at least two subtasks, is t1.
The processing module 503 is configured to determine the waiting time T_wait of the first task to be processed according to the number k of existing tasks to be processed in the task queue and the receiving interval T_interval between two adjacent tasks to be processed.
The discarding module 504 is configured to discard the first task to be processed if T_wait is greater than or equal to t1.
The processing module 503 is further configured to add the at least two subtasks of the first task to be processed to the task queue if T_wait is less than t1.
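For illustration only, the four modules of Fig. 5 could be mapped onto a single class as sketched below; the class and method names are assumptions, and Subtask and T_INTERVAL come from the earlier sketches.

```python
from collections import deque
from typing import Callable, List

class TaskProcessingSystem:
    def __init__(self, split: Callable[[object], List[Subtask]], t_interval: float = T_INTERVAL):
        self.split = split                 # disassembling module 502
        self.queue: deque = deque()        # task queue backing the processing module 503
        self.t_interval = t_interval

    def receive(self, task: object) -> bool:
        """Receiving module 501: accept a task, then enqueue or discard it."""
        subtasks = self.split(task)
        t1 = max(s.exec_time for s in subtasks)
        t_wait = len(self.queue) * (t1 - self.t_interval)   # processing module 503
        if t_wait >= t1:                   # discarding module 504
            return False
        self.queue.append(subtasks)
        return True
```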
The task processing system provided in the embodiment shown in fig. 5 may be used to implement the technical solutions of the method embodiments shown in fig. 1 to fig. 4 in this specification, and reference may be further made to the relevant descriptions in the method embodiments for implementation principles and technical effects.
Fig. 6 is a schematic structural diagram of an electronic device according to an embodiment of the present invention, and as shown in fig. 6, the electronic device may include at least one processor; and at least one memory communicatively coupled to the processor, wherein: the memory stores program instructions executable by the processor, and the processor calls the program instructions to execute the task processing method provided by the embodiments shown in fig. 1 to 4 in the present specification.
As shown in fig. 6, the electronic device is in the form of a general purpose computing device. Components of the electronic device may include, but are not limited to: one or more processors 610, a communication interface 620, and a memory 630, and a communication bus 640 that couples the various system components including the memory 630, the communication interface 620, and the processing unit 610.
Communication bus 640 represents one or more of any of several types of bus structures, including a memory bus or memory controller, a peripheral bus, an accelerated graphics port, and a processor or local bus using any of a variety of bus architectures. By way of example, such architectures include, but are not limited to, an Industry Standard Architecture (ISA) bus, a Micro Channel Architecture (MCA) bus, an Enhanced ISA bus, a Video Electronics Standards Association (VESA) local bus, and a Peripheral Component Interconnect (PCI) bus.
Electronic devices typically include a variety of computer system readable media. Such media may be any available media that is accessible by the electronic device and includes both volatile and nonvolatile media, removable and non-removable media.
Memory 630 may include computer system readable media in the form of volatile Memory, such as Random Access Memory (RAM) and/or cache Memory. The electronic device may further include other removable/non-removable, volatile/nonvolatile computer system storage media. Memory 630 may include at least one program product having a set (e.g., at least one) of program modules that are configured to carry out the functions of embodiments of the specification.
A program/utility having a set (at least one) of program modules may be stored in memory 630, such program modules including, but not limited to, an operating system, one or more application programs, other program modules, and program data, each of which examples or some combination thereof may include an implementation of a network environment. The program modules generally perform the functions and/or methodologies of the embodiments described herein.
The processor 610 executes various functional applications and data processing by executing programs stored in the memory 630, for example, implementing the task processing method provided by the embodiments shown in fig. 1 to 4 of the present specification.
The embodiment of the present specification provides a computer-readable storage medium, which stores computer instructions, and the computer instructions cause the computer to execute the task processing method provided by the embodiment shown in fig. 1 to 4 of the present specification.
The computer-readable storage medium described above may take any combination of one or more computer-readable media. The computer readable medium may be a computer readable signal medium or a computer readable storage medium. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples (a non-exhaustive list) of the computer readable storage medium would include the following: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a Read Only Memory (ROM), an Erasable Programmable Read Only Memory (EPROM), a flash Memory, an optical fiber, a portable compact disc Read Only Memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this document, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
The foregoing description has been directed to specific embodiments of this disclosure. Other embodiments are within the scope of the following claims. In some cases, the actions or steps recited in the claims can be performed in a different order than in the embodiments and still achieve desirable results. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some embodiments, multitasking and parallel processing may also be possible or may be advantageous.
In the description of the specification, reference to the description of the term "one embodiment," "some embodiments," "an example," "a specific example," or "some examples," etc., means that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the specification. In this specification, the schematic representations of the terms used above are not necessarily intended to refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. Furthermore, various embodiments or examples and features of different embodiments or examples described in this specification can be combined and combined by one skilled in the art without contradiction.
Furthermore, the terms "first", "second" and "first" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or to implicitly indicate the number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include at least one such feature. In the description of this specification, "plurality" means at least two, e.g., two, three, etc., unless explicitly specified otherwise.
Any process or method description in a flowchart, or otherwise described herein, may be understood as representing a module, segment, or portion of code that includes one or more executable instructions for implementing the steps of a custom logic function or process. Alternate implementations are included within the scope of the preferred embodiments of the present description, in which functions may be executed out of the order shown or discussed, including substantially concurrently or in reverse order, depending on the functionality involved, as would be understood by those skilled in the art of the embodiments of the present description.
The word "if" as used herein may be interpreted as "at 8230; \8230;" or "when 8230; \8230;" or "in response to a determination" or "in response to a detection", depending on the context. Similarly, the phrases "if determined" or "if detected (a stated condition or event)" may be interpreted as "when determined" or "in response to a determination" or "when detected (a stated condition or event)" or "in response to a detection (a stated condition or event)", depending on the context.
It should be noted that the devices referred to in the embodiments of the present disclosure may include, but are not limited to, a Personal Computer (PC), a Personal Digital Assistant (PDA), a wireless handheld device, a tablet computer, a mobile phone, an MP3 player, an MP4 player, and the like.
In the several embodiments provided in this specification, it should be understood that the disclosed system, apparatus, and method may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the units is only one logical division, and there may be other divisions in actual implementation, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
In addition, functional units in the embodiments of the present specification may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, or in a form of hardware plus a software functional unit.
The integrated unit implemented in the form of a software functional unit may be stored in a computer-readable storage medium. The software functional unit is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, or a network device) or a processor to execute some of the steps of the methods described in the embodiments of the present disclosure. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
The above description is only a preferred embodiment of the present disclosure, and should not be taken as limiting the present disclosure, and any modifications, equivalents, improvements, etc. made within the spirit and principle of the present disclosure should be included in the scope of the present disclosure.

Claims (10)

1. A method for processing a task, comprising:
receiving a first task to be processed;
splitting the first task to be processed into at least two subtasks, wherein the execution time of the first subtask, the one with the longest execution time among the at least two subtasks, is t1;
determining a waiting time T_wait of the first task to be processed according to the number k of existing tasks to be processed in a task queue and the receiving interval T_interval between two adjacent tasks to be processed;
if T_wait is greater than or equal to t1, discarding the first task to be processed;
if T_wait is less than t1, adding the at least two subtasks of the first task to be processed to the task queue;
wherein the k existing tasks to be processed in the task queue have each been split into at least two subtasks, and the execution time of the subtask with the longest execution time of each task to be processed in the task queue is t1;
and wherein determining the waiting time T_wait of the first task to be processed according to the number k of existing tasks to be processed in the task queue and the receiving interval T_interval between two adjacent tasks to be processed comprises:
determining the waiting time T_wait of the first task to be processed according to k, t1 and T_interval as k(t1 - T_interval).
2. The method according to claim 1, wherein, apart from the first subtask, the sum t2 of the execution times of the remaining subtasks of the at least two subtasks of the first task to be processed is less than t1.
3. The method according to claim 1, wherein t1 is greater than T_interval.
4. The method according to claim 1, wherein the k existing tasks to be processed in the task queue have each been split into at least two subtasks, and the execution time of the subtask with the longest execution time of each task to be processed in the task queue is t1;
determining the waiting time T_wait of the first task to be processed according to the number k of existing tasks to be processed in the task queue and the receiving interval T_interval between two adjacent tasks to be processed comprises:
determining the waiting time T_wait of the first task to be processed according to k, t1 and T_interval.
5. The method according to claim 4, wherein after adding the at least two subtasks of the first task to be processed to the task queue, the method further comprises:
if k is 0, executing the first subtask, and after the first subtask is executed, executing the remaining subtasks of the first task to be processed in parallel with the subtask with the longest execution time in a second task to be processed, wherein the second task to be processed is received after the first task to be processed, separated by T_interval;
if k is greater than 0, waiting for T_wait and then executing the first subtask in parallel with the remaining subtasks, other than the subtask with the longest execution time, of a third task to be processed, wherein the third task to be processed is received before the first task to be processed, separated by T_interval.
6. The method according to claim 1, wherein before determining the waiting time T_wait of the first task to be processed, the method further comprises:
determining whether k is greater than a first threshold;
if k is greater than the first threshold, discarding the first task to be processed and generating corresponding alarm information.
7. The method according to any one of claims 1 to 6, further comprising:
monitoring interrupt flag bit information of the task queue, and discarding the existing tasks to be processed in the task queue when the interrupt flag bit information indicates that the task queue is abnormal.
8. A task processing system, comprising:
a receiving module, configured to receive a first task to be processed;
a disassembling module, configured to split the first task to be processed into at least two subtasks, wherein the execution time of the first subtask, the one with the longest execution time among the at least two subtasks, is t1;
a processing module, configured to determine a waiting time T_wait of the first task to be processed according to the number k of existing tasks to be processed in a task queue and the receiving interval T_interval between two adjacent tasks to be processed;
a discarding module, configured to discard the first task to be processed if T_wait is greater than or equal to t1;
wherein the processing module is further configured to add the at least two subtasks of the first task to be processed to the task queue if T_wait is less than t1;
the k existing tasks to be processed in the task queue have each been split into at least two subtasks, and the execution time of the subtask with the longest execution time of each task to be processed in the task queue is t1; and
the processing module is specifically configured to:
determine the waiting time T_wait of the first task to be processed according to k, t1 and T_interval as k(t1 - T_interval).
9. An electronic device, comprising:
at least one processor; and
at least one memory communicatively coupled to the processor, wherein:
the memory stores program instructions executable by the processor, the processor being capable of invoking the program instructions to perform the method of any of claims 1 to 7.
10. A computer-readable storage medium storing computer instructions for causing a computer to perform the method of any one of claims 1 to 7.
CN202110635131.7A 2021-06-08 2021-06-08 Task processing method, system and related equipment Active CN113360263B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110635131.7A CN113360263B (en) 2021-06-08 2021-06-08 Task processing method, system and related equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110635131.7A CN113360263B (en) 2021-06-08 2021-06-08 Task processing method, system and related equipment

Publications (2)

Publication Number Publication Date
CN113360263A CN113360263A (en) 2021-09-07
CN113360263B (en) 2023-01-31

Family

ID=77533032

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110635131.7A Active CN113360263B (en) 2021-06-08 2021-06-08 Task processing method, system and related equipment

Country Status (1)

Country Link
CN (1) CN113360263B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113961328B (en) * 2021-10-26 2022-07-19 深圳大学 Task processing method and device, storage medium and electronic equipment

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107315629A (en) * 2017-06-14 2017-11-03 北京小米移动软件有限公司 Task processing method, device and storage medium
CN109308214A (en) * 2017-07-27 2019-02-05 北京京东尚科信息技术有限公司 Data task processing method and system
CN110196761A (en) * 2019-04-15 2019-09-03 北京达佳互联信息技术有限公司 Delay task processing method and processing device
CN111190590A (en) * 2020-01-07 2020-05-22 广州虎牙科技有限公司 Catton optimization method, device, terminal and computer readable storage medium
CN112463370A (en) * 2020-11-20 2021-03-09 深圳市雷鸟网络传媒有限公司 Task execution method, device and readable storage medium

Also Published As

Publication number Publication date
CN113360263A (en) 2021-09-07

Similar Documents

Publication Publication Date Title
US9928124B2 (en) Reverting tightly coupled threads in an over-scheduled system
US9207943B2 (en) Real time multithreaded scheduler and scheduling method
US20230185607A1 (en) Hardware accelerated dynamic work creation on a graphics processing unit
US8963933B2 (en) Method for urgency-based preemption of a process
US10552213B2 (en) Thread pool and task queuing method and system
JP5673672B2 (en) Multi-core processor system, control program, and control method
US20120284720A1 (en) Hardware assisted scheduling in computer system
US8621479B2 (en) System and method for selecting task allocation method based on load balancing and core affinity metrics
CN109840149B (en) Task scheduling method, device, equipment and storage medium
CN113360263B (en) Task processing method, system and related equipment
CN115269196A (en) Thread pool dynamic creation method, device, equipment and storage medium
CN114461365A (en) Process scheduling processing method, device, equipment and storage medium
WO2020026010A2 (en) Task execution control method, device, equipment/terminal/server and storage medium
US20080313652A1 (en) Notifying user mode scheduler of blocking events
EP2546744B1 (en) Software control device, software control method, and software control program
CN113495780A (en) Task scheduling method and device, storage medium and electronic equipment
EP3591518B1 (en) Processor and instruction scheduling method
CN114359017B (en) Multimedia resource processing method and device and electronic equipment
CN116795503A (en) Task scheduling method, task scheduling device, graphic processor and electronic equipment
CN113407325A (en) Video rendering method and device, computer equipment and storage medium
CN112114967B (en) GPU resource reservation method based on service priority
US7702836B2 (en) Parallel processing device and exclusive control method
CN109669780B (en) Video analysis method and system
CN112783626A (en) Interrupt processing method and device, electronic equipment and storage medium
CN113626348A (en) Service execution method and device and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant