CN112925616A - Task allocation method and device, storage medium and electronic equipment - Google Patents

Task allocation method and device, storage medium and electronic equipment

Info

Publication number
CN112925616A
CN112925616A (application CN201911244499.XA)
Authority
CN
China
Prior art keywords
priority, CPU, target, task, tasks
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201911244499.XA
Other languages
Chinese (zh)
Inventor
崔晓刚
Current Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority date
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority claimed from CN201911244499.XA
Publication of CN112925616A
Legal status: Pending

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/48Program initiating; Program switching, e.g. by interrupt
    • G06F9/4806Task transfer initiation or dispatching
    • G06F9/4843Task transfer initiation or dispatching by program, e.g. task dispatcher, supervisor, operating system
    • G06F9/4881Scheduling strategies for dispatcher, e.g. round robin, multi-level priority queues

Abstract

The embodiment of the application discloses a task allocation method and apparatus, a storage medium, and an electronic device. The method comprises: acquiring the priority of a target task to be run as a first priority, and acquiring, from a plurality of central processing units (CPUs) in a system, a CPU set whose remaining computing capacity satisfies the target task; acquiring the highest priority of the tasks on each CPU in the CPU set as a second priority; and determining a target CPU in the CPU set based on the first priority and the second priority of the tasks on each CPU, and allocating the target task to the target CPU to run according to a scheduling algorithm. The embodiment of the application thus ensures vruntime fairness across the whole system and improves system performance.

Description

Task allocation method and device, storage medium and electronic equipment
Technical Field
The present application relates to the field of computer technologies, and in particular, to a task allocation method and apparatus, a storage medium, and an electronic device.
Background
Different tasks may be assigned different priorities, and tasks preempt resources in priority order when running on a CPU. With the Completely Fair Scheduler (CFS) of the Linux kernel, when tasks of different priorities contend for resources on a CPU, the virtual runtime (vruntime) of the tasks is kept completely fair.
However, with the advent of multiprocessing systems such as Symmetric Multi-Processing (SMP) and Heterogeneous Multi-Processing (HMP) processors, the completely fair scheduling of CFS becomes unfair. In an SMP or HMP architecture, the CFS algorithm maintains vruntime fairness among the tasks running on a single CPU, but cannot guarantee vruntime fairness between tasks on different CPUs. For the system as a whole, even with the CFS algorithm, the scheduler is biased toward keeping the load balanced across the CPUs, so as to maximize throughput and minimize the power consumption of the whole system.
This unfairness can leave a high-priority task with fewer computing resources than a low-priority task, so that under some conditions the resource needs of high-priority tasks are not met, causing system performance problems.
Disclosure of Invention
The embodiment of the application provides a task allocation method and apparatus, a storage medium, and an electronic device, which can ensure vruntime fairness across the whole system and improve system performance. The technical solution is as follows:
in a first aspect, an embodiment of the present application provides a task allocation method, where the method includes:
acquiring the priority of a target task to be run, taking the priority as a first priority, and acquiring, from a plurality of central processing units (CPUs) in a system, a CPU set whose remaining computing capacity satisfies the target task;
acquiring the highest priority of the tasks on each CPU in the CPU set, and taking the highest priority as a second priority;
and determining a target CPU in the CPU set based on the first priority and the second priority of the tasks on each CPU, and allocating the target task to the target CPU to run according to a scheduling algorithm.
In a second aspect, an embodiment of the present application provides a task allocation apparatus, where the apparatus includes:
a first priority acquisition module, configured to acquire the priority of a target task to be run, take the priority as a first priority, and acquire, from a plurality of central processing units (CPUs) in the system, a CPU set whose remaining computing capacity satisfies the target task;
a second priority acquisition module, configured to acquire the highest priority of the tasks on each CPU in the CPU set, and take the highest priority as a second priority;
and a task allocation module, configured to determine a target CPU in the CPU set based on the first priority and the second priority, and allocate the target task to the target CPU to run according to a scheduling algorithm.
In a third aspect, embodiments of the present application provide a computer storage medium storing a plurality of instructions adapted to be loaded by a processor and to perform the above-mentioned method steps.
In a fourth aspect, an embodiment of the present application provides an electronic device, which may include: a processor and a memory; wherein the memory stores a computer program adapted to be loaded by the processor and to perform the above-mentioned method steps.
The beneficial effects brought by the technical scheme provided by some embodiments of the application at least comprise:
in the embodiment of the application, the priority of the target task to be run is acquired, the CPUs whose remaining computing capacity satisfies the target task are selected from the plurality of CPUs in the system, the highest priority of the tasks on each qualifying CPU is then acquired, a suitable CPU is found from the priority of the target task and the highest priority of the tasks on each CPU, and finally the target task is allocated to that CPU to run by a scheduling algorithm. By weighing the priority of the target task against the task priorities on the different CPUs across the whole system, a scheduling algorithm (such as the CFS algorithm) selects a suitable CPU for the task, so that tasks preempt resources in priority order. This not only preserves vruntime fairness among the tasks running on a single CPU, but also ensures vruntime fairness across the whole system, improving system performance.
Drawings
To illustrate the embodiments of the present application or the technical solutions in the prior art more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. It is obvious that the drawings described below show only some embodiments of the present application, and that those skilled in the art can derive other drawings from them without creative effort.
Fig. 1 is a schematic flowchart of a task allocation method according to an embodiment of the present application;
FIG. 2 is a schematic structural diagram of a multitasking system provided by an embodiment of the present application;
fig. 3 is a schematic flowchart of a task allocation method according to an embodiment of the present application;
FIG. 4a is a diagram illustrating an example of a singly linked list according to an embodiment of the present disclosure;
FIG. 4b is a diagram illustrating an example doubly linked list according to an embodiment of the present disclosure;
FIG. 4c is a diagram illustrating an example circular linked list according to an embodiment of the present disclosure;
FIG. 5 is a schematic structural diagram of a task allocation apparatus according to an embodiment of the present disclosure;
fig. 6 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more clear, embodiments of the present application will be described in further detail below with reference to the accompanying drawings.
When the following description refers to the accompanying drawings, like numbers in different drawings denote the same or similar elements unless otherwise indicated. The implementations described in the following exemplary embodiments do not represent all implementations consistent with the present application; rather, they are merely examples of apparatus and methods consistent with certain aspects of the application, as detailed in the appended claims.
In the description of the present application, it is to be understood that the terms "first," "second," and the like are used for descriptive purposes only and are not to be construed as indicating or implying relative importance. The specific meaning of these terms in the present application can be understood by those of ordinary skill in the art on a case-by-case basis. Further, in the description of the present application, "a plurality" means two or more unless otherwise specified. "And/or" describes an association between objects and covers three cases: for example, "A and/or B" may mean that A exists alone, A and B exist simultaneously, or B exists alone. The character "/" generally indicates an "or" relationship between the associated objects.
The task allocation method provided by the embodiment of the present application will be described in detail below with reference to fig. 1 to 4c. The method may be implemented by a computer program running on a computing device based on the von Neumann architecture. The computer program may be integrated into an application or may run as a separate tool. The task allocation device in the embodiment of the present application may be a user terminal, including but not limited to: a smartphone, personal computer, tablet, handheld device, in-vehicle device, wearable device, computing device, or other processing device connected to a wireless modem.
Please refer to fig. 1, which is a flowchart illustrating a task allocation method according to an embodiment of the present disclosure. As shown in fig. 1, the method of the embodiment of the present application may include the steps of:
s101, acquiring the priority of a target task to be operated, taking the priority as a first priority, and acquiring a CPU set with the residual computing capacity meeting the target task from a plurality of Central Processing Units (CPUs) in a system;
a task can be viewed as a thread. In general, a process contains at least one thread. A thread can use the resources owned by its process; in operating systems that support threads, the process is usually the basic unit of resource allocation, while the thread is the basic unit of independent execution and scheduling. Because a thread is smaller than a process and owns essentially no system resources of its own, the overhead of scheduling a thread is much smaller, which more efficiently increases the degree of concurrent execution among the programs of the system.
Generally, a process has three basic states: the Ready state, the Running (executing) state, and the Blocked state.
When a process has the conditions to run (it has been allocated all necessary resources except the CPU), it can execute as soon as it acquires the CPU; the process is then said to be in the ready state.
When a process has acquired a CPU and its program is executing on that CPU, the process is in the executing state.
When an executing process cannot proceed because it is waiting for some event, it gives up the CPU and enters the blocked state. The events that block a process vary, for example waiting for an I/O operation to complete, an unsatisfied buffer request, or waiting on a semaphore.
For each task, the running state is the same as that of the process to which it belongs. In the embodiment of the application, the target task to be run is in the Ready state. The target task may be a newly created task or an awakened task.
Each task has a corresponding priority that determines when it runs and how much CPU time it receives. Windows uses 32 priority levels, with values from 0 to 31, called base priorities, and the system schedules tasks according to these priorities. Levels 0-15 are normal priorities: the priority of a task can change dynamically, higher-priority tasks run first, lower-priority tasks are scheduled only when no higher-priority task is runnable, and tasks of the same priority run in round-robin fashion by time slice. Levels 16-31 are real-time priorities; the biggest difference from the normal priorities is that tasks of the same real-time priority do not rotate by time slice: the task that runs first holds the CPU, and unless it yields control voluntarily, tasks of the same or lower priority cannot run. Linux priorities range from 0 to 140, where 0-99 represent real-time tasks and 100-140 represent non-real-time tasks; in contrast to Windows, the smaller the Linux priority value, the higher the priority, and the sooner the task is scheduled by the kernel.
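The Linux numbering above can be made concrete with a small sketch. This is an illustration of the priority scheme, not code from the patent; note that the mainline kernel numbers its 140 priority levels 0-139, with values 100-139 derived from the nice value (-20..19).

```python
MAX_RT_PRIO = 100  # first non-real-time priority value in this numbering

def nice_to_prio(nice):
    """Map a nice value (-20..19) onto a non-real-time priority value."""
    if not -20 <= nice <= 19:
        raise ValueError("nice must be in [-20, 19]")
    return MAX_RT_PRIO + 20 + nice  # nice 0 -> 120, the default

def higher_priority(prio_a, prio_b):
    """Under Linux numbering, the smaller value is the higher priority."""
    return prio_a < prio_b
```

A default task (nice 0) lands at priority value 120, the value used as the example first priority later in this document.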
In the embodiment of the present application, the present application is not limited to a Windows system or a Linux system. The first priority of the target task is represented by a priority value, and may be specifically a task level, such as 100.
In addition, the embodiment of the application is applied to a multi-Central Processing Unit (CPU) system.
The CPU is the arithmetic and control core of a computer, and the final execution unit for information processing and program execution. It includes an arithmetic logic unit, registers, a control unit, and so on, and is responsible for processing instructions, performing operations, controlling timing, and handling data.
The performance of a CPU is mainly reflected in the speed at which it runs programs. The performance indexes affecting this speed include the CPU's clock frequency, cache capacity, instruction set, logic structure, and other parameters. CPU load is the CPU utilization rate: when utilization is high, programs run more slowly, and the use of the CPU can be limited through frequency scaling.
Multi-CPU systems generally take four forms: multiprocessor systems, multicomputer systems, network systems, and distributed systems.
A multiprocessor system contains two or more processors of similar capability that can exchange data with one another. All processors share the memory, I/O devices, controllers, and peripherals; the whole hardware system is controlled by a single operating system; and parallelism is achieved between processors and programs at the levels of jobs, tasks, programs, arrays, and elements.
For example, as shown in fig. 2, a schematic structural diagram of a multi-CPU system is shown, where the system includes multiple CPUs sharing a memory.
It should be noted that CPUs may differ in computing capability, and even when their capabilities are identical, each CPU runs different tasks at different times, so their remaining computing capabilities differ. The remaining computing capability is the capability of the CPU resources left over at a given moment after subtracting the resources occupied by the tasks running on that CPU.
Taking the moment the target task becomes ready as the reference point, a CPU set whose remaining computing capability satisfies the target task is determined from the attributes of the target task and the remaining computing capability of each CPU in the system at that moment. The CPU set contains at least one CPU.
Of course, there may be more than one target task. When there are several target tasks, they can be added to a stack or a queue and processed in sequence.
It should be noted that the system may be an SMP system or an HMP system.
The difference between SMP and HMP systems is that the multiple CPUs of the SMP system perform identically, while the multiple CPUs of the HMP system do not.
S102, acquiring the highest priority of tasks on each CPU in the CPU set, and taking the highest priority as a second priority;
a plurality of tasks can run simultaneously on one CPU; each task has its own priority, and tasks preempt CPU resources in priority order. For the Linux system, the higher the priority value, the lower the priority.
It should be noted that the tasks on each CPU are scheduled by CFS. The CFS scheduler does not use strict rules to allocate a time slice of fixed length to each priority; instead, it allocates a proportion of CPU processing time to each task, so as to keep virtual time fair.
Priority refers to the priority level assigned to a task by the computer operating system. It determines the priority of tasks in using resources. The task scheduling priority mainly refers to the priority of the task when the task is scheduled to run, and is mainly related to the priority of the task and a scheduling algorithm.
The priorities of the tasks on each CPU differ; what matters here is the highest one. In one possible implementation, the priority distribution of the tasks on each CPU is recorded in a set of linked lists, (pri2-pri1+1) lists in total, where each list records all the tasks of one priority. The highest priority is found by reading the priority of each non-empty linked list on each CPU.
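The per-priority lists described above can be sketched as follows. This is a minimal illustration, not the patent's implementation: Python lists stand in for the linked lists, pri1 and pri2 take the threshold values used later in the text, and, following the worked example later in the document, the "highest" priority on a CPU is taken to be the largest recorded value.

```python
PRI1, PRI2 = 100, 140  # the thresholds pri1 and pri2 used later in the text

class CpuRunlist:
    """One task list per priority value, (PRI2 - PRI1 + 1) lists in all."""

    def __init__(self):
        # one (initially empty) task list per priority value
        self.lists = {p: [] for p in range(PRI1, PRI2 + 1)}

    def add_task(self, name, prio):
        """Record a task under its priority value."""
        self.lists[prio].append(name)

    def highest_priority(self):
        """Largest priority value whose list is non-empty, or None."""
        occupied = [p for p, tasks in self.lists.items() if tasks]
        return max(occupied) if occupied else None
```

Reading the highest priority is then a single scan over the non-empty lists, mirroring step S204 below.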
S103, determining a target CPU in the CPU set based on the first priority and the second priority of the tasks on each CPU, and allocating the target task to the target CPU to run according to a scheduling algorithm.
In a specific implementation, when the second priority of the tasks on every CPU is smaller than the first priority, the target task has the highest priority, and the target CPU with the largest remaining computing capacity is determined in the CPU set.
When the first priority lies between the second priorities of the tasks on the CPUs, the priority of the target task is relatively high, and the CPU with the smallest second priority in the CPU set is determined as the target CPU.
When the second priorities of the tasks on the CPUs are all larger than the first priority, the priority of the target task is unremarkable. A third priority, the second-highest priority of the tasks on each CPU in the CPU set, is then acquired, and the comparison proceeds level by level in the same manner. If the lowest priority of the tasks on the CPUs is finally determined to be larger than the first priority, the priority of the target task is low, and the target CPU with the largest remaining computing capacity is determined in the CPU set.
After the target CPU is determined, the target task is allocated to it by a scheduling algorithm, so that the target task preempts reasonably suitable CPU resources and runs without affecting other tasks of higher priority. The scheduling algorithm schedules on the basis of virtual runtime and includes, but is not limited to, the CFS algorithm.
The CFS scheduler does not assign priorities directly. Instead, it maintains a virtual runtime in each task's vruntime variable, recording how long each task has run. The virtual runtime is scaled by a decay factor based on task priority: lower-priority tasks accumulate virtual runtime at a higher rate than higher-priority tasks. For normal-priority tasks, the virtual runtime equals the actual physical runtime.
Thus, if a default priority task runs for 200ms, its virtual run time is also 200 ms. However, if a lower priority task runs for 200ms, its virtual run time will be greater than 200 ms. Similarly, if a higher priority task runs for 200ms, its virtual run time will be less than 200 ms. When deciding which task to run next, the scheduler need only select the task with the smallest virtual run time.
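The 200 ms example above can be sketched with priority weights, loosely modeled on the kernel's nice-level weight table; the specific weight values here are illustrative, not taken from the patent.

```python
NICE_0_WEIGHT = 1024  # weight of a default-priority (nice 0) task

def vruntime_delta(delta_exec_ms, weight):
    """Virtual runtime credited for delta_exec_ms of real runtime.

    Higher-priority tasks carry a larger weight, so their vruntime
    grows more slowly and a pick-min-vruntime scheduler favors them.
    """
    return delta_exec_ms * NICE_0_WEIGHT / weight
```

With 200 ms of real runtime: a default-priority task is credited exactly 200 ms of vruntime, a lower-priority task (smaller weight) more than 200 ms, and a higher-priority task (larger weight) less than 200 ms, matching the example above.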
In the embodiment of the application, the priority of the target task to be run is acquired, the CPUs whose remaining computing capacity satisfies the target task are selected from the plurality of CPUs in the system, the highest priority of the tasks on each qualifying CPU is then acquired, a suitable CPU is found from the priority of the target task and the highest priority of the tasks on each CPU, and finally the target task is allocated to that CPU to run by a scheduling algorithm. By weighing the priority of the target task against the task priorities on the different CPUs across the whole system, a scheduling algorithm (such as the CFS algorithm) selects a suitable CPU for the task, so that tasks preempt resources in priority order. This not only preserves vruntime fairness among the tasks running on a single CPU, but also ensures vruntime fairness across the whole system, improving system performance.
Please refer to fig. 3, which is a flowchart illustrating a task allocation method according to an embodiment of the present disclosure. The difference between the embodiment shown in fig. 3 and the embodiment shown in fig. 1 is that fig. 1 does not limit the system type and the system architecture, fig. 3 is a detailed description of the present solution in conjunction with a specific example in a Linux system, and the task allocation method may include the following steps:
s201, acquiring the priority of a target task to be operated, taking the priority as a first priority, and judging whether the first priority is greater than a first threshold and smaller than a second threshold, wherein the first threshold is smaller than or equal to the second threshold;
and the running state corresponding to the target task to be run is the Ready state. The target task may be a newly created task or an awakened task.
The first priority of the target task is represented by a priority value, specifically a task level. For example, Linux priorities range from 0 to 140, where 0-99 represent real-time tasks and 100-140 represent non-real-time tasks. The first priority may, for example, be 120.
The first threshold pri1 is 100, and the second threshold pri2 is 140. At this time, the first priority is greater than pri1 and less than pri 2.
Optionally, if the first priority is 80, which is smaller than pri1, indicating that the target task is a real-time task, a load tracking strategy may be used to keep the system load balanced, thereby maximizing throughput and minimizing power consumption of the entire system.
It should be noted that the target task is scheduled based on the CFS algorithm.
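Step S201 reduces to a simple predicate; the sketch below is an illustration with the threshold values from the example (pri1 = 100, pri2 = 140), and the function name is mine, not the patent's.

```python
PRI1, PRI2 = 100, 140  # first and second thresholds from the example

def is_cfs_candidate(prio):
    """S201 check: the first priority must be greater than the first
    threshold and smaller than the second, i.e. a non-real-time task
    that this method should place."""
    return PRI1 < prio < PRI2
```

A priority of 120 passes; a real-time priority such as 80 fails and is instead handled by the load-tracking strategy mentioned above.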
S202, if yes, obtaining a CPU set with the residual computing power meeting the target task from a plurality of Central Processing Units (CPUs) in the system;
the embodiment of the application is applied to a multi-CPU system, such as an SMP system or an HMP system. The difference between SMP and HMP systems is that the multiple CPUs of the SMP system perform identically, while the multiple CPUs of the HMP system do not.
Regardless of whether the initial computing power of each CPU is the same, since different tasks are run on each CPU at different times, the CPU resources occupied by each task are different, and thus the remaining computing power of each CPU is different. The residual computing power refers to the computing power of the residual CPU resources except the CPU resources occupied by the tasks running on the CPU at a certain moment.
And determining a CPU set with the residual computing capacity meeting the target task from the attributes of the target task and the residual computing capacity of each CPU contained in the system at the ready time by taking the ready time of the target task as a criterion. The set of CPUs may include at least one.
For example, the system includes A, B, C, D with 4 CPUs, different tasks are running on A, B, C, D at the time of creating target task E, and the CPUs with the remaining computing power satisfying E include A and D.
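Filtering the CPU set (step S202) can be sketched as below. The capacity and demand numbers are illustrative assumptions (the patent gives none); they are chosen so that, as in the example above, only A and D qualify.

```python
def candidate_cpus(remaining_capacity, demand):
    """S202: keep the CPUs whose remaining capacity covers the task."""
    return [cpu for cpu, cap in remaining_capacity.items() if cap >= demand]

# Illustrative numbers: of CPUs A-D, only A and D have enough
# remaining capacity for target task E's demand.
caps = {"A": 300, "B": 50, "C": 80, "D": 500}
demand_e = 200
```

With these numbers, `candidate_cpus(caps, demand_e)` yields the CPU set {A, D} used in the rest of the example.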
S203, acquiring at least one linked list corresponding to each CPU in the CPU set, wherein each linked list records a task set with the same priority;
a linked list is a common basic data structure. It is a linear table, but its data is not stored in linear order; instead, each node stores a pointer to the next node. Because the nodes need not be stored contiguously, insertion is O(1), but finding a node, or accessing the node at a specific position, takes O(n) time.
The linked list overcomes the drawback of an array, whose size must be known in advance; it makes full use of the computer's memory and enables flexible dynamic memory management.
The linked list structure includes a single linked list, a double linked list and a circular linked list.
A singly linked list node has two fields: an information field and a pointer field. The pointer points to the next node in the list, and the last node's pointer is NULL. A singly linked list can be traversed in only one direction: to find a node, one starts from the first node and follows pointers until the required position is reached. A node's location may also be stored in advance and accessed directly. See fig. 4a.
A doubly linked list node contains not only a pointer to the next node but also a pointer to the previous node. The first node's backward pointer and the last node's forward pointer both point to NULL. From any node one can reach the previous node, the next node, and hence the entire list. It is typically used when nodes must be located and traversed from either direction. See fig. 4b.
In a circular linked list, the head node and the tail node are linked together; this can be implemented with either a singly or a doubly linked list. To traverse a circular linked list, one can start at any node and follow the list in either direction until returning to the starting node. The first node is preceded by the last node, and vice versa. The absence of endpoints makes it easier to design algorithms on such lists than on ordinary linked lists. Whether a newly added node goes before the first node or after the last node can be decided flexibly according to actual requirements, with little difference either way. See fig. 4c.
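The singly linked list of fig. 4a can be sketched in a few lines; this is a generic illustration of the data structure, not the patent's implementation.

```python
class Node:
    """A node of the singly linked list of fig. 4a: an information
    field plus a pointer field; the last node's pointer is None (NULL)."""

    def __init__(self, info, next=None):
        self.info = info
        self.next = next

def traverse(head):
    """A singly linked list can only be walked in one direction:
    start at the head and follow pointers until NULL."""
    out = []
    node = head
    while node is not None:
        out.append(node.info)
        node = node.next
    return out
```

In the scheme above, each such list would hold the tasks of one priority value, so walking a list enumerates all tasks at that priority.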
Each CPU corresponds to at least one linked list, and each linked list records at least one task of the same priority (that is, each linked list on each CPU corresponds to one priority). Because the number of tasks currently running at each priority differs, the lists differ in length. It follows that each CPU maintains (pri2-pri1+1) linked lists in total, each corresponding to one priority value.
S204, traversing at least one linked list corresponding to each CPU, reading the highest priority corresponding to each CPU, and taking the highest priority as a second priority;
for example, the priorities of the linked lists holding the tasks currently running on A are 102, 105, 110, 115 and 120, and those on D are 100, 105, 108, 120 and 128. The highest priority on A is therefore Pri(A) = 120, and the highest priority on D is Pri(D) = 128.
S205, when the second priority of the task on each CPU is smaller than the first priority, determining the CPU with the largest residual computing capacity as the target CPU in the CPU set, or determining the CPU with the smallest computing power consumption as the target CPU in the CPU set.
For the SMP system, when the second priority of the task on each CPU is smaller than the first priority, it indicates that the priority pri (E) of the target task E is higher than the priorities of all the tasks on a and D, and at this time, a target CPU with larger residual computing power needs to be selected from a and D.
For the HMP system, when the second priority of the task on each CPU is smaller than the first priority, a target CPU with the minimum computing power consumption is determined in the CPU set.
Power consumption here is a metric of any electrical device: the energy consumed per unit time, measured in watts (W). Computing power consumption is the energy the CPU consumes to run the target task. Understandably, CPUs differ in performance, so the energy consumed to compute the same task differs; the CPU with the lowest computing power consumption is therefore chosen to reduce the overall power consumption of the system.
S206, when the first priority lies between the second priorities of the tasks on the CPUs, determining the CPU with the smallest second priority in the CPU set as the target CPU.
For example, Pri(A) > Pri(E) > Pri(D) indicates that the priority of task E is relatively high. If E is allocated to A, a higher-priority task on A is running or about to run, and E can obtain resources only after that higher-priority task completes; if E is allocated to D, E has the highest priority on D and can obtain resources there first. Therefore, E is preferentially assigned to D.
Of course, if the remaining computing capacity of the lower-priority CPU B also satisfies E, then E is preferentially allocated to B.
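Step S206 reduces to a one-line selection, sketched here with illustrative CPU names and priority values (assumptions, not the patent's figures):

```python
def pick_between(second_priorities):
    """When Pri(E) lies between the CPUs' highest task priorities,
    choose the CPU whose highest priority is smallest, so the target
    task outranks everything there and obtains resources first."""
    return min(second_priorities, key=second_priorities.get)

# Pri(A) = 130 > Pri(E) = 120 > Pri(D) = 110, so E goes to D.
print(pick_between({"A": 130, "D": 110}))  # D
```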
S207, when the second priorities of the tasks on the CPUs are all larger than the first priority, acquiring the second-highest priority of the tasks on each CPU in the CPU set, and taking it as a third priority;
If Pri(E) is smaller than both Pri(A) and Pri(D), indicating that the priority of target task E is not high enough, the second-highest priorities Pri(A′) and Pri(D′) on CPU A and CPU D are further obtained, and Pri(E) is then compared with Pri(A′) and Pri(D′).
S208, taking the third priority as the second priority, and executing the step of determining the CPU with the largest remaining computing capacity in the CPU set as the target CPU, or determining the CPU with the smallest computing power consumption in the CPU set as the target CPU, when the second priorities of the tasks on the CPUs are all smaller than the first priority;
If Pri(E) > Pri(A′) and Pri(E) > Pri(D′), then E is allocated to whichever of A and D has the larger remaining computing capacity or the smaller computing power consumption.
E is assigned to D if Pri(A′) > Pri(E) > Pri(D′), or to A if Pri(D′) > Pri(E) > Pri(A′).
If Pri(E) is smaller than both Pri(A′) and Pri(D′), the tasks with lower priorities on A and D are further compared in the manner described above.
S209, when the lowest priority of the tasks on the CPUs is determined to be larger than the first priority, determining the CPU with the largest remaining computing capacity in the CPU set as the target CPU, or determining the CPU with the smallest computing power consumption in the CPU set as the target CPU.
After the above loop of comparisons, if the priority of E is finally determined to be lower than the lowest priority of all the tasks on A and D, the task's priority is already low and it hardly matters which CPU it is allocated to; of course, for better processing of each task, E is still preferentially allocated to the target CPU with the largest remaining computing capacity or the smallest computing power consumption.
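The descent through priority levels in S207–S209 can be sketched as a single loop. This is a non-authoritative reading of the text: the function names, the descending-list representation, and the capacity picker passed in are all assumptions.

```python
def assign_by_descent(priority_lists, pri_e, pick_by_capacity):
    """priority_lists: cpu -> priorities of its tasks, descending.
    Walk level by level until Pri(E) decides an assignment."""
    depth = 0
    max_depth = max(len(p) for p in priority_lists.values())
    while depth < max_depth:
        level = {c: p[depth] for c, p in priority_lists.items() if depth < len(p)}
        below = [c for c, p in level.items() if p < pri_e]
        if len(below) == len(level):
            # E outranks every CPU at this level (S208 -> capacity rule)
            return pick_by_capacity(level.keys())
        if below:
            # E sits between the levels (as in S206): smallest wins
            return min(level, key=level.get)
        depth += 1  # E below all priorities at this level; descend (S207)
    # S209: even the lowest priorities exceed Pri(E); fall back to capacity
    return pick_by_capacity(priority_lists.keys())

cpus = {"A": [120, 115, 110], "D": [128, 108, 100]}
capacity = {"A": 40, "D": 55}
picker = lambda cs: max(cs, key=capacity.get)
# Pri(A') = 115 > Pri(E) = 112 > Pri(D') = 108, so E goes to D.
print(assign_by_descent(cpus, 112, picker))  # D
```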
S210, counting the number of loop iterations of the step of determining a target CPU in the CPU set based on the first priority and the second priorities of the tasks on the CPUs and allocating the target task to the target CPU to run according to a scheduling algorithm;
Of course, if the number of tasks on each CPU is large and the priority of the target task is low, the number of loop iterations may be limited to a threshold, such as 10, in order to save time and improve allocation efficiency.
S211, when the number of loop iterations reaches the specified number, determining any CPU in the CPU set as the target CPU.
When the number of loop iterations reaches the threshold, the priority of the task is low; if it has still not been determined which CPU the task should be allocated to, then any CPU whose remaining computing capacity satisfies the task will suffice.
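The loop cap of S210–S211 can be sketched as follows. The `step` callback and the fallback choice are assumptions made for illustration; the cap value 10 is the example threshold from the text.

```python
import random

def with_loop_cap(step, cpu_set, max_loops=10):
    """Run one round of priority comparison per iteration; past the
    cap, any CPU in the set is acceptable, since every member already
    has enough remaining computing capacity for the task."""
    for _ in range(max_loops):
        cpu = step()
        if cpu is not None:
            return cpu
    return random.choice(sorted(cpu_set))  # fallback: any capable CPU

# A step that never decides, to exercise the fallback path.
picked = with_loop_cap(lambda: None, {"A", "D"}, max_loops=3)
print(picked in {"A", "D"})  # True
```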
Optionally, after all the tasks to be run have been allocated as described above, the tasks on each CPU may need to be adaptively reallocated in order to keep the load on each CPU balanced.
For example, suppose CPU A holds several higher-priority tasks with priorities 130, 135, 138 and 140, while the highest priority on CPU B is 128. After task E is allocated, a high-priority task on A needs to be adaptively migrated to B to maintain load balance across the CPUs, thereby maximizing throughput and minimizing power consumption of the whole system.
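The rebalancing example above can be sketched as below. The single-task migration policy and the list representation are assumptions; the priority values are those from the text.

```python
a_tasks = [130, 135, 138, 140]   # priorities of tasks on CPU A
b_tasks = [128]                  # priorities of tasks on CPU B

def rebalance(src, dst):
    """Move the highest-priority task from `src` to `dst` when src's
    peak priority exceeds dst's peak, evening out the load."""
    if src and (not dst or max(src) > max(dst)):
        dst.append(src.pop(src.index(max(src))))
    return src, dst

rebalance(a_tasks, b_tasks)
print(sorted(b_tasks))  # [128, 140]
```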
S212, allocating the target task to the target CPU to run according to a scheduling algorithm.
The scheduling algorithm performs scheduling based on virtual runtime (vruntime), and includes, but is not limited to, the CFS (Completely Fair Scheduler) algorithm.
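A minimal, non-authoritative sketch of vruntime accounting in the spirit of CFS: each task's vruntime grows inversely to its weight, and the scheduler always runs the task with the smallest vruntime. The weights here are arbitrary demo values, not the kernel's nice-to-weight table.

```python
import heapq

# (vruntime, name, weight); the min-heap stands in for CFS's
# red-black tree ordered by vruntime.
tasks = [(0.0, "low", 1.0), (0.0, "high", 4.0)]
heapq.heapify(tasks)

order = []
for _ in range(5):
    vruntime, name, weight = heapq.heappop(tasks)  # smallest vruntime runs
    order.append(name)
    # charge a fixed 10-unit timeslice, scaled down by the task's weight
    heapq.heappush(tasks, (vruntime + 10.0 / weight, name, weight))

print(order)  # ['high', 'low', 'high', 'high', 'high']
```

Over five slots the weight-4 task runs four times and the weight-1 task once, matching the 4:1 weight ratio: this per-queue fairness is what the allocation steps above extend across CPUs.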
In the embodiment of the application, the priority of the target task to be run is acquired, the CPUs whose remaining computing capacity satisfies the target task are obtained from the plurality of CPUs included in the system, the highest priority of the tasks on each qualifying CPU is then acquired, a suitable CPU is found according to the priority of the target task and the highest priority of the tasks on each CPU, and the target task is finally allocated to that CPU to run using a scheduling algorithm. By considering the priorities of the tasks together with the task priorities on the different CPUs across the whole system, a scheduling algorithm (such as the CFS algorithm) is used to select (schedule) a suitable CPU for each task, so that tasks preempt resources according to their priorities; this ensures vruntime fairness among the tasks running on a single CPU as well as vruntime fairness across the whole system, improving system performance. In addition, limiting the number of loop iterations saves task-allocation time and improves allocation efficiency.
The following are embodiments of the apparatus of the present application that may be used to perform embodiments of the method of the present application. For details which are not disclosed in the embodiments of the apparatus of the present application, reference is made to the embodiments of the method of the present application.
Referring to fig. 5, a schematic structural diagram of a task allocation apparatus according to an exemplary embodiment of the present application is shown. The task assigning means may be implemented as all or part of the user terminal in software, hardware or a combination of both. The apparatus 1 includes a first priority acquisition module 10, a second priority acquisition module 20, and a task allocation module 30.
A first priority obtaining module 10, configured to obtain a priority of a target task to be executed, where the priority is used as a first priority, and a CPU set whose remaining computing capacity satisfies the target task is obtained from a plurality of Central Processing Units (CPUs) included in a system;
a second priority obtaining module 20, configured to obtain a highest priority of a task on each CPU in the CPU set, where the highest priority is used as a second priority;
and the task allocation module 30 is configured to determine a target CPU in the CPU set based on the first priority and the second priority of the task on each CPU, and allocate the target task to the target CPU according to a scheduling algorithm for running.
Optionally, the task allocation module 30 is specifically configured to:
and when the second priorities of the tasks on the CPUs are all smaller than the first priority, determining the CPU with the largest remaining computing capacity in the CPU set as the target CPU, or determining the CPU with the smallest computing power consumption in the CPU set as the target CPU.
Optionally, the task allocation module 30 is specifically configured to:
and when the first priority lies between the second priorities of the tasks on the CPUs, determining the CPU with the smallest second priority in the CPU set as the target CPU.
Optionally, the task allocation module 30 is specifically configured to:
when the second priorities of the tasks on the CPUs are all larger than the first priority, acquiring the second-highest priority of the tasks on each CPU in the CPU set, and taking it as a third priority;
taking the third priority as the second priority, determining a target CPU in the CPU set based on the first priority and the second priorities of the tasks on the CPUs, and allocating the target task to the target CPU to run according to a scheduling algorithm;
and when the lowest priority of the tasks on the CPUs is determined to be larger than the first priority, determining the CPU with the largest remaining computing capacity in the CPU set as the target CPU.
Optionally, the task allocation module 30 is further configured to:
counting the number of loop iterations of the step of determining a target CPU in the CPU set based on the first priority and the second priorities of the tasks on the CPUs and allocating the target task to the target CPU according to a scheduling algorithm;
and when the number of loop iterations reaches a specified number, determining any CPU in the CPU set as the target CPU.
Optionally, the second priority obtaining module 20 is specifically configured to:
acquiring at least one linked list corresponding to each CPU in the CPU set, wherein each linked list records a task set with the same priority;
and traversing at least one linked list corresponding to each CPU, and reading a second priority corresponding to each CPU.
Optionally, the first priority obtaining module 10 is specifically configured to:
acquiring the priority of a target task to be operated, and taking the priority as a first priority;
judging whether the first priority is greater than a first threshold and smaller than a second threshold, wherein the first threshold is smaller than or equal to the second threshold;
and if so, acquiring a CPU set with the residual computing capacity meeting the target task from a plurality of Central Processing Units (CPUs) in the system.
It should be noted that, when the task allocation apparatus provided in the foregoing embodiment executes the task allocation method, only the division of the functional modules is illustrated, and in practical applications, the function allocation may be completed by different functional modules according to needs, that is, the internal structure of the device is divided into different functional modules, so as to complete all or part of the functions described above. In addition, the task allocation apparatus and the task allocation method provided in the above embodiments belong to the same concept, and details of implementation processes thereof are referred to in the method embodiments and are not described herein again.
The above-mentioned serial numbers of the embodiments of the present application are merely for description and do not represent the merits of the embodiments.
In the embodiment of the application, the priority of the target task to be run is acquired, the CPUs whose remaining computing capacity satisfies the target task are obtained from the plurality of CPUs included in the system, the highest priority of the tasks on each qualifying CPU is then acquired, a suitable CPU is found according to the priority of the target task and the highest priority of the tasks on each CPU, and the target task is finally allocated to that CPU to run using a scheduling algorithm. By considering the priorities of the tasks together with the task priorities on the different CPUs across the whole system, a scheduling algorithm (such as the CFS algorithm) is used to select (schedule) a suitable CPU for each task, so that tasks preempt resources according to their priorities; this ensures vruntime fairness among the tasks running on a single CPU as well as vruntime fairness across the whole system, improving system performance. In addition, limiting the number of loop iterations saves task-allocation time and improves allocation efficiency.
An embodiment of the present application further provides a computer storage medium, where the computer storage medium may store a plurality of instructions, where the instructions are suitable for being loaded by a processor and executing the method steps in the embodiments shown in fig. 1 to 4c, and a specific execution process may refer to specific descriptions of the embodiments shown in fig. 1 to 4c, which is not described herein again.
Fig. 6 is a schematic structural diagram of an electronic device according to an embodiment of the present disclosure. As shown in fig. 6, the electronic device 1000 may include: at least one processor 1001, at least one network interface 1004, a user interface 1003, memory 1005, at least one communication bus 1002.
Wherein a communication bus 1002 is used to enable connective communication between these components.
The user interface 1003 may include a Display screen (Display) and a Camera (Camera), and the optional user interface 1003 may also include a standard wired interface and a wireless interface.
The network interface 1004 may optionally include a standard wired interface, a wireless interface (e.g., WI-FI interface), among others.
Processor 1001 may include one or more processing cores. The processor 1001 connects various parts throughout the electronic device 1000 using various interfaces and lines, and performs the various functions of the electronic device 1000 and processes data by running or executing instructions, programs, code sets, or instruction sets stored in the memory 1005 and by invoking data stored in the memory 1005. Alternatively, the processor 1001 may be implemented in at least one hardware form of Digital Signal Processing (DSP), Field-Programmable Gate Array (FPGA), or Programmable Logic Array (PLA). The processor 1001 may integrate one or more of a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), a modem, and the like. The CPU mainly handles the operating system, user interface, application programs, and the like; the GPU renders and draws the content to be displayed by the display screen; and the modem handles wireless communication. It is understood that the modem may not be integrated into the processor 1001 and may instead be implemented by a separate chip.
The Memory 1005 may include a Random Access Memory (RAM) or a Read-Only Memory (ROM). Optionally, the memory 1005 includes a non-transitory computer-readable medium. The memory 1005 may be used to store instructions, programs, code, code sets, or instruction sets. The memory 1005 may include a program storage area and a data storage area, where the program storage area may store instructions for implementing an operating system, instructions for at least one function (such as a touch function, a sound playing function, or an image playing function), instructions for implementing the various method embodiments described above, and the like; the data storage area may store the data referred to in the above method embodiments. Optionally, the memory 1005 may also be at least one storage device located remotely from the processor 1001. As shown in fig. 6, the memory 1005, as a kind of computer storage medium, may include an operating system, a network communication module, a user interface module, and a task allocation application.
In the electronic device 1000 shown in fig. 6, the user interface 1003 is mainly used as an interface for providing input for a user, and acquiring data input by the user; and the processor 1001 may be configured to invoke the task allocation application stored in the memory 1005 and specifically perform the following operations:
acquiring the priority of a target task to be operated, taking the priority as a first priority, and acquiring a CPU set with the residual computing capacity meeting the target task from a plurality of Central Processing Units (CPUs) in a system;
acquiring the highest priority of tasks on each CPU in the CPU set, and taking the highest priority as a second priority;
and determining a target CPU in the CPU set based on the first priority and the second priority of the tasks on the CPUs, and distributing the target tasks to the target CPU to run according to a scheduling algorithm.
In one embodiment, when the processor 1001 determines a target CPU in the CPU set based on the first priority and the second priority of the task on each CPU, the following operations are specifically performed:
and when the second priorities of the tasks on the CPUs are all smaller than the first priority, determining the CPU with the largest remaining computing capacity or the smallest computing power consumption in the CPU set as the target CPU.
In one embodiment, when the processor 1001 determines a target CPU in the CPU set based on the first priority and the second priority of the task on each CPU, the following operations are specifically performed:
and when the first priority lies between the second priorities of the tasks on the CPUs, determining the CPU with the smallest second priority in the CPU set as the target CPU.
In one embodiment, when the processor 1001 determines a target CPU in the CPU set based on the first priority and the second priority of the task on each CPU, the following operations are specifically performed:
when the second priorities of the tasks on the CPUs are all larger than the first priority, acquiring the second-highest priority of the tasks on each CPU in the CPU set, and taking it as a third priority;
taking the third priority as the second priority, executing the step of determining a target CPU in the CPU set based on the first priority and the second priorities of the tasks on the CPUs, and allocating the target task to the target CPU to run according to a scheduling algorithm;
and when the lowest priority of the tasks on the CPUs is determined to be larger than the first priority, determining the CPU with the largest remaining computing capacity or the smallest computing power consumption in the CPU set as the target CPU.
In one embodiment, after the step of determining a target CPU in the CPU set based on the third priority as the second priority and the second priorities of the tasks on the CPUs, and allocating the target task to the target CPU to run according to a scheduling algorithm, the processor 1001 further performs the following operations:
counting the number of loop iterations of the step of determining a target CPU in the CPU set based on the first priority and the second priorities of the tasks on the CPUs and allocating the target task to the target CPU according to a scheduling algorithm;
and when the number of loop iterations reaches a specified number, determining any CPU in the CPU set as the target CPU.
In an embodiment, when the processor 1001 acquires the highest priority of the task on each CPU in the CPU set, the following operations are specifically performed:
acquiring at least one linked list corresponding to each CPU in the CPU set, wherein each linked list records a task set with the same priority;
and traversing at least one linked list corresponding to each CPU, and reading the highest priority corresponding to each CPU.
In an embodiment, when the processor 1001 acquires the priority of the target task to be executed, takes the priority as a first priority, and acquires, from a plurality of central processing units CPU included in the system, a CPU set whose remaining computing capacity satisfies the target task, the following operations are specifically performed:
acquiring the priority of a target task to be operated, and taking the priority as a first priority;
judging whether the first priority is greater than a first threshold and smaller than a second threshold, wherein the first threshold is smaller than or equal to the second threshold;
and if so, acquiring a CPU set with the residual computing capacity meeting the target task from a plurality of Central Processing Units (CPUs) in the system.
In the embodiment of the application, the priority of the target task to be run is acquired, the CPUs whose remaining computing capacity satisfies the target task are obtained from the plurality of CPUs included in the system, the highest priority of the tasks on each qualifying CPU is then acquired, a suitable CPU is found according to the priority of the target task and the highest priority of the tasks on each CPU, and the target task is finally allocated to that CPU to run using a scheduling algorithm. By considering the priorities of the tasks together with the task priorities on the different CPUs across the whole system, a scheduling algorithm (such as the CFS algorithm) is used to select (schedule) a suitable CPU for each task, so that tasks preempt resources according to their priorities; this ensures vruntime fairness among the tasks running on a single CPU as well as vruntime fairness across the whole system, improving system performance. In addition, limiting the number of loop iterations saves task-allocation time and improves allocation efficiency.
It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above can be implemented by a computer program, which can be stored in a computer-readable storage medium, and when executed, can include the processes of the embodiments of the methods described above. The storage medium may be a magnetic disk, an optical disk, a read-only memory or a random access memory.
The above disclosure is intended only to illustrate the preferred embodiments of the present application and is not to be construed as limiting its scope; all equivalent variations and modifications made in accordance with it remain within the scope of the present application.

Claims (10)

1. A method of task allocation, the method comprising:
acquiring the priority of a target task to be operated, taking the priority as a first priority, and acquiring a CPU set with the residual computing capacity meeting the target task from a plurality of Central Processing Units (CPUs) in a system;
acquiring the highest priority of tasks on each CPU in the CPU set, and taking the highest priority as a second priority;
and determining a target CPU in the CPU set based on the first priority and the second priority of the tasks on the CPUs, and distributing the target tasks to the target CPU to run according to a scheduling algorithm.
2. The method of claim 1, wherein determining a target CPU in the set of CPUs based on the first priority and the second priority of the task on each CPU comprises:
and when the second priorities of the tasks on the CPUs are all smaller than the first priority, determining the CPU with the largest remaining computing capacity in the CPU set as the target CPU, or determining the CPU with the smallest computing power consumption in the CPU set as the target CPU.
3. The method of claim 1, wherein determining a target CPU in the set of CPUs based on the first priority and the second priority of the task on each CPU comprises:
and when the first priority lies between the second priorities of the tasks on the CPUs, determining the CPU with the smallest second priority in the CPU set as the target CPU.
4. The method of claim 1, wherein determining a target CPU in the set of CPUs based on the first priority and the second priority of the task on each CPU comprises:
when the second priorities of the tasks on the CPUs are all larger than the first priority, acquiring the second-highest priority of the tasks on each CPU in the CPU set, and taking it as a third priority;
taking the third priority as the second priority, executing the step of determining a target CPU in the CPU set based on the first priority and the second priorities of the tasks on the CPUs, and distributing the target task to the target CPU to run according to a scheduling algorithm;
and when the lowest priority of the tasks on the CPUs is determined to be larger than the first priority, determining the CPU with the largest remaining computing capacity in the CPU set as the target CPU, or determining the CPU with the smallest computing power consumption in the CPU set as the target CPU.
5. The method according to claim 4, wherein the step of determining a target CPU in the CPU set based on the first priority and the second priorities of the tasks on the CPUs, and allocating the target task to run on the target CPU according to a scheduling algorithm, further comprises:
counting the number of loop iterations of the step of determining a target CPU in the CPU set based on the first priority and the second priorities of the tasks on the CPUs and allocating the target task to the target CPU according to a scheduling algorithm;
and when the number of loop iterations reaches a specified number, determining any CPU in the CPU set as the target CPU.
6. The method of claim 1, wherein the obtaining a highest priority of the task on each CPU in the CPU set comprises:
acquiring at least one linked list corresponding to each CPU in the CPU set, wherein each linked list records a task set with the same priority;
and traversing at least one linked list corresponding to each CPU, and reading the highest priority corresponding to each CPU.
7. The method according to claim 1, wherein the obtaining the priority of the target task to be executed, taking the priority as a first priority, and obtaining, among a plurality of Central Processing Units (CPUs) included in a system, a set of CPUs whose remaining computing capacities satisfy the target task, comprises:
acquiring the priority of a target task to be operated, and taking the priority as a first priority;
judging whether the first priority is greater than a first threshold and smaller than a second threshold, wherein the first threshold is smaller than or equal to the second threshold;
and if so, acquiring a CPU set with the residual computing capacity meeting the target task from a plurality of CPUs included in the system.
8. A task assigning apparatus, characterized in that the apparatus comprises:
the system comprises a first priority acquisition module, a second priority acquisition module and a third priority acquisition module, wherein the first priority acquisition module is used for acquiring the priority of a target task to be operated, taking the priority as a first priority, and acquiring a CPU set with the residual computing capacity meeting the target task from a plurality of Central Processing Units (CPUs) in the system;
a second priority obtaining module, configured to obtain a highest priority of a task on each CPU in the CPU set, and use the highest priority as a second priority;
and the task allocation module is used for determining a target CPU in the CPU set based on the first priority and the second priority and allocating the target task to the target CPU to run according to a scheduling algorithm.
9. A computer storage medium, characterized in that it stores a plurality of instructions adapted to be loaded by a processor and to carry out the method steps according to any one of claims 1 to 7.
10. An electronic device, comprising: a processor and a memory; wherein the memory stores a computer program adapted to be loaded by the processor and to perform the method steps of any of claims 1 to 7.
CN201911244499.XA 2019-12-06 2019-12-06 Task allocation method and device, storage medium and electronic equipment Pending CN112925616A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911244499.XA CN112925616A (en) 2019-12-06 2019-12-06 Task allocation method and device, storage medium and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911244499.XA CN112925616A (en) 2019-12-06 2019-12-06 Task allocation method and device, storage medium and electronic equipment

Publications (1)

Publication Number Publication Date
CN112925616A true CN112925616A (en) 2021-06-08

Family

ID=76161898

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911244499.XA Pending CN112925616A (en) 2019-12-06 2019-12-06 Task allocation method and device, storage medium and electronic equipment

Country Status (1)

Country Link
CN (1) CN112925616A (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113641476A (en) * 2021-08-16 2021-11-12 腾讯科技(深圳)有限公司 Task scheduling method, game engine, equipment and storage medium
CN114995984A (en) * 2022-07-19 2022-09-02 深圳市乐易网络股份有限公司 Distributed super-concurrent cloud computing system
CN116991246A (en) * 2023-09-27 2023-11-03 之江实验室 Algorithm scheduling method and device for navigation robot and navigation robot system

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20060150187A1 (en) * 2005-01-06 2006-07-06 International Business Machines Corporation Decoupling a central processing unit from its tasks
CN101458634A (en) * 2008-01-22 2009-06-17 中兴通讯股份有限公司 Load equilibration scheduling method and device
CN102866920A (en) * 2012-08-02 2013-01-09 杭州海康威视系统技术有限公司 Master-slave structure distributed video processing system and scheduling method thereof
CN104090826A (en) * 2014-06-30 2014-10-08 中国电子科技集团公司第三十二研究所 Task optimization deployment method based on correlation
CN104915256A (en) * 2015-06-05 2015-09-16 惠州Tcl移动通信有限公司 Method and system for realizing real-time scheduling of task
CN108563500A (en) * 2018-05-08 2018-09-21 深圳市零度智控科技有限公司 Method for scheduling task, cloud platform based on cloud platform and computer storage media
CN109308212A (en) * 2017-07-26 2019-02-05 上海华为技术有限公司 A kind of task processing method, task processor and task processing equipment

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113641476A (en) * 2021-08-16 2021-11-12 腾讯科技(深圳)有限公司 Task scheduling method, game engine, equipment and storage medium
CN113641476B (en) * 2021-08-16 2023-07-14 腾讯科技(深圳)有限公司 Task scheduling method, game engine, device and storage medium
CN114995984A (en) * 2022-07-19 2022-09-02 深圳市乐易网络股份有限公司 Distributed super-concurrent cloud computing system
CN116991246A (en) * 2023-09-27 2023-11-03 之江实验室 Algorithm scheduling method and device for navigation robot and navigation robot system

Similar Documents

Publication Publication Date Title
US20130212594A1 (en) Method of optimizing performance of hierarchical multi-core processor and multi-core processor system for performing the method
WO2016197716A1 (en) Task scheduling method and device
US8875146B2 (en) Systems and methods for bounding processing times on multiple processing units
US9973512B2 (en) Determining variable wait time in an asynchronous call-back system based on calculated average sub-queue wait time
CN109564528B (en) System and method for computing resource allocation in distributed computing
CN108549574B (en) Thread scheduling management method and device, computer equipment and storage medium
US9947068B2 (en) System and method for GPU scheduling
CN112925616A (en) Task allocation method and device, storage medium and electronic equipment
CN112214319B (en) Task scheduling method for sensing computing resources
CN109445565B (en) GPU service quality guarantee method based on monopolization and reservation of kernel of stream multiprocessor
CN111597044A (en) Task scheduling method and device, storage medium and electronic equipment
JP2022539955A (en) Task scheduling method and apparatus
CN114637536A (en) Task processing method, computing coprocessor, chip and computer equipment
CN114461365A (en) Process scheduling processing method, device, equipment and storage medium
US9760969B2 (en) Graphic processing system and method thereof
CN113296957B (en) Method and device for dynamically distributing network bandwidth on chip
CN116795503A (en) Task scheduling method, task scheduling device, graphic processor and electronic equipment
CN112783651B (en) Load balancing scheduling method, medium and device for vGPU of cloud platform
CN116244073A (en) Resource-aware task allocation method for hybrid key partition real-time operating system
CN112114967B (en) GPU resource reservation method based on service priority
CN114661415A (en) Scheduling method and computer system
Pang et al. Efficient CUDA stream management for multi-DNN real-time inference on embedded GPUs
CN116841751B (en) Policy configuration method, device and storage medium for multi-task thread pool
CN117149440B (en) Task scheduling method and device, electronic equipment and storage medium
CN112860395B (en) Multitask scheduling method for GPU

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination