CN111026518B - Task scheduling method


Info

Publication number
CN111026518B
Authority
CN
China
Prior art keywords
job
task
weight
scheduled
target
Prior art date
Legal status
Active
Application number
CN201811179192.1A
Other languages
Chinese (zh)
Other versions
CN111026518A (en)
Inventor
Inventor not disclosed
Current Assignee
Shanghai Cambricon Information Technology Co Ltd
Original Assignee
Shanghai Cambricon Information Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Shanghai Cambricon Information Technology Co Ltd
Priority: CN201811179192.1A
PCT application: PCT/CN2019/110273 (published as WO2020073938A1)
Publication of CN111026518A
Application granted
Publication of CN111026518B

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/48Program initiating; Program switching, e.g. by interrupt
    • G06F9/4806Task transfer initiation or dispatching
    • G06F9/4843Task transfer initiation or dispatching by program, e.g. task dispatcher, supervisor, operating system
    • G06F9/4881Scheduling strategies for dispatcher, e.g. round robin, multi-level priority queues

Abstract

The application relates to a task scheduling method, which can be used for scheduling tasks efficiently and reasonably according to configuration information of the tasks and state information of a processor.

Description

Task scheduling method
Technical Field
The present application relates to the field of computer technologies, and in particular, to a task scheduling method.
Background
With the rapid development of computer technology, multiprocessor computer systems, such as multi-core processor computing systems and systems containing a plurality of processors, have appeared. For example, the plurality of processors may include a master processor and a plurality of slave processors; the master processor may be configured to allocate and schedule the tasks to be processed, and may also monitor and control the operation of the slave processors. However, when the volume of tasks to be processed is large, for example when large-scale machine learning data needs to be processed, performing task allocation and scheduling on the master processor is inefficient and degrades the processing efficiency of the computer system.
Disclosure of Invention
In view of the above technical problems, it is necessary to provide a task scheduling method capable of scheduling tasks efficiently.
A method of task scheduling, the method comprising:
acquiring decomposition information and all task information of a task and state information of a processor;
according to the decomposition information and all task information of each task and the state information of the processor, respectively matching each job of the task with the processor, and adding the job successfully matched with the processor to a job set to be scheduled;
and selecting target jobs from the job set to be scheduled according to the target weight of each job in the job set to be scheduled to obtain scheduling information, wherein the scheduling information is used for determining the execution sequence of the jobs on a processor.
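For illustration only (this sketch is not part of the claimed method), the three steps above can be expressed in Python as follows; every class, field, and matching rule here is a simplifying assumption:
```python
from dataclasses import dataclass

@dataclass
class Job:
    job_id: int
    task_id: int
    size: int             # data capacity of the job
    target_weight: float  # derived from expected and history weights

@dataclass
class Processor:
    proc_id: int
    idle: bool
    capability: int       # how much data the processor handles at once

def schedule(jobs, processors):
    # Step 2: match each job with a processor; successful matches join
    # the job set to be scheduled.
    to_be_scheduled = []
    for job in jobs:
        for proc in processors:
            if proc.idle and proc.capability >= job.size:
                to_be_scheduled.append((job, proc.proc_id))
                break
    if not to_be_scheduled:
        return None  # no match yet; a real scheduler would retry or fail
    # Step 3: the job with the largest target weight becomes the target
    # job; the returned record stands in for the scheduling information.
    job, proc_id = max(to_be_scheduled, key=lambda m: m[0].target_weight)
    return {"job_id": job.job_id, "processor_id": proc_id}
```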
In one embodiment, the method further comprises:
and if one or more jobs of the task are not successfully matched with a processor within a preset time, acquiring a scheduling failure signal for the task.
In one embodiment, the step of selecting a target job from the set of jobs to be scheduled according to the target weight of each job in the set of jobs to be scheduled includes:
determining the scheduling priority of each job in the job set to be scheduled according to the target weight of each job;
and according to the scheduling priority of each job, taking the job with the highest scheduling priority in the job set to be scheduled as the target job.
In one embodiment, when the number of the job sets to be scheduled is more than one, each job set to be scheduled stores jobs of the same job category, and the step of selecting a target job from the job sets to be scheduled according to a target weight of each job in the job sets to be scheduled includes:
determining the target weight of each job in each job set to be scheduled according to the expected weight and the current historical weight of a plurality of jobs in each job set to be scheduled;
taking the job with the largest target weight in each job set to be scheduled as the pre-launch job of the corresponding job category;
and determining the target job according to the target weight of each pre-launch job.
In one embodiment, the step of determining the target weight of each job in each set of jobs to be scheduled according to the expected weight and the current historical weight of a plurality of jobs in each set of jobs to be scheduled includes:
correspondingly obtaining the expected weight of each job in each job set to be scheduled according to the configuration weight of each job in each job set to be scheduled and the total configuration weight of a plurality of jobs in each job set to be scheduled;
obtaining the current historical weight corresponding to each job in each job set to be scheduled according to the expected weight of each job in each job set to be scheduled;
and calculating a weight difference value of the expected weight and the current historical weight of each job in each job set to be scheduled, and obtaining the target weight of each job according to the weight difference value.
In one embodiment, the configuration weight of each job in each job set to be scheduled is the configuration weight of the task to which the job belongs, and the expected weight of the job is the expected weight of the task to which the job belongs.
In one embodiment, the step of obtaining the current historical weight corresponding to each job in each set of jobs to be scheduled according to the expected weight of each job in each set of jobs to be scheduled includes:
determining a delay factor corresponding to each job according to the expected weight of each job in each job set to be scheduled;
and obtaining the current historical weight of the job according to the initial historical weight of each job in each job set to be scheduled and the delay factor corresponding to the job.
In one embodiment, the method further comprises:
and if a plurality of jobs of the same task are newly added to a certain job set to be scheduled, or after the plurality of jobs of the same task have all been launched, updating the expected weight and the initial history weight of each job in that job set to be scheduled.
In one embodiment, the step of determining the target job according to the target weight of each pre-launch job comprises:
if the target weights of the pre-launch jobs are the same, determining the target job according to the expected weight of each pre-launch job;
and if the target weights of the pre-launch jobs differ, taking the pre-launch job with the largest target weight as the target job.
In one embodiment, the method further comprises:
and acquiring a processor identifier corresponding to the job successfully matched with the processor, wherein the processor identifier is used for identifying the identity of the processor.
A task scheduling apparatus, the apparatus comprising:
the first read-write circuit is used for acquiring the decomposition information and all task information of the tasks and the state information of a processor;
the matching circuit is used for respectively matching each job of the task with the processor according to the decomposition information and all task information of each task and the state information of the processor, and adding the job successfully matched with the processor to a job set to be scheduled; and
the selection circuit is used for selecting a target job from the job set to be scheduled according to the target weight of each job in the job set to be scheduled to obtain scheduling information, and the scheduling information is used for determining the execution sequence of the jobs on a processor;
wherein the selection circuit comprises an operator for:
correspondingly obtaining the expected weight of each job in each job set to be scheduled according to the configuration weight of each job in each job set to be scheduled and the total configuration weight of a plurality of jobs in each job set to be scheduled; the configuration weight of each job is the configuration weight of the task to which the job belongs;
obtaining the current historical weight corresponding to each job in each job set to be scheduled according to the expected weight of each job in each job set to be scheduled; the current historical weight is the configuration weight of each job or the historical weight in the last scheduling process;
and determining the target weight of each job in the job set to be scheduled according to the expected weight and the current historical weight of each job in the job set to be scheduled.
In one embodiment, the matching circuit is further configured to obtain a scheduling failure signal for the task when one or more jobs of the task are not successfully matched with a processor within a preset time.
In one embodiment, the selection circuit further comprises a selector, wherein:
the arithmetic unit is used for determining the scheduling priority of each job according to the target weight of each job in the job set to be scheduled; and
and the selector is used for taking the job with the highest scheduling priority in the job set to be scheduled as the target job according to the scheduling priority of each job.
In one embodiment, when the number of the job sets to be scheduled is more than one, each of the job sets to be scheduled stores jobs of the same job category, and the selection circuit further comprises a selector, wherein:
the arithmetic unit is used for taking the job with the largest target weight in each job set to be scheduled as the pre-launch job of that job category; and
the selector is used for determining the target job according to the target weight of each pre-launch job.
In one embodiment, the arithmetic unit includes:
and the third operation unit is used for calculating a weight difference value of the expected weight and the current historical weight of each job in each job set to be scheduled, and acquiring the target weight of each job according to the weight difference value.
In one embodiment, the arithmetic unit further comprises:
the delay subunit is used for determining a delay factor corresponding to each job according to the expected weight of each job; and
and the updating subunit is used for determining the current history weight of the job according to the initial history weight of each job and the delay factor corresponding to the job.
In one embodiment, the delay subunit is further configured to determine the delay factor corresponding to each job according to the expected weight of each job and a preset mapping relationship.
In one embodiment, the update subunit is further configured to:
if none of the jobs of a certain task is selected as the target job in the current round of scheduling, taking the ratio of the initial history weight of each job of that task to its delay factor as the adjustment factor of the job, and taking the difference between the initial history weight of each job and its corresponding adjustment factor as the current history weight of the job;
if a job of a certain task is selected as the target job in the current round of scheduling, taking the ratio of the initial history weight of each job of that task to its delay factor as a first adjustment factor of the job, taking the ratio of the maximum delay factor to the delay factor corresponding to the job as a second adjustment factor of the job, and calculating the current history weight of each job from its initial history weight, first adjustment factor, and second adjustment factor.
In one embodiment, the updating subunit is further configured to update the expected weight and the initial history weight of each job in the job set to be scheduled corresponding to a job category if a plurality of jobs of the same task are newly added to that set, or after the plurality of jobs of the same task have all been launched.
In one embodiment, the selector is further configured to:
when the target weights of the pre-launch jobs are the same, determining the target job according to the expected weight of each pre-launch job;
and when the target weights of the pre-launch jobs differ, taking the pre-launch job with the largest target weight as the target job.
In one embodiment, the first read-write circuit is further configured to:
and after the scheduling information is obtained, acquire all task information of the currently scheduled task to which the target job belongs from a task cache device, and transmit the scheduling information, together with all task information and decomposition information of that task, to a second processor.
In one embodiment, the selection circuit is further configured to transmit the scheduling information to the corresponding processor after setting the processor lock signal to a high level according to the scheduling information; setting the corresponding processor lock signal to a low level after completion of the transmission of the scheduling information.
A task scheduler comprising the task scheduling apparatus of any of the above embodiments.
According to the above task scheduling method, a processor is matched to each job of a task according to the decomposition information and all task information of the task and the state information of the processors, yielding a set of jobs to be scheduled, so that jobs assigned to a processor can be processed promptly once scheduling finishes. A target job is then selected from the set according to the target weight of each job, producing the scheduling information; this ensures that jobs with high weights can occupy processor resources. The task scheduling method can therefore improve the processing efficiency of the processor.
Drawings
FIG. 1 is a diagram of an application environment of a task scheduler in one embodiment;
FIG. 2 is a diagram of an application environment of a task scheduler in another embodiment;
FIG. 3 is a diagram of an application environment of a task decomposition device in one embodiment;
FIG. 4 is a diagram of a status monitoring device in one embodiment;
FIG. 5 is a block diagram of a computing device, according to an embodiment;
FIG. 6 is a block diagram of a computing device provided in accordance with another embodiment;
FIG. 7 is a block diagram of a main processing circuit provided by one embodiment;
FIG. 8 is a block diagram of a computing device provided in one embodiment;
FIG. 9 is a block diagram of another computing device provided in one embodiment;
FIG. 10 is a schematic diagram illustrating a tree module according to an embodiment;
FIG. 11 is a block diagram of a computing device provided in one embodiment;
FIG. 12 is a block diagram of a computing device provided by an embodiment;
FIG. 13 is a block diagram of a computing device provided by an embodiment;
FIG. 14 is a flowchart illustrating the steps of a method for task scheduling according to one embodiment;
FIG. 15 is a flowchart illustrating steps performed in one embodiment to determine a target job;
FIG. 16 is a flowchart showing the steps for determining a target job set forth in another embodiment;
FIG. 17 is a flowchart of the steps set forth in one embodiment for determining a target weight for a job;
FIG. 18 is a flowchart of the steps set forth in one embodiment for determining a current historical weight for a job;
FIG. 19 is a flowchart illustrating steps presented in one embodiment for determining a target job.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
In one embodiment, as shown in fig. 1, the task scheduling device 100 may include a first read/write circuit 110, a matching circuit 120, and a selection circuit 130, which are electrically connected in sequence, with the selection circuit 130 connected to the processor. Optionally, the processor may include the first processor 200 and/or the second processor 300. Alternatively, the first processor 200 may be a general-purpose processor such as a CPU, and the second processor 300 may be a coprocessor of the first processor 200. The task scheduling apparatus 100 may process the decomposition information and all task information of the tasks to obtain scheduling information, from which the processor determines the jobs to be processed and their processing order. The scheduling information may include job identifiers of a plurality of jobs, processor identity information corresponding to each job, and the order in which the plurality of jobs are to be processed by the corresponding processor.
Optionally, the second processor 300 may include a second processor body 310 and a control device 320 for controlling the operation of the second processor body. The second processor body 310 may be an artificial-intelligence processor such as an IPU (Intelligence Processing Unit) or an NPU (Neural-network Processing Unit). Of course, in other embodiments, the second processor body 310 may also be a general-purpose processor such as a CPU or a GPU. Optionally, a plurality of second processor bodies may each be connected to the control device. Alternatively, the second processor body 310 may include a plurality of core processors, each connected to the control device. Specifically, the first read/write circuit 110 is configured to, when receiving a task scheduling request for a task, obtain the decomposition information and all task information of the task and the state information of the processor according to that request. Alternatively, the first read/write circuit 110 may be an I/O circuit.
The matching circuit 120 is configured to match each job of the task with a processor according to the decomposition information and all task information of each task and the state information of the processor, and to add each job successfully matched with a processor to the job set to be scheduled. The job set to be scheduled may include jobs of a plurality of tasks. Further, if one or more jobs of the task are not successfully matched with a processor within a preset time (such as 128 beats or 256 beats), a scheduling failure signal for the task is acquired.
Specifically, the matching circuit 120 may obtain the processor information (such as the processor type) required by each job of the task, and obtain the processing capability required of the processor according to the size of each job. The processor state information may include the processor type, the operating state of the processor (e.g., whether the processor is idle), and the processing capability of the processor. In this way, the matching circuit 120 can match each job of the task with a processor based on the task's overall task information, its decomposition information, and the processor state information. Alternatively, the matching circuit 120 may be formed by connecting more than one comparator in parallel; the input data of each comparator may be the decomposition information and all task information of a job together with the processor state information, and the output of the comparator may be a match-success or match-failure signal. Further, if the job is successfully matched with a processor, the matching circuit may also obtain the processor identifier of the matched processor, which identifies the identity of the processor (e.g., a processor number).
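As a sketch only, one comparator of the matching circuit can be modeled as a predicate over a job's requirements and a processor's state; the field names below are assumptions, not the patent's interface:
```python
def comparator(job_req: dict, proc_state: dict) -> bool:
    # The match succeeds only if the processor is of the required type,
    # is idle, and has enough processing capability for the job size.
    return (proc_state["type"] == job_req["required_type"]
            and proc_state["idle"]
            and proc_state["capability"] >= job_req["job_size"])

# On success, the matching circuit would also record the processor
# identifier (e.g. proc_state["proc_id"]) alongside the job.
```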
The selection circuit 130 is configured to select a target job from the job set to be scheduled according to the target weight of each job in the set, obtaining the scheduling information. Specifically, the task scheduling device 100 may send the jobs in the job set to be scheduled to the processor one by one for processing, and the selection circuit 130 determines the target job of the current round of scheduling according to the target weight of each job in the set. The target weight of each job in the job set to be scheduled may be obtained by calculation, or it may be preset.
Alternatively, in one embodiment, the selection circuit 130 may include an operator and a selector connected to the operator; the operator may be connected to the matching circuit 120, and the selector may be connected to the second processor 300. The operator is used for determining the scheduling priority of each job according to the target weight of each job in the job set to be scheduled; that is, the operator may sort the jobs by their target weights to obtain the scheduling priority of each job. The selector is used for taking the job with the highest scheduling priority in the job set to be scheduled as the target job according to the scheduling priority of each job, and for obtaining the scheduling information. The job with the highest scheduling priority may be the job with the largest target weight; that is, the target job is the job with the largest target weight in the job set to be scheduled. The job with the largest target weight is therefore scheduled first, so that the target job can preempt processor resources, which optimizes the task scheduling process.
In one embodiment, the number of job sets to be scheduled is more than one, each job set to be scheduled stores jobs of the same job category, and the job category of a job may be the same as the task category of the task to which it belongs. In particular, the selection circuit comprises an operator, which may be connected to the matching circuit, and a selector, which may be connected to the second processor. The operator is used for determining the target weight of each job in the job set to be scheduled of each job category according to the expected weight and current history weight of the jobs in that set, and for taking the job with the largest target weight in the set as the pre-launch job of that job category. The selector is used for determining the target job according to the target weight of each pre-launch job, and for obtaining the scheduling information.
Alternatively, the selector may compare the target weights of the pre-launch jobs and take the pre-launch job with the largest target weight as the target job. If the target weights of the pre-launch jobs are the same, the selector may determine the target job based on their desired weights; for example, the selector may take the pre-launch job with the largest desired weight as the target job.
For example, the task categories may be block (blocking task), cluster (clustering task), and union (union task). The jobs contained in a blocking task are blocking jobs, abbreviated as job category B; the jobs contained in a clustering task are clustering jobs, abbreviated as job category C; and the jobs contained in a union task are union jobs, abbreviated as job category U.
Job set one to be scheduled, corresponding to job category U, may be represented as follows:
[Table: job set one (category U) — job 1 with desired weight WU1 and current history weight HU1, job 2 with WU2 and HU2, ..., job n with WUn and HUn]
The operator may calculate the target weight TU1 of job 1 from its desired weight WU1 and current history weight HU1, and may similarly calculate the target weights of jobs 2 to n. Further, the operator may sort the target weights of jobs 1 to n and take the job with the largest target weight as the pre-launch job. For example, the pre-launch job of job set one is job 1.
Job set two to be scheduled, corresponding to job category B, may be represented as follows:
[Table: job set two (category B) — job 1 with desired weight WB1 and current history weight HB1, ..., job m with WBm and HBm]
The operator may calculate the target weight TB1 of job 1 from its desired weight WB1 and current history weight HB1, and may similarly calculate the target weights of jobs 2 to m. Further, the operator may sort the target weights of jobs 1 to m and take the job with the largest target weight as the pre-launch job. For example, the pre-launch job of job set two is job 2.
Job set three to be scheduled, corresponding to job category C, may be represented as follows:
[Table: job set three (category C) — job 1 with desired weight WC1 and current history weight HC1, ..., job k with WCk and HCk]
The operator may calculate the target weight TC1 of job 1 from its desired weight WC1 and current history weight HC1, and may similarly calculate the target weights of jobs 2 to k. Further, the operator may sort the target weights of jobs 1 to k and take the job with the largest target weight as the pre-launch job. For example, the pre-launch job of job set three is job 3.
Thereafter, the selector may determine the target job from these three pre-launch jobs. Specifically, if TU1 is greater than TB2 and TB2 is greater than TC3, job 1 in job set one may be taken as the target job. If TU1, TB2, and TC3 are equal, WU1, WB2, and WC3 may be compared; if WU1 is greater than WB2 and WB2 is greater than WC3, job 1 in job set one may be taken as the target job.
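A sketch of this two-level selection with purely illustrative weights (in the device, the operator supplies the target weights):
```python
def pick_pre_launch(job_set):
    # Per category: the job with the largest target weight becomes
    # the pre-launch job of that category.
    return max(job_set, key=lambda j: j["target"])

def pick_target(job_sets):
    # Across categories: compare the pre-launch jobs' target weights;
    # break ties with the desired (expected) weights.
    pre_launch = [pick_pre_launch(s) for s in job_sets if s]
    return max(pre_launch, key=lambda j: (j["target"], j["desired"]))

set_u = [{"name": "U job 1", "target": 0.7, "desired": 0.5}]
set_b = [{"name": "B job 2", "target": 0.4, "desired": 0.3}]
set_c = [{"name": "C job 3", "target": 0.4, "desired": 0.2}]
print(pick_target([set_u, set_b, set_c])["name"])  # -> U job 1
```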
Optionally, the desired weight of a job is the desired weight of the task to which it belongs. For example, in a job set to be scheduled, job 1 and job 2 may belong to one task, job 3 and job 4 to another, and job n to yet another. The desired weights of job 1 and job 2 are then equal to the desired weight WB1 of the task to which they belong, and the desired weights of job 3 and job 4 are equal to the desired weight WB2 of their task. Of course, in other embodiments, the desired weights of jobs in the same task need not be the same.
Further, the operator may include a first operation unit (ALU1), a second operation unit (ALU2), and a third operation unit (ALU3), connected in sequence; the first operation unit may be connected to the matching circuit, and the third operation unit may be connected to the selector.
The first operation unit is used for obtaining the expected weight of each job in the job set to be scheduled corresponding to the job category according to the configuration weight of each job in the job set to be scheduled corresponding to the job category and the total configuration weight of a plurality of jobs in the job set to be scheduled corresponding to the job category. The configuration weight of each job may be an initial weight of each job, which is included in basic task information of a task to which the job belongs. The desired weight for the job may be equal to a ratio of the configuration weight for the job to the total configuration weight in the set of jobs to be scheduled.
Optionally, the configuration weight of each job in the job set to be scheduled of a job category is the configuration weight of the task to which the job belongs; that is, the jobs of the same task share the same configuration weight. In this case, the first operation unit only needs to calculate the expected weight of each job from the configuration weight of the task to which it belongs and the total configuration weight of the tasks in the job set to be scheduled; that is, the desired weight of a job may be equal to the ratio of the configuration weight of its task to the total configuration weight of the tasks in the set.
Continuing the above example, the n jobs in job set one may belong to three tasks: task 1, task 2, and task 3, whose configuration weights are denoted S1, S2, and S3 respectively. The desired weight WU1 of task 1 may then be equal to S1/(S1+S2+S3); similarly, the desired weight WU2 of task 2 may be equal to S2/(S1+S2+S3). The desired weight of each job in the job set is recorded as the desired weight of the task to which it belongs. The desired weights of the jobs in job sets two and three are calculated in the same way and are not described again here.
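A minimal sketch of this calculation; the task names and configuration weights are illustrative only:
```python
def desired_weights(config_weights):
    # Desired weight of a task = its configuration weight divided by the
    # total configuration weight of the tasks sharing the job set; each
    # job inherits the desired weight of its task.
    total = sum(config_weights.values())
    return {task: w / total for task, w in config_weights.items()}

# WU1 = S1 / (S1 + S2 + S3), as in the example above:
print(desired_weights({"task1": 2, "task2": 1, "task3": 1}))
# {'task1': 0.5, 'task2': 0.25, 'task3': 0.25}
```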
Specifically, the second operation unit is configured to obtain a current historical weight corresponding to each job in the to-be-scheduled job set corresponding to the job category according to the expected weight of each job in the to-be-scheduled job set corresponding to the job category. Optionally, the second operation unit may obtain a current historical weight corresponding to each job according to an expected weight of each job in each job set to be scheduled and a preset mapping relationship.
Optionally, the second operation unit may include a delay subunit and an updating subunit; the delay subunit is connected to the first operation unit, and the updating subunit is connected to the delay subunit and the first operation unit. The delay subunit is used for determining the delay factor corresponding to each job according to the expected weight of each job, and the updating subunit is used for obtaining the current history weight of each job from its initial history weight and delay factor. The initial history weight of a job may be its configuration weight or its history weight from the previous round of scheduling.
Optionally, the delay subunit may determine the delay factor corresponding to each job according to the expected weight of each job and a preset mapping relationship. For example, the preset mapping relationship is shown in the following table:
[Table: preset mapping from desired weight to delay factor (larger desired weights map to smaller delay factors)]
as can be seen from the above table, the larger the desired weight, the smaller the delay factor. I.e., the greater the expected weight of a job, the higher its scheduling priority may be.
Further, after one round of scheduling finishes, the updating subunit obtains the current history weight of each job from its initial history weight and its delay factor. If no job of a task was scheduled in the current round, that is, none of the task's jobs was selected as the target job, the updating subunit may take the ratio of each job's initial history weight to its delay factor as the job's adjustment factor, and take the difference between the initial history weight and this adjustment factor as the job's current history weight. If a job of a task was scheduled in the current round, that is, the target job belongs to the task, the updating subunit may update the history weights of the task's jobs as follows: it takes the ratio of each job's initial history weight to its delay factor as a first adjustment factor, takes the ratio of the maximum delay factor to the job's delay factor as a second adjustment factor, and computes the current history weight from the initial history weight and the two adjustment factors. For example, current history weight = initial history weight − first adjustment factor + second adjustment factor.
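A sketch of one round of this update under the stated formulas; the job records and the delay-factor lookup are assumptions:
```python
def update_history(jobs, scheduled_task, max_delay):
    # jobs: [{"task": ..., "history": ..., "delay": ...}, ...]
    #   job of an unscheduled task: H := H - H/delay
    #   job of the scheduled task:  H := H - H/delay + max_delay/delay
    for job in jobs:
        first = job["history"] / job["delay"]   # first adjustment factor
        job["history"] -= first
        if job["task"] == scheduled_task:
            second = max_delay / job["delay"]   # second adjustment factor
            job["history"] += second
```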
Furthermore, the updating subunit is further configured to update the expected weight and the initial history weight of each job in the job set to be scheduled of a job category after a plurality of jobs of the same task are newly added to that set, or after the plurality of jobs of the same task have all been launched. For a newly added job, the initial history weight is its configuration weight.
Optionally, the third operation unit is configured to calculate the weight difference between the expected weight and the current history weight of each job in the job set to be scheduled of a job category, obtain the target weight of each job from this weight difference, and take the job with the largest target weight in the set as the pre-launch job of that category. Specifically, the third operation unit quantizes the weight difference to obtain the target weight of each job. A larger difference between the expected weight and the current history weight indicates that the job has been scheduled less often historically, so the weight difference can be used to raise the job's scheduling priority. Optionally, the target weight of the job is directly proportional to the weight difference; that is, the larger the weight difference, the larger the target weight, which ensures that the job can be scheduled in time.
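For instance, the quantization could look like the following; the scaling constant is an assumption, since the patent fixes only the direct proportionality:
```python
def target_weight(desired, history, scale=256):
    # A larger gap between desired and current history weight means the
    # job has been scheduled less often, so its target weight (and thus
    # its scheduling priority) rises proportionally.
    return max(0, round((desired - history) * scale))
```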
Optionally, the selection circuit 130 is further configured to set the corresponding processor lock signal to a high level according to the scheduling information and then transmit the scheduling information to the corresponding processor; when the transmission of the scheduling information is completed, the corresponding processor lock signal is set to a low level. Specifically, the selector may transmit the scheduling information to the processor while the processor lock signal is high. Meanwhile, while the processor lock signal is high, the processor can acquire all task information and decomposition information of the task to which the target job belongs. Further, the first read/write circuit is also configured to, after the scheduling information is obtained, acquire all task information of the currently scheduled task to which the target job belongs; while the processor lock signal is high, it can transmit the scheduling information, together with all task information and decomposition information of that task, to the processor.
In the embodiment of the application, the task scheduling device can optimize the task scheduling process and improve the task scheduling efficiency by adopting the scheduling mechanism.
In one embodiment, when the processor identity information in the scheduling information obtained by the task scheduling device 100 designates at least one second processor body, the control device of the second processor body receives the scheduling information sent by the task scheduling device 100, correspondingly obtains the decomposition information and all task information of the task to which the target job belongs, and divides the task into a plurality of jobs; the second processor body executes the received target job according to the scheduling information.
In one embodiment, as shown in fig. 2, the task decomposition device 400 is connected to the task caching device 600 and the status monitoring device 500, and the task scheduling device 100 is connected to the task caching device 600, the task decomposition device 400, and the second processor 300. The task decomposition device pre-splits a task, decomposing it into a plurality of jobs and obtaining the decomposition information of the task. The task scheduling device schedules the jobs according to the decomposition information of the tasks and related information, determines the target job, and obtains the scheduling information.
Further, the task decomposition device is configured to obtain the basic task information of a task from the task cache device 600 and to generate a task registration request for the task from that information; the state monitoring device 500 is configured to allocate a task identifier to the task according to the received task registration request and to transmit the task identifier back to the task decomposition device, completing the task registration process. In the embodiment of the application, a registered task (i.e., a task with a task identifier) can be decomposed and scheduled by the task decomposition device and then sent to the second processor for processing. Specifically, the task decomposition device 400 obtains the basic task information of the task from the task cache device 600, generates the task registration request from it, and transmits the request to the state monitoring device 500, which allocates a task identifier to each task to complete its registration. When receiving the task identifier returned by the status monitoring device 500, the task decomposition device 400 can pre-split each successfully registered task (decomposing the task having the task identifier into a plurality of jobs) according to its basic task information, obtaining the decomposition information of the task. The pre-splitting of tasks can proceed in parallel: as soon as the task identifier is obtained, the task is successfully registered and can be split into a plurality of jobs by the task decomposition device to obtain its decomposition information. The processing efficiency of tasks is thereby improved.
Meanwhile, after receiving the task identifier returned by the status monitoring device 500, the task decomposition device 400 sends a scheduling request for the task with the task identifier to the task scheduling device, starting the scheduling process for the task. The task scheduling device is configured to acquire the processor state information of the second processor from the second processor according to the task scheduling request, acquire the decomposition information of the task from the task decomposition device, determine the target job of the current round of scheduling from the processor state information and the decomposition information, obtain the scheduling information, and transmit the scheduling information to the second processor. The scheduling information may include job identifiers of a plurality of jobs, processor identity information corresponding to each job, and the order in which the plurality of jobs are to be processed by the corresponding processor.
Further, the task scheduling device 100 can acquire all task information of the task to which the target job belongs from the task cache device 600 according to the scheduling information, acquire the decomposition information of that task from the task decomposition device 400, and package and transmit the decomposition information and all task information to the second processor 300. The second processor 300 may split the task to which the target job belongs into a plurality of jobs according to the received decomposition information and all task information (this splitting is the actual split), each split job carrying information such as weights and data. Further, the second processor 300 may process the target job among the split jobs according to the scheduling information.
In another embodiment, after obtaining the scheduling information, the task scheduling device may transmit it to the second processor; the second processor may then, according to the received scheduling information, obtain all task information of the task to which the target job belongs from the task cache device and the decomposition information of that task from the task decomposition device, and split the task into a plurality of jobs (the actual split), each job carrying information such as weights and data. Further, the second processor may process the target job among the split jobs according to the scheduling information.
Alternatively, as shown in fig. 3, the task decomposition device 400 may include a second read/write control circuit 410, a registration request circuit 420, and a data splitter 430, which are electrically connected in sequence; the second read/write control circuit 410 is connected to the task caching device 600, the registration request circuit 420 is connected to the status monitoring device 500, and the data splitter 430 is connected to the task scheduling device 100.
The task cache device 600 is further configured to generate a task enable signal when a task waiting to be launched exists in the task cache device; the second read/write control circuit 410 is configured to obtain the basic task information of the task from the task cache device 600 when it receives the task enable signal. Alternatively, the second read/write control circuit 410 may be an I/O circuit. Specifically, the task cache device 600 stores a plurality of task queues, and when one or more task queues contain a task waiting to be launched, the task cache device 600 generates the task enable signal. For example, the task enable signal may be indicated by a flag bit (task enable): when the flag bit is 1, a task waiting to be launched exists in the task cache device 600; when it is 0, no such task exists. The task waiting to be launched may be the task pointed to by the queue-head pointer of a task queue, that is, the first task in the queue. When receiving the task enable signal, the second read/write control circuit 410 may obtain the basic task information of that task from the task cache device 600 in order to register it.
The registration request circuit 420 is configured to generate a task registration request from the basic task information of the task and to transmit it to the state monitoring apparatus 500 to register the task. A successfully registered task (i.e., one that has obtained a task identifier) can enter the scheduling process. Further, the registration request circuit 420 is also configured to receive the information, such as the task identifier, returned by the status monitoring apparatus 500 and to pass the task identifier on to the data splitter 430. The data splitter 430 is configured to, upon receiving the task identifier returned by the status monitoring apparatus 500, pre-split the successfully registered task according to its basic task information, splitting it into multiple jobs and obtaining the decomposition information of the task.
Optionally, the basic task information of the task includes the total number of jobs of the task and the size of each job; the total number of jobs is the number of jobs into which the task is divided, and the job size is the data capacity of each job. The data splitter can thus obtain the basic task information of each task and divide the task into a plurality of jobs according to the total number of jobs and the size of each job, obtaining the decomposition information of the task. Alternatively, the total number of jobs of a task may be 2^n, where n is a positive integer. Further, since each job is assigned to a corresponding processor for processing, the size of each job may be an integer multiple of the word size of that processor, where the processor word size reflects the amount of data the processor can process in a single pass.
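A sketch of the pre-split arithmetic under these two constraints (2^n jobs, job size a multiple of the processor word size); the rounding policy is an assumption:
```python
def decompose(total_size, n, word_size):
    num_jobs = 2 ** n                               # total number of jobs
    per_job = -(-total_size // num_jobs)            # ceil(total / num_jobs)
    per_job = -(-per_job // word_size) * word_size  # round up to word size
    return {"num_jobs": num_jobs, "job_size": per_job}

print(decompose(total_size=10000, n=3, word_size=64))
# {'num_jobs': 8, 'job_size': 1280}
```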
In the embodiment of the application, the task with the task identifier is pre-split by the data splitter to obtain its decomposition information. When the second processor processes the task, it can directly split the task into a plurality of jobs according to that decomposition information, without pre-splitting, registration, or other pre-processing; this simplifies the task processing flow of the second processor, which then processes the jobs of the same task in parallel, improving processing efficiency.
Optionally, the task decomposition device further includes a state control circuit 440, the state control circuit 440 is connected to the registration request circuit 420 and the task scheduling device 100, and the state control circuit 440 is configured to record and update the task state of the task, so that the processing progress of the task can be known according to the task state of the task. In the embodiment of the present application, the task state in the task scheduling and processing process can be tracked and monitored by the state control circuit 440, so that the reliability of task scheduling can be ensured.
Specifically, the registration request circuit 420 may transmit the task identifier of the task to the state control circuit 440 after receiving it from the state monitoring apparatus 500, and the state control circuit 440 may be configured to update the task state of the task from the waiting-to-launch state to the to-be-scheduled state upon receiving the task identifier. Further, the data splitter 430 may pre-split the task in the to-be-scheduled state to obtain its decomposition information. The registration request circuit 420 is also configured to, when receiving the task identifier returned by the state monitoring apparatus, generate the corresponding task scheduling request and transmit it to the task scheduling apparatus; that is, the registration request circuit 420 may transmit the task scheduling request of each task in the to-be-scheduled state to the task scheduling apparatus 100. The task scheduling apparatus 100 may then acquire information such as the processor state information of the second processor 300 and the task decomposition information in response to the task scheduling request, and start scheduling the task.
Further, the task scheduling device 100 may generate a scheduling success signal when the task in the to-be-scheduled state is successfully scheduled, and transmit the scheduling success signal to the state control circuit. The state control circuit is further configured to update the task state of the task from the state to be scheduled to the scheduling state when receiving the scheduling success signal transmitted by the task scheduling device, and then the task scheduling device may sequentially send the plurality of jobs of the task in the scheduling state to the second processor. Furthermore, the state control circuit 440 may be further configured to update the task state of the task from the scheduling state to the scheduling end state after the task is scheduled, obtain the scheduling end information of the task, and destroy the task according to the scheduling end information of the task.
Still further, the task scheduling device 100 may generate a scheduling failure signal when the task in the to-be-scheduled state fails to be scheduled, and transmit the scheduling failure signal to the state control circuit 440. The state control circuit 440 is further configured to, upon receiving the scheduling failure signal transmitted by the task scheduling device 100, set the task state of the task from the scheduling state to the state to be scheduled, so as to reschedule the task at the next scheduling. In the embodiment of the application, the deadlock phenomenon can be avoided through the scheduling failure mechanism.
Alternatively, the task scheduling device 100 may match the jobs of each task in the to-be-scheduled state with the second processor according to the task's decomposition information and basic task information and the processor state information of the second processor. If one or more jobs of the task fail to match the second processor within a preset time (e.g., 128 or 256 beats), task scheduling is deemed abnormal, and the task scheduling device 100 may generate a scheduling failure signal and transmit it to the state control circuit 440. The state control circuit 440 may update the task state of the task from the scheduling state to the to-be-scheduled state according to the scheduling failure signal, so that the task is rescheduled in the next round. This scheduling failure mechanism avoids deadlock.
If the jobs of the task are all successfully matched with the second processor 300 within the preset time, the task can be scheduled normally; the task scheduling device 100 then generates a scheduling success signal and transmits it to the state control circuit 440. The state control circuit can update the task state of the task from the to-be-scheduled state to the scheduling state according to the scheduling success signal. Once the task is in the scheduling state, the task scheduling device may execute the scheduling process, sending the task's jobs to the second processor in sequence. The state control circuit 440 may further be configured to update the task state from the scheduling state to the scheduling-end state after the task has been scheduled, obtain the scheduling-end information of the task, and destroy the task according to that information.
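The task states described above form a small state machine; a sketch with hypothetical state and event names:
```python
from enum import Enum, auto

class TaskState(Enum):
    WAITING_TO_LAUNCH = auto()  # in a task queue, not yet registered
    TO_BE_SCHEDULED = auto()    # registered, holds a task identifier
    SCHEDULING = auto()         # jobs being sent to the second processor
    SCHEDULING_END = auto()     # scheduling finished; task can be destroyed

TRANSITIONS = {
    (TaskState.WAITING_TO_LAUNCH, "registered"): TaskState.TO_BE_SCHEDULED,
    (TaskState.TO_BE_SCHEDULED, "schedule_success"): TaskState.SCHEDULING,
    (TaskState.SCHEDULING, "schedule_failure"): TaskState.TO_BE_SCHEDULED,
    (TaskState.SCHEDULING, "all_jobs_sent"): TaskState.SCHEDULING_END,
}

def on_event(state, event):
    # Unknown events leave the state unchanged.
    return TRANSITIONS.get((state, event), state)
```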
Optionally, the task decomposition device 400 further includes a dependency processing circuit, and the dependency processing circuit is connected to the second read/write control circuit and the status monitoring device 500; the dependency processing circuit is configured to send a pre-task query request to the state monitoring device 500 when determining that the task has the pre-task according to the basic task information of the task; the state monitoring device 500 is further configured to determine whether the pre-task of the task is executed completely according to the pre-task query request, and feed back a query result to the dependency processing circuit; the dependency processing circuit is further configured to send a task registration request to the state monitoring apparatus 500 after determining that the pre-task of the task is completely executed according to the query result.
Specifically, the basic task information further includes the dependency information of the task, and the dependency processing circuit may determine from it whether the current task has a pre-task; when it does, the circuit sends a pre-task query request to the state monitoring apparatus 500 to determine whether the pre-task has finished executing. The state monitoring apparatus 500 determines, according to the query request, whether the pre-task of the task has finished, and feeds the query result back to the dependency processing circuit. After determining from the result that the pre-task has finished, the dependency processing circuit sends a task registration request to the status monitoring apparatus through the registration request circuit 420. If the dependency processing circuit determines from the result that the pre-task has not finished, the registration of the task may be suspended; that is, no task registration request is sent to the state monitoring apparatus 500. In this way the current task is registered only after its pre-task has finished executing, which guarantees the correctness of the execution order of the tasks and thus the accuracy and reliability of the computation results.
When the dependency processing circuit determines, according to the dependency relationship information in the basic task information of the task, that the current task has no pre-task, the registration request circuit 420 may be invoked; the registration request circuit 420 may obtain a task registration request according to the basic task information of the current task, and transmit the task registration request of the task to the state monitoring apparatus 500 for registration.
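The gating logic just described can be summarized in a short sketch. The following is a minimal Python illustration rather than the patent's circuitry; the names (basic_info, pre_tasks, is_finished, register) are assumptions made for this example:

```python
# Hypothetical sketch of the pre-task gating above; state_monitor stands in
# for the state monitoring apparatus 500 and its methods are assumed names.
def try_register(task, state_monitor):
    info = task["basic_info"]
    # If dependency information names pre-tasks, query each of them first.
    for pre in info.get("pre_tasks", []):
        if not state_monitor.is_finished(pre):
            return None  # suspend registration; no request is sent yet
    # No pre-task, or all pre-tasks finished: send the registration request.
    return state_monitor.register(info)
```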
In one embodiment, the task cache device 600 is configured to store basic task information and all task information corresponding to a plurality of tasks; the basic task information of the task may include configuration information of the task, such as the configuration information includes a task category, a weight of the task, data of the task, and the like. The tasks can be divided into event tasks, communication tasks, data processing tasks and the like according to different functions, that is, the task types can include event tasks, communication tasks and data processing tasks, and further, the task types of the data processing tasks can also include block (blocking task), cluster (clustering task) and union (joint task). All task information of the task may include basic task information such as configuration information of the task, and information such as instructions and data corresponding to the task. Optionally, the plurality of tasks are stored in the task buffer device 600 in the form of task queues, and the plurality of tasks may form a plurality of task queues, for example, the plurality of tasks form a plurality of task queues according to their task categories. The basic task information of the task pointed to by the queue head pointer in each task queue can be transmitted to the task decomposition device 400.
In one embodiment, as shown in FIG. 4, the status monitor apparatus 500 may include a task registration circuit 510, the task registration circuit 510 being connected to the task decomposition apparatus 400, and in particular, the task registration circuit 510 being connected to the registration request circuit 420 of the task decomposition apparatus 400. The registration request circuit 420 may send a task registration request of a task to the task registration circuit 510, where the task registration circuit 510 is configured to receive the task registration request of the task, allocate a task identifier to the task according to the task registration request of the task, and transmit the task identifier of the task back to the task decomposition device 400. Wherein, the current task having obtained the task identifier can be scheduled by the task scheduling device 100 and sent to the second processor for processing after scheduling.
Alternatively, the task registration circuit 510 may be a storage device having a state table stored therein, where the state table includes a plurality of state table entries and each state table entry corresponds to a task identifier. Specifically, each task registration request may occupy one state table entry, and the storage address or label of that state table entry may be used as the task identifier of the task. In other embodiments, the task registration circuit may employ other storage structures such as a stack. Further, each state table entry may include a plurality of sub-state table entries, and the task registration circuit 510 may assign one sub-state table entry to each job based on the total number of jobs of the current task. For example, the task identifier corresponding to the current task may be a Table ID, and the Table ID is used to represent the current task. When the current task is registered, a sub-state table entry corresponding to the Table ID may be allocated to each job according to the arrangement of the jobs in the queue.
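A rough software analogue of this bookkeeping, under the assumption that a free entry's index serves as the Table ID and that one sub-state table entry is allocated per job in queue order:

```python
# Illustrative model only; the real circuit is a storage device, not a class.
class StateTable:
    def __init__(self, num_entries):
        self.entries = [None] * num_entries  # None marks a free state table entry

    def register(self, total_jobs):
        """Occupy a free entry; its index is returned as the Table ID."""
        for table_id, entry in enumerate(self.entries):
            if entry is None:
                # One sub-state table entry per job, in queue order.
                self.entries[table_id] = [{"job": j, "done": False}
                                          for j in range(total_jobs)]
                return table_id
        raise RuntimeError("state table full; registration must wait")
```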
Further, the status monitoring apparatus 500 further comprises a checking circuit 520 connected to the task registration circuit 510 and a status processing circuit 530 connected to the checking circuit 520; in addition, the checking circuit 520 is connected to the task decomposition apparatus 400, and the status processing circuit 530 is connected to the first processor 200.
Specifically, the checking circuit 520 is connected to the second read/write control circuit 410 of the task decomposition device 400. The second read/write control circuit 410 may transmit the total number of jobs included in the task to the checking circuit 520. The checking circuit 520 is configured to obtain the total number of jobs of the task transmitted by the task decomposition device 400 and the job receiving number of the task transmitted by the second processor 300, and to obtain the dispatch completion instruction according to the total number of jobs and the job receiving number of the task. The dispatch completion instruction is used to indicate that the second processor has received all the jobs sent by the task scheduler. Further, the checking circuit 520 may transmit the dispatch completion instruction to the state processing circuit.
The status processing circuit 530 is configured to receive job end information of each job of the task according to the dispatch completion instruction, and to transmit the job end information of each job of the task to the first processor. In particular, when the state processing circuit 530 receives a dispatch completion instruction, which indicates that the second processor has received each job to be processed, the second processor may begin executing these jobs. The state monitoring apparatus 500 may then wait for the execution state information of each job fed back by the second processor 300; that is, the state processing circuit 530 may start to receive and buffer the job end information of each job of the task transmitted by the second processor. Alternatively, the state processing circuit 530 may be connected to the global memory through DMA, so that the state processing circuit 530 may write the obtained job end information of each job into the global memory, and thereby transfer the job end information of each job of the current task to the first processor 200 through the global memory.
Alternatively, the checking circuit 520 may include a comparator configured to obtain the job receiving number of the task and the preset job number, compare the two, and output a dispatch completion instruction when the job receiving number of the task is equal to the preset job number, sending the dispatch completion instruction to the state processing circuit.
Alternatively, the preset number of jobs in the embodiment of the present application may be the total number of jobs sent to the second processor 300. In this case, the comparator is configured to determine whether the job receiving number of the task is equal to the total job number of the current task; when they are equal, the comparator may obtain the dispatch completion instruction and transmit it to the state processing circuit. If the job receiving number of the task is less than the total job number of the task, the comparator continues to wait until the job receiving number of the task is equal to the total job number of the task.
Optionally, in other embodiments, each task includes a plurality of jobs, and in order to reduce the number of jobs verified in a single check and improve the execution efficiency of the jobs, the preset number of jobs may also be smaller than the total number of jobs of the task. Specifically, the preset number of jobs for the comparator may be 2^m, where m is a positive integer in the range of 5 to 10. For example, the preset number of jobs may be 32, 64, 128, 512, 1024, or the like, and is not limited herein.
For example, the preset number of jobs is 128, the comparator is configured to determine whether the number of job receptions for the task is equal to 128, and when the number of job receptions for the task is equal to 128, the comparator may obtain the dispatch completion instruction and transmit the dispatch completion instruction to the state processing circuit. Meanwhile, the job receiving number of the task can be cleared, and the next cycle of processing can be performed. If the job receipt number of the task is less than 128, the waiting is continued until the job receipt number of the task is equal to 128.
Further, the preset job number may be dynamically set according to the job number of the current task. In this case, when the comparator determines that the job receiving number of the task is equal to the preset job number, it may obtain the dispatch completion instruction and transmit it to the state processing circuit; if the job receiving number of the task is less than the preset job number, it continues to wait until the job receiving number of the task equals the preset job number of the comparator.
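The comparator's counting behavior can be modeled in a few lines. This is an illustrative Python model with a default preset of 128, as in the example above, not a description of the hardware:

```python
class CheckCircuit:
    """Counts job receptions against a preset number (fixed or dynamic)."""

    def __init__(self, preset_jobs=128):
        self.preset_jobs = preset_jobs
        self.received = 0

    def on_job_received(self):
        """Return True (dispatch completion) when the preset count is hit."""
        self.received += 1
        if self.received == self.preset_jobs:
            self.received = 0   # clear the count and begin the next cycle
            return True
        return False            # otherwise keep waiting
```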
Optionally, the state processing circuit 530 further comprises a reorder buffer, which may be coupled to the check circuit 520 and the first processor 200; in particular, the reorder buffer may be coupled to the first processor 200 via the global memory. The reorder buffer is configured to receive the dispatch completion instruction output by the comparator of the check circuit 520, receive the job end information of each job of the current task according to that instruction, reorder the received job end information according to a preset arrangement when the amount of received job end information reaches the preset amount of end information, and transmit the job end information to the first processor 200 in the reordered order. Alternatively, the preset arrangement may be the execution order of the jobs. In this way, by reordering the job end information of the jobs, it can be ensured that the one or more jobs before the current job have all finished executing, which in turn ensures the reliability of the execution result of the current task.
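The reorder buffer's flush condition reduces to the sketch below, assuming for illustration that each job-end record carries a seq field giving its execution order:

```python
def reorder_and_flush(buffered, preset_count):
    """Release job-end records in execution order once enough have arrived."""
    if len(buffered) < preset_count:
        return []                              # keep accumulating
    buffered.sort(key=lambda rec: rec["seq"])  # preset arrangement: execution order
    flushed = list(buffered)
    buffered.clear()
    return flushed                             # forwarded to the first processor
```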
In one embodiment, the state processing circuit 530 further comprises a third read/write control circuit coupled to the reorder buffer. If the task is a blocking task, the third read/write control circuit is configured to transmit the job end information of the plurality of jobs of the blocking task to the first processor only after the number of received job end information of the blocking task equals the total number of jobs of the blocking task, and the job end information of all tasks in the previous blocking interval (the blocking interval before the one to which the blocking task belongs) has been transmitted to the first processor.
Optionally, the state processing circuit 530 further includes an exception handling circuit connected to the second processor, and the task decomposition device further includes a task destruction circuit connected to the exception handling circuit. The second processor 300 is configured to transmit job end information of a job to the exception handling circuit; the exception handling circuit is used to judge, according to the job end information of the job, whether the job has an execution exception, to obtain a task destruction instruction when the job has an execution exception, and to transmit the task destruction instruction to the task destruction circuit. The task destruction circuit is used to execute a destruction operation according to the task destruction instruction, where the destruction operation includes destroying the task to which the job with the execution exception belongs and all tasks in the task queue corresponding to that task.
Specifically, the exception handling circuit may acquire the job end information of a job of the task, and determine whether the job has an execution exception based on that job end information; if the job has an execution exception, a task destruction instruction is acquired. Alternatively, the job end information of the job may include result flag data, and the exception handling circuit may determine whether the job has an execution exception based on the result flag data included in the job end information of the job.
For example, if the job has an execution exception, the second processor may set the result flag data in the job end information of the job to a non-zero value (e.g., 1), and the exception handling circuit may then determine from the result flag data that the job has an execution exception. If the job has no execution exception, the second processor may set the result flag data in the job end information of the job to 0, and the exception handling circuit may then determine from the result flag data that the job has no execution exception. Further, the exception handling circuit may obtain a task destruction instruction according to the job end information of the job, so as to notify the task destruction circuit of the task decomposition device to execute a destruction operation.
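In software terms the flag check is a comparison against zero; a hedged sketch (the returned destruction-instruction encoding is invented for illustration):

```python
def handle_job_end(job_end_info):
    """Non-zero result flag data marks an execution exception."""
    if job_end_info["result_flag"] != 0:   # e.g. 1 = execution exception
        # Notify the task destruction circuit (encoding is illustrative).
        return {"op": "destroy", "task": job_end_info["task_id"]}
    return None                            # normal completion, nothing to do
```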
Further, the execution exception of a job may include a first exception condition and a second exception condition, and the task destruction instruction may correspondingly include a first task destruction instruction for the first exception condition and a second task destruction instruction for the second exception condition. Alternatively, when it is determined that the job has an exception, the exception handling circuit may further determine, based on exception flag data included in the job end information of the job, whether the execution exception of the current task is the first exception condition or the second exception condition. The first and second exception conditions may each be a combination of one or more abnormalities such as a shortage of second processor resources or a second processor failure.
Optionally, the exception handling circuit is configured to, when it is determined that the job has the first exception condition according to the job end information of the job, obtain a first task destruction instruction, and transmit the first task destruction instruction to the task destruction circuit, where the task destruction circuit destroys the task to which the job belongs according to the first task destruction instruction. Specifically, when receiving the first task destruction instruction, the task destruction circuit may be configured to terminate scheduling of the job with the execution exception and all jobs after the job, and obtain scheduling end information of the task to which the job belongs. Further, after the task destroying circuit completes the destroying operation of the task to which the job belongs, the task scheduling end information of the task to which the job belongs may be transmitted to the state monitoring apparatus 500.
In one embodiment, a register file is coupled to task decomposition device 400. If the exception handling circuit determines that the job has a second exception condition according to the job end information of the job, a second task destruction instruction can be obtained to destroy the task to which the job belongs and all tasks after the task to which the job belongs. Specifically, if the exception handling circuit determines that the job has a second exception condition according to the job end information of the job, a second task destruction instruction may be obtained, and the second task destruction instruction is transmitted to the task destruction circuit, so as to notify the task destruction circuit to destroy the task to which the job belongs and all the tasks after the task. Optionally, after the task destroying circuit receives the second task destroying instruction transmitted by the exception handling circuit, the task destroying circuit may destroy all tasks in the task queue where the task to which the job belongs is located. Specifically, the task decomposition device terminates the scheduling of the task to which the job belongs and other tasks subsequent to the task to which the job belongs according to the second task destruction instruction, and notifies a register connected to the task decomposition device to clear the task to which the job belongs. After the task to which the job belongs is cleared from the register, scheduling end information of the task to which the job belongs may be obtained.
Meanwhile, after the task to which the job belongs is cleared from the register, the task decomposition device may send task registration requests corresponding to the other tasks after the task to which the job belongs to the state monitoring device 500, so as to obtain task identifiers corresponding to those tasks. The task registration circuit 510 of the state monitoring apparatus 500 may assign a task identifier to each task subsequent to the task to which the job belongs. When the task destruction circuit receives the task identifiers fed back by the task registration circuit 510 of the state monitoring device 500, it may obtain, according to the received task identifiers, scheduling end information corresponding to the other tasks after the task to which the job belongs, so as to destroy all tasks after the task to which the job belongs. Further, the task decomposition device 400 may also transmit the scheduling end information of each to-be-processed task to the state monitoring device 500.
By providing this exception handling mechanism, the accuracy of task execution results can be ensured. When an abnormal condition exists, the state monitoring device can notify the task destruction circuit to destroy the corresponding task and/or all tasks after it, which avoids the waste of resources that would result from the second processor continuing to execute other tasks under an abnormal condition.
Optionally, the state control circuit 440 of the task decomposition device 400 is further configured to obtain a first interrupt signal when receiving the task destruction instruction, transmit the first interrupt signal to the first processor 200, and then perform the destruction operation. Specifically, when the task destruction circuit receives the task destruction instruction, the scheduling of the task to which the job belongs is terminated first, so that scheduling under an abnormal condition does not consume unnecessary resources. Meanwhile, after the task destruction circuit receives the task destruction instruction, a first interrupt signal may be obtained and transmitted to the first processor 200. Further, after the first processor 200 receives the first interrupt signal, it may obtain the state information of each second processor body 310 and determine from this state information which second processor body has the exception.
The state control circuit 440 of the task decomposition device 400 is further configured to obtain a second interrupt signal after completing the destruction operation, and transmit the second interrupt signal to the first processor 200. Specifically, after receiving the scheduling end information of the current task, or after receiving the scheduling end information of the current task and of all tasks in the task queue to which the current task belongs, the state monitoring device obtains exception handling end information and transmits it to the task decomposition device 400; the task destruction circuit of the task decomposition device 400 is further configured to obtain a second interrupt signal according to the exception handling end information, and transmit the second interrupt signal to the first processor 200.
In one embodiment, a plurality of tasks are stored in the task buffer device 600 in the form of task queues, and the task buffer device is further configured to monitor the queue status of each task queue. When the queue head pointer and the queue tail pointer of the task queue are different and the remaining storage space of the task cache device is greater than zero, the task cache device can send a data reading request to the global memory so as to store a new task into the task cache device.
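The refill condition can be stated compactly. In this sketch, queue, cache_free_bytes, and global_memory.read_next_task are illustrative names, not interfaces defined by the patent:

```python
def maybe_fetch_task(queue, cache_free_bytes, global_memory):
    """Fetch a new task when head and tail pointers differ and space remains."""
    if queue["head"] != queue["tail"] and cache_free_bytes > 0:
        return global_memory.read_next_task()  # issue the data read request
    return None
```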
Further, the task decomposition device 400 is further configured to send a task release request to the task caching device 600 after the task is scheduled to be finished or destroyed; the task buffer device 600 is further configured to release the task according to the task release request, and accumulate the queue head pointer of the task queue where the task is located once, so as to update the queue head task of the task queue where the task is located. In the embodiment of the application, the scheduled tasks or the destroyed tasks are released, so that the scheduled tasks or the destroyed tasks can be prevented from occupying the storage space of the task cache device, and the space utilization rate of the task cache device is improved.
Meanwhile, the embodiment of the application also provides a task processing system, which comprises a first processor, a global memory, a task decomposition device, a task scheduling device, a state monitoring device and a second processor, wherein the task scheduling device is respectively connected with the first processor, the second processor and the task decomposition device, and the global memory is respectively connected with the task decomposition device, the task scheduling device, the state monitoring device, the first processor and the second processor. Alternatively, the global Memory may be a DRAM (Dynamic Random Access Memory) or an SRAM (Static Random-Access Memory), and the like. For the working principle of the task processing system in the embodiment of the present application, reference may be specifically made to the description above, and details are not described here again.
Further, the control device of the second processor body is also used to allocate a job identifier to the target job when receiving the scheduling information, and, when receiving the information fed back by all the second processor bodies corresponding to the target job, to acquire the job end information of the target job, transmit the job end information of the target job to the state monitoring device, and then destroy the job identifier corresponding to the target job.
Furthermore, the target job may correspond to more than one second processor body; the connection relationship and structure of each second processor body are described below. The control device of the second processor body is also used to mark the execution state of the target job as abnormal when any information fed back by the more than one second processor bodies corresponding to the target job indicates an abnormality, to add the execution state of the target job to the job end information of the target job, and to transmit the job end information of the target job to the state monitoring device. The state monitoring device can then obtain the task destruction instruction according to the execution state of the target job in the job end information of the target job.
Optionally, the plurality of second processor bodies form a plurality of processor clusters, and each processor cluster is correspondingly provided with a shared storage; each second processor body in a processor cluster is connected to the shared storage corresponding to that processor cluster. Furthermore, all the shared storages are connected with each other, and all the shared storages are connected with the task scheduler and/or the global memory. By providing shared storage in this way, the global memory does not need to be accessed on every data read and write, which saves the read/write bandwidth of the global memory.
Optionally, the task processing system further includes a plurality of DMA (Direct Memory Access) units connected to the shared storages; each second processor body in a processor cluster is connected to the shared storage corresponding to that processor cluster through DMA, each shared storage is connected with the task scheduler and/or the global memory through DMA, and the shared storages are connected with each other through DMA. By providing the plurality of DMA units, data access between different hardware devices can be realized without setting up interrupt routines, so the processing efficiency of the system can be improved.
As a further improvement, the task processing system further includes an interconnection module, such as a Network on Chip (NoC), and the first processor, the second processor, the global memory, and the task scheduler are all connected to the interconnection module. Alternatively, the interconnection module may be a binary tree interconnection module or a 2D-mesh interconnection module. Furthermore, there may be more than one second processor, all connected to the interconnection module, and more than one task scheduler, likewise all connected to the interconnection module. In this way, by interconnecting more than one task scheduler and second processor through the interconnection module, the scalability of the task processing system can be improved to meet different requirements.
In one embodiment, the second processor body 310 includes a computing device as shown in fig. 5, the computing device including: a controller unit 11 and an arithmetic unit 12, wherein the controller unit 11 is connected with the arithmetic unit 12, and the arithmetic unit 12 comprises: a master processing circuit and a plurality of slave processing circuits.
Specifically, the controller unit 11 may be configured to obtain a job, which may include data, a machine learning model, and computational instructions. In an alternative, the input data and the calculation instruction may be obtained through a data input/output unit, and the data input/output unit may be one or more data I/O interfaces or I/O pins.
The above calculation instruction includes, but is not limited to, a convolution operation instruction, a forward training instruction, or another neural network operation instruction; the specific expression of the above calculation instruction is not limited in the present invention.
The controller unit 11 is further configured to analyze the calculation instruction to obtain a plurality of operation instructions, and send the plurality of operation instructions and the input data to the main processing circuit;
a master processing circuit 101 configured to perform a preamble process on the input data and transmit data and an operation instruction with the plurality of slave processing circuits;
a plurality of slave processing circuits 102 configured to perform an intermediate operation in parallel according to the data and the operation instruction transmitted from the master processing circuit to obtain a plurality of intermediate results, and transmit the plurality of intermediate results to the master processing circuit;
and the main processing circuit 101 is configured to perform subsequent processing on the plurality of intermediate results to obtain a calculation result of the calculation instruction.
In the technical solution provided by this application, the arithmetic unit is configured as a one-master multi-slave structure. For the calculation instruction of a forward operation, the data can be split according to that calculation instruction, so that the part with the larger amount of computation can be operated on in parallel by the plurality of slave processing circuits, thereby increasing the operation speed, saving operation time, and in turn reducing power consumption.
Optionally, the machine learning calculation specifically includes: the artificial neural network operation, where the input data specifically includes: neuron data and weight data are input. The calculation result may specifically be: the result of the artificial neural network operation outputs neuron data.
In the forward operation, after the execution of the artificial neural network of the previous layer is completed, the operation instruction of the next layer takes the output neuron calculated in the operation unit as the input neuron of the next layer for operation (or performs some operation on the output neuron and then takes it as the input neuron of the next layer), and at the same time the weights are replaced by the weights of the next layer. In the reverse operation, after the reverse operation of the artificial neural network of the previous layer is completed, the operation instruction of the next layer takes the input neuron gradient calculated in the operation unit as the output neuron gradient of the next layer for operation (or performs some operation on the input neuron gradient and then takes it as the output neuron gradient of the next layer), and at the same time the weights are replaced by the weights of the next layer.
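For the forward direction, this chaining amounts to a simple loop. A minimal numpy sketch, assuming for illustration that the optional inter-layer operation is an activation function applied at every layer:

```python
import numpy as np

def forward(x, layer_weights, layer_biases, act=np.tanh):
    # Each layer's output neurons become the next layer's input neurons,
    # and the weights are replaced by the next layer's weights.
    for w, b in zip(layer_weights, layer_biases):
        x = act(w @ x + b)
    return x

# Example: a 3-layer chain with sizes 8 -> 6 -> 4 -> 2.
ws = [np.random.rand(6, 8), np.random.rand(4, 6), np.random.rand(2, 4)]
bs = [np.zeros(6), np.zeros(4), np.zeros(2)]
out = forward(np.random.rand(8), ws, bs)
```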
The above-described machine learning calculations may also include support vector machine operations, k-nearest neighbor (k-nn) operations, k-means (k-means) operations, principal component analysis operations, and the like. For convenience of description, the following takes artificial neural network operation as an example to illustrate a specific scheme of machine learning calculation.
For artificial neural network operation, if the artificial neural network operation has multiple layers of operations, the input neurons and output neurons of the multilayer operations do not refer to the neurons in the input layer and the output layer of the whole neural network. Rather, for any two adjacent layers in the network, the neurons in the lower layer of the network forward operation are the input neurons, and the neurons in the upper layer of the network forward operation are the output neurons. Taking a convolutional neural network as an example, suppose a convolutional neural network has L layers, with K = 1, 2, ..., L-1. For the K-th layer and the (K+1)-th layer, the K-th layer is called the input layer, in which the neurons are the input neurons, and the (K+1)-th layer is called the output layer, in which the neurons are the output neurons. That is, each layer except the topmost layer can be used as an input layer, and the next layer is the corresponding output layer.
Optionally, the computing apparatus may further include: the storage unit 10 and the direct memory access unit 50, the storage unit 10 may include: one or any combination of a register and a cache, specifically, the cache is used for storing the calculation instruction; the register is used for storing the input data and a scalar; the cache is a scratch pad cache. The direct memory access unit 50 is used to read or store data from the storage unit 10.
Optionally, the controller unit includes: an instruction cache unit 110', an instruction processing unit 111, and a store queue unit 113;
the instruction cache unit 110' is configured to store the calculation instruction associated with the artificial neural network operation.
The instruction processing unit 111 is configured to analyze the calculation instruction to obtain a plurality of operation instructions.
A store queue unit 113 for storing an instruction queue, the instruction queue comprising: and a plurality of operation instructions or calculation instructions to be executed according to the front and back sequence of the queue.
For example, in an alternative embodiment, the main processing circuit may also include a controller unit, and the controller unit may include a main instruction processing unit, specifically configured to decode an instruction into a microinstruction. Of course in another alternative the slave processing circuit may also comprise a further controller unit comprising a slave instruction processing unit, in particular for receiving and processing microinstructions. The micro instruction may be a next-stage instruction of the instruction, and the micro instruction may be obtained by splitting or decoding the instruction, and may be further decoded into control signals of each component, each unit, or each processing circuit.
In one alternative, the structure of the calculation instruction may be as shown in the following table.
opcode | register or immediate | register or immediate | …
The ellipses in the above table indicate that multiple registers or immediate numbers may be included.
In another alternative, the computing instructions may include: one or more operation domains and an opcode. The computation instructions may include neural network operation instructions. Taking the neural network operation instruction as an example, as shown in table 1, register number 0, register number 1, register number 2, register number 3, and register number 4 may be operation domains. Each of register number 0, register number 1, register number 2, register number 3, and register number 4 may be a number of one or more registers.
[Table 1: a neural network operation instruction, consisting of an opcode followed by the operation domains register number 0 through register number 4]
The above memory may be an off-chip memory; in practical applications, it may also be an on-chip memory for storing data. The data may specifically be n-dimensional data, where n is an integer greater than or equal to 1: when n = 1, the data is 1-dimensional data, that is, a vector; when n = 2, the data is 2-dimensional data, that is, a matrix; and when n is 3 or greater, the data is a multidimensional tensor.
Optionally, the controller unit may further include:
the dependency relationship processing unit 108 is configured to, when there are multiple operation instructions, determine whether a first operation instruction has an association relationship with a zeroth operation instruction before the first operation instruction, if the first operation instruction has an association relationship with the zeroth operation instruction, cache the first operation instruction in the instruction storage unit, and after the zeroth operation instruction is executed, extract the first operation instruction from the instruction storage unit and transmit the first operation instruction to the operation unit;
the determining whether the first operation instruction has an association relationship with a zeroth operation instruction before the first operation instruction comprises:
extracting a first storage address interval of required data (such as a matrix) in the first operation instruction according to the first operation instruction, extracting a zeroth storage address interval of the required matrix in the zeroth operation instruction according to the zeroth operation instruction, if the first storage address interval and the zeroth storage address interval have an overlapped area, determining that the first operation instruction and the zeroth operation instruction have an association relation, and if the first storage address interval and the zeroth storage address interval do not have an overlapped area, determining that the first operation instruction and the zeroth operation instruction do not have an association relation.
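The association check is simply an interval-overlap test on the operands' storage address ranges; a direct rendering, treating intervals as half-open [start, end) pairs for illustration:

```python
def has_dependency(first_interval, zeroth_interval):
    """True if the two instructions' storage address intervals overlap."""
    f_start, f_end = first_interval
    z_start, z_end = zeroth_interval
    # An overlap means the first instruction must be cached until the
    # zeroth instruction has finished executing.
    return f_start < z_end and z_start < f_end
```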
In another alternative embodiment, the arithmetic unit 12 may include a master processing circuit 101 and a plurality of slave processing circuits 102, as shown in fig. 6. In one embodiment, as shown in FIG. 6, the plurality of slave processing circuits are distributed in an array; each slave processing circuit is connected with the other adjacent slave processing circuits, and the master processing circuit is connected with k slave processing circuits of the plurality of slave processing circuits, where the k slave processing circuits are: the n slave processing circuits in the 1st row, the n slave processing circuits in the m-th row, and the m slave processing circuits in the 1st column. That is, as shown in fig. 6, the k slave processing circuits are the slave processing circuits directly connected to the master processing circuit among the plurality of slave processing circuits.
And the K slave processing circuits are used for forwarding data and instructions between the main processing circuit and the plurality of slave processing circuits.
Optionally, as shown in fig. 7, the main processing circuit may further include: one or any combination of the conversion processing circuit 110 ″, the activation processing circuit 111, and the addition processing circuit 112;
conversion processing circuitry 110 "for performing an interchange between the first data structure and the second data structure (e.g., conversion of continuous data to discrete data) on the data blocks or intermediate results received by the main processing circuitry; or to perform an interchange between the first data type and the second data type (e.g. a conversion of a fixed point type to a floating point type) on a data block or an intermediate result received by the main processing circuitry.
And an activation processing circuit 111 for executing an activation operation of data in the main processing circuit.
And an addition processing circuit 112 for performing addition operation or accumulation operation.
The master processing circuit is configured to determine that the input neuron is broadcast data, determine that a weight is distribution data, distribute the distribution data into a plurality of data blocks, and send at least one data block of the plurality of data blocks and at least one operation instruction of the plurality of operation instructions to the slave processing circuit;
the plurality of slave processing circuits are used for executing operation on the received data blocks according to the operation instruction to obtain an intermediate result and transmitting the operation result to the main processing circuit;
and the main processing circuit is used for processing the intermediate results sent by the plurality of slave processing circuits to obtain the result of the calculation instruction and sending the result of the calculation instruction to the controller unit.
The slave processing circuit includes: a multiplication processing circuit, configured to perform a multiplication operation on the received data block to obtain a product result; an optional forwarding processing circuit, configured to forward the received data block or the product result; and an accumulation processing circuit, configured to perform an accumulation operation on the product result to obtain the intermediate result.
In another embodiment, the operation instruction is a matrix by matrix instruction, an accumulation instruction, an activation instruction, or the like.
The following describes a specific calculation method of the computing apparatus shown in fig. 5 through a neural network operation instruction. For a neural network operation instruction, the formula that actually needs to be executed may be:

y = f( Σ (w × x) + b )

where the weights w are multiplied by the input data x, the products are summed, the offset b is then added, and the activation operation f is performed to obtain the final output result y.
In an alternative embodiment, as shown in fig. 8, the arithmetic unit comprises: a tree module 40, said tree module comprising: a root port 401 and a plurality of branch ports 404, wherein the root port of the tree module is connected with the main processing circuit, and the branch ports of the tree module are respectively connected with one of the plurality of slave processing circuits;
The tree module has a transceiving function; for example, fig. 8 shows the tree module performing the transmitting function, and fig. 9 shows it performing the receiving function.
And the tree module is used for forwarding data blocks, weights and operation instructions between the main processing circuit and the plurality of slave processing circuits.
Optionally, the tree module is an optional component of the computing device, and may include at least one layer of nodes. The nodes are line structures with a forwarding function, and the nodes themselves may have no computing function. If the tree module has zero layers of nodes, the tree module is not needed.
Optionally, the tree module may have an n-ary tree structure, for example the binary tree structure shown in fig. 10, or a ternary tree structure, where n may be an integer greater than or equal to 2. This embodiment does not limit the specific value of n; the number of layers may also be 2, and the slave processing circuits may be connected to nodes of layers other than the penultimate layer, for example to the nodes of the last layer shown in fig. 10.
Optionally, the arithmetic unit may carry a separate cache. As shown in fig. 11, it may include a neuron caching unit 63, which buffers the input neuron vector data and output neuron value data of the slave processing circuits.
As shown in fig. 12, the arithmetic unit may further include: and a weight buffer unit 64, configured to buffer weight data required by the slave processing circuit in the calculation process.
In an alternative embodiment, the arithmetic unit 12, as shown in fig. 13, may include a branch processing circuit 103; the specific connection structure is shown in fig. 13, wherein,
the main processing circuit 101 is connected to branch processing circuit(s) 103, the branch processing circuit 103 being connected to one or more slave processing circuits 102;
a branch processing circuit 103, configured to forward data or instructions between the main processing circuit 101 and the slave processing circuits 102.
In an alternative embodiment, taking the fully-connected operation in the neural network operation as an example, the process may be: y = f (wx + b), where x is an input neuron matrix, w is a weight matrix, b is a bias scalar, and f is an activation function, and may specifically be: sigmoid function, tanh, relu, softmax function. Here, a binary tree structure is assumed, and there are 8 slave processing circuits, and the implementation method may be:
the controller unit acquires an input neuron matrix x, a weight matrix w and a full-connection operation instruction from the storage unit, and transmits the input neuron matrix x, the weight matrix w and the full-connection operation instruction to the main processing circuit;
the main processing circuit determines the input neuron matrix x as broadcast data, determines the weight matrix w as distribution data, divides the weight matrix w into 8 sub-matrixes, then distributes the 8 sub-matrixes to 8 slave processing circuits through a tree module, broadcasts the input neuron matrix x to the 8 slave processing circuits,
the slave processing circuit executes multiplication and accumulation operation of the 8 sub-matrixes and the input neuron matrix x in parallel to obtain 8 intermediate results, and the 8 intermediate results are sent to the master processing circuit;
and the main processing circuit is used for sequencing the 8 intermediate results to obtain a wx operation result, executing offset b operation on the operation result, executing activation operation to obtain a final result y, sending the final result y to the controller unit, and outputting or storing the final result y into the storage unit by the controller unit.
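The fully-connected flow can be mirrored in numpy. The following is a sketch under the stated assumptions (8 slave processing circuits, w split by rows into 8 sub-matrices, x broadcast to every slave, b a bias scalar), not the device's actual dataflow:

```python
import numpy as np

def fully_connected(x, w, b, f=np.tanh, n_slaves=8):
    sub_ws = np.array_split(w, n_slaves, axis=0)  # master splits w into 8 sub-matrices
    partials = [sub_w @ x for sub_w in sub_ws]    # slaves multiply-accumulate in parallel
    wx = np.concatenate(partials)                 # master orders and joins the 8 results
    return f(wx + b)                              # offset b, then activation f

x = np.random.rand(16)       # input neuron matrix x (a vector here)
w = np.random.rand(32, 16)   # weight matrix w
y = fully_connected(x, w, b=0.5)
```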
The method for executing the neural network forward operation instruction by the computing apparatus shown in fig. 5 may specifically be:
the controller unit extracts the neural network forward operation instruction, the operation domain corresponding to the neural network operation instruction and at least one operation code from the instruction storage unit, transmits the operation domain to the data access unit, and sends the at least one operation code to the operation unit.
The controller unit extracts the weight w and the offset b corresponding to the operation domain from the storage unit (when b is 0, the offset b does not need to be extracted), transmits the weight w and the offset b to the main processing circuit of the arithmetic unit, extracts the input data Xi from the storage unit, and transmits the input data Xi to the main processing circuit.
The main processing circuit determines multiplication operation according to the at least one operation code, determines input data Xi as broadcast data, determines weight data as distribution data, and splits the weight w into n data blocks;
The instruction processing unit of the controller unit determines a multiplication instruction, an offset instruction, and an accumulation instruction according to the at least one operation code, and sends them to the master processing circuit. The master processing circuit sends the multiplication instruction and the input data Xi to the plurality of slave processing circuits in a broadcast mode, and distributes the n data blocks to the plurality of slave processing circuits (for example, if there are n slave processing circuits, each slave processing circuit receives one data block). The plurality of slave processing circuits perform multiplication on the input data Xi and the received data block according to the multiplication instruction to obtain intermediate results and send the intermediate results to the master processing circuit. The master processing circuit performs an accumulation operation on the intermediate results sent by the plurality of slave processing circuits according to the accumulation instruction to obtain an accumulation result, applies the bias b to the accumulation result according to the offset instruction to obtain the final result, and sends the final result to the controller unit.
In addition, the order of addition and multiplication may be reversed.
According to the technical scheme, multiplication and offset operation of the neural network are achieved through one instruction, namely the neural network operation instruction, storage or extraction is not needed in the intermediate result of the neural network calculation, and storage and extraction operations of intermediate data are reduced, so that the method has the advantages of reducing corresponding operation steps and improving the calculation effect of the neural network.
Fig. 14 is a flowchart illustrating steps of a task scheduling method according to an embodiment, where the task scheduling method includes:
step S1000: and acquiring decomposition information and all task information of the tasks and state information of the processor.
Specifically, the task scheduling device 100 obtains the decomposition information of the task from the task decomposition device 400, correspondingly obtains all the task information from the task cache device 600, and obtains the processor state information from the control device 320 of the second processor body. The decomposition information of the task may include the number of jobs into which the task is decomposed, size information of each job, and the like. The processor state information of the processor may include the type of the processor, the operation state information of the processor (whether the processor is idle), and the processing capability of the processor.

Step S2000: And respectively matching each job of the task with a processor according to the decomposition information and all task information of each task and the state information of the processor, and adding the job successfully matched with the processor to a job set to be scheduled.
Specifically, the task scheduling device 100 matches each job of the task with a processor according to the decomposition information and all task information of each task and the state information of the processor, and adds the jobs successfully matched with a processor to the job set to be scheduled. More specifically, the task scheduling device 100 first obtains the processor information (such as the processor type) required by each job of the task and the size information of each job based on all the task information and the decomposition information of the task, and then obtains information such as the processing capability of the processor required by each job based on the size of each job. In this way, the task scheduling device 100 can match each job of the task with a processor based on all the task information and the decomposition information of the task, and the processor state information. Further, if a job is successfully matched with a processor, the matching circuit can further obtain information such as the processor identifier of the processor matched with the job.
Optionally, if the task scheduling device determines that all jobs of the same task are successfully matched with the processor, each job of the task is added to the job set to be scheduled. Of course, in other embodiments, the task scheduling device may determine that a job is successfully matched with a processor, that is, the job that is successfully matched with the processor is added to the job set to be scheduled; when the scheduling failure signal of the task to which the job belongs is obtained, all jobs of the task which fails to be scheduled can be deleted from the job set to be scheduled.
Step S3000: and selecting target jobs from the job set to be scheduled according to the target weight of each job in the job set to be scheduled to obtain scheduling information, wherein the scheduling information is used for determining the execution sequence of the jobs on a processor.
Wherein the scheduling information is used to determine an execution order of the jobs on the processors. Specifically, the task scheduling device 100 selects a target job from the job set to be scheduled according to the target weight of each job in the job set to be scheduled, and obtains scheduling information. The target weight of each job in the job set to be scheduled may be obtained by calculation, and of course, the target weight of each job in the job set to be scheduled may also be preset.
Optionally, the task scheduling apparatus 100 determines the scheduling priority of each job according to the target weight of each job in the job set to be scheduled. That is, the task scheduling apparatus 100 may sort the jobs by their target weights to obtain the scheduling priority of each job, and then, according to the scheduling priority of each job, take the job with the highest scheduling priority in the job set to be scheduled as the target job to obtain the scheduling information. The job with the highest scheduling priority may be the job with the largest target weight, that is, the target job is the job with the largest target weight in the job set to be scheduled. In this way, the job with the maximum target weight is scheduled preferentially, so that the target job can preempt processor resources first, and the task scheduling process can be optimized.

According to the task scheduling method provided by this embodiment, each job of the task is matched with a processor according to the decomposition information and all task information of the task and the state information of the processor to obtain a job set to be scheduled, so that the jobs distributed to the processors can be processed in time after scheduling is finished; then the target job is selected from the job set to be scheduled according to the target weight of each job to obtain the scheduling information, which ensures that jobs with high weights can occupy processor resources. Therefore, the task scheduling method can improve the processing efficiency of the processor.
In one embodiment, the task scheduling method further includes:
and if more than one job in the task is not successfully matched with the processor within the preset time, acquiring a scheduling failure signal of the task.
Specifically, when matching the processor for each job of the task, the task scheduling device 100 acquires a scheduling failure signal of the task if one or more jobs of the task are not successfully matched with the processor within a preset time (e.g., 128 beats or 256 beats). The scheduling failure signal may be transmitted to the task decomposition device 400, which receives the scheduling failure signal and re-initiates the scheduling of the task.
In the above scheduling exception handling method, when one or more jobs of a task are not successfully matched with a processor, scheduling is re-initiated by generating a scheduling failure signal, thereby avoiding task deadlock.
In one embodiment, as shown in fig. 15, step S3000 includes:
step S3100a: and determining the scheduling priority of each job according to the target weight of each job in the job set to be scheduled.
Specifically, the task scheduling device 100 determines the scheduling priority of each job according to the target weight of each job in the job set to be scheduled. The target weight of each job in the job set to be scheduled may be obtained by calculation, and of course, the target weight of each job in the job set to be scheduled may also be preset.
Step S3200a: and according to the scheduling priority of each job, taking the job with the highest scheduling priority in the job set to be scheduled as the target job.
Specifically, the task scheduling device 100 takes the job with the highest scheduling priority in the job set to be scheduled as the target job according to the scheduling priority of each job.
This method of determining the target job can ensure that jobs with high target weights occupy processor resources first.
In another optional embodiment, as shown in fig. 16, when there is more than one job set to be scheduled, each job set to be scheduled stores jobs of the same job category; optionally, the job category of each job may be the same as the task category of the task to which the job belongs. At this time, step S3000 includes:
step S3100b: and determining the target weight of each job in each job set to be scheduled according to the expected weight and the current historical weight of a plurality of jobs in each job set to be scheduled.
Specifically, the task scheduling device 100 determines the target weight of each job in each set of jobs to be scheduled according to the expected weight and the current historical weight of a plurality of jobs in each set of jobs to be scheduled. The target weight of each job in the job set to be scheduled may be obtained by calculation, and of course, the target weight of each job in the job set to be scheduled may also be preset.
Step S3200b: And taking the job with the largest target weight in each job set to be scheduled as the pre-launch job of the corresponding job category.
Specifically, the task scheduling device 100 takes the job with the largest target weight in each job set to be scheduled as the pre-launch job of the corresponding job category.
Step S3300b: And determining the target job according to the target weight of each pre-launch job.
Specifically, the task scheduling device 100 determines the target job according to the target weight of each pre-launch job. Alternatively, the task scheduling device 100 may compare the target weights of the respective pre-launch jobs and take the pre-launch job with the largest target weight as the target job. If the target weights of the pre-launch jobs are the same, the task scheduling device 100 may determine the target job based on the expected weights of the pre-launch jobs. For example, when the target weights of the respective pre-launch jobs are the same, the task scheduling device 100 may take the pre-launch job with the largest expected weight as the target job.
For example, the task categories may be block (blocking task), cluster (clustering task), and union (join task). The jobs included in a blocking task are blocking jobs, abbreviated as job category B; the jobs included in a clustering task are clustering jobs, abbreviated as job category C; and the jobs included in a join task are join jobs, abbreviated as job category U. Wherein,
the job set one to be scheduled corresponding to the job category U may be represented as follows:
[Table: job set one to be scheduled (job category U): job 1 through job n, with desired weights WU1 through WUn and current history weights HU1 through HUn]
the task scheduling device 100 may calculate the target weight TU1 of job 1 according to the expected weight WU1 and the current history weight HU1 of job 1; similarly, it may calculate the target weights of jobs 2 through n. Further, the task scheduling device 100 may sort the target weights of jobs 1 through n and take the job with the largest target weight as the pre-launch job. For example, the pre-launch job of job set one to be scheduled is job 1.
The job set two to be scheduled corresponding to the job category B may be represented as follows:
[Table: job set two to be scheduled (job category B): job 1 through job m, with desired weights WB1 through WBm and current history weights HB1 through HBm]
Here, the task scheduling device 100 may calculate the target weight TB1 of job 1 according to the desired weight WB1 and the current history weight HB1 of job 1; similarly, it may calculate the target weights of jobs 2 through m. Further, the task scheduling device 100 may sort the target weights of jobs 1 through m and take the job with the largest target weight as the pre-launch job. For example, the pre-launch job of job set two to be scheduled is job 2.
The job set three to be scheduled corresponding to the job category C may be represented as follows:
[Table: job set three to be scheduled (job category C): job 1 through job k, with desired weights WC1 through WCk and current history weights HC1 through HCk]
Among them, the task scheduling device 100 may calculate the target weight TC1 of job 1 according to the desired weight WC1 and the current history weight HC1 of job 1; similarly, it may calculate the target weights of jobs 2 through k. Further, the task scheduling device 100 may sort the target weights of jobs 1 through k and take the job with the largest target weight as the pre-launch job. For example, the pre-launch job of job set three to be scheduled is job 3.
After that, the task scheduling device 100 determines the target job from the above three pre-launch jobs. Specifically, if TU1 is greater than TB2 and TB2 is greater than TC3, then job 1 in job set one to be scheduled may be taken as the target job. If TU1, TB2, and TC3 are equal, the magnitudes of WU1, WB2, and WC3 may be compared: if WU1 is larger than WB2 and WB2 is larger than WC3, then job 1 in job set one to be scheduled may be taken as the target job.
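The per-category selection of steps S3200b and S3300b, including the desired-weight tie-break, might be sketched in Python as follows. The Job container is an illustrative assumption, as is computing the target weight as the difference W - H (one reading of step S3130b below):

from dataclasses import dataclass

@dataclass
class Job:
    name: str
    desired_weight: float   # expected weight W, e.g. WU1
    history_weight: float   # current history weight H, e.g. HU1

    @property
    def target_weight(self) -> float:
        # Assumed reading of step S3130b: target weight T = W - H.
        return self.desired_weight - self.history_weight

def pick_target(job_sets):
    """job_sets maps a job category ('U', 'B', 'C') to its list of jobs
    to be scheduled; returns the globally selected target job."""
    # Step S3200b: one pre-launch job per category (largest target weight).
    pre_launch = [max(jobs, key=lambda j: j.target_weight)
                  for jobs in job_sets.values() if jobs]
    # Step S3300b: the largest target weight wins; equal target weights
    # are broken by the larger desired weight.
    return max(pre_launch, key=lambda j: (j.target_weight, j.desired_weight))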
Optionally, the desired weight of a job is the desired weight of the task to which the job belongs. For example, in a job set to be scheduled, job 1 and job 2 may belong to one task, job 3 and job 4 may belong to another task, and job n may belong to yet another task. The desired weights of job 1 and job 2 are then both equal to the desired weight WB1 of the task to which they belong, and the desired weights of job 3 and job 4 are both equal to the desired weight WB2 of the task to which they belong. Of course, in other embodiments, the desired weights of jobs in the same task may differ.
By scheduling tasks according to job (task) category in this way, processor resources can be utilized more reasonably and scheduling efficiency is improved.
In one embodiment, as shown in fig. 17, step S3100b includes:
Step S3110b: obtaining the expected weight of each job in each job set to be scheduled according to the configuration weight of each job and the total configuration weight of the jobs in that job set.
Specifically, the task scheduling device 100 correspondingly obtains the expected weight of each job in each job set to be scheduled according to the configuration weight of each job in each job set to be scheduled and the total configuration weight of a plurality of jobs in each job set to be scheduled. The configuration weight of each job may be an initial weight of each job, which is included in basic task information of a task to which the job belongs. The desired weight for the job may be equal to a ratio of the configuration weight for the job to the total configuration weight in the set of jobs to be scheduled.
Optionally, the configuration weight of each job in each job set to be scheduled is the configuration weight of the task to which the job belongs; that is, the jobs of the same task share the same configuration weight. In this case, the task scheduling device 100 only needs to calculate the expected weight of each job from the configuration weight of the task to which the job belongs and the total configuration weight of the tasks in the job set to be scheduled. That is, the desired weight of a job may be equal to the ratio of the configuration weight of its task to the total configuration weight of the tasks in the job set to be scheduled.
Continuing the above example, the n jobs in job set one to be scheduled may belong to three tasks: task 1, task 2, and task 3. If the configuration weight of task 1 is denoted S1, that of task 2 is denoted S2, and that of task 3 is denoted S3, then the desired weight WU1 of task 1 may be equal to S1/(S1+S2+S3). Similarly, the desired weight WU2 of task 2 may be equal to S2/(S1+S2+S3). The expected weight of each job in the job set to be scheduled is recorded as the expected weight of the task to which the job belongs. The expected weights of the jobs in job set two and job set three to be scheduled are calculated similarly and are not described again here.
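Under these per-task configuration weights, the desired weights reduce to a simple normalisation. A small Python sketch (task names and values are illustrative only):

def desired_weights(config_weights):
    """config_weights holds the configuration weight per task, e.g.
    {'task1': S1, 'task2': S2, 'task3': S3}; each task's desired
    weight is its share of the total, Si / (S1 + S2 + S3)."""
    total = sum(config_weights.values())
    return {task: w / total for task, w in config_weights.items()}

# desired_weights({'task1': 2.0, 'task2': 1.0, 'task3': 1.0})
# -> {'task1': 0.5, 'task2': 0.25, 'task3': 0.25}; every job of task 1
# then inherits the desired weight 0.5.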
Step S3120b: obtaining the current historical weight corresponding to each job in each job set to be scheduled according to the expected weight of each job in each job set to be scheduled.
Specifically, the task scheduling device 100 obtains the current historical weight corresponding to each job in each job set to be scheduled according to the expected weight of each job in each job set to be scheduled.
Step S3130b: calculating the weight difference between the expected weight and the current historical weight of each job in each job set to be scheduled, and obtaining the target weight of each job according to that weight difference.
Specifically, the task scheduling device 100 calculates a weight difference between an expected weight and a current historical weight of each job in each job set to be scheduled, and obtains a target weight of each job according to the weight difference.
In this method, the target weight of a job is obtained from both its historical weight and its expected weight, so scheduling based on the target weight takes historical scheduling information into account while also considering the configured weight of the current job, which makes the result more reasonable.
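Written out compactly (with the caveat that the text specifies only "a weight difference", not any further arithmetic), this step amounts to T = W - H for each job: a job whose task has recently been scheduled carries a larger current history weight H, so its target weight T shrinks and it yields the processor to jobs that have been waiting.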
In one optional embodiment, the configuration weight of each job in each job set to be scheduled is the configuration weight of the task to which the job belongs, and the desired weight of the job is the desired weight of the task to which the job belongs.
Specifically, the task scheduling device 100 sets the configuration weight of each job in each set of jobs to be scheduled as the configuration weight of the task to which the job belongs, and sets the desired weight of the job as the desired weight of the task to which the job belongs.
In one optional embodiment, as shown in fig. 18, step S3120b includes:
Step S3121b: determining the delay factor corresponding to each job according to the expected weight of each job in each job set to be scheduled.
Specifically, the task scheduling device 100 determines a delay factor corresponding to each job according to the expected weight of each job in each job set to be scheduled and a preset mapping relationship between the expected weight and the delay factor.
Step S3122b: obtaining the current historical weight of each job according to the initial historical weight of each job in each job set to be scheduled and the delay factor corresponding to that job.
Specifically, the task scheduling device 100 obtains the current historical weight of each job in each job set to be scheduled according to the initial historical weight of the job and the delay factor corresponding to the job.
The initial history weight of each job may be the configuration weight of the job, or the history weight from the previous scheduling round.
This implementation introduces a delay factor to adjust the scheduling process, so that jobs with lower weights can still be scheduled in a timely manner.
In one embodiment, step S3121b includes: determining the delay factor corresponding to each job according to the expected weight of each job and a preset mapping relationship.
For example, the preset mapping relationship may be as shown in the following table:
[Table: preset mapping between desired weight and delay factor; larger desired weights correspond to smaller delay factors]
As can be seen from the above table, the larger the desired weight, the smaller the delay factor; that is, the greater the expected weight of a job, the higher its scheduling priority may be.
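The concrete table survives only in the original drawing, so the thresholds in this Python sketch are invented purely to illustrate the stated monotonic rule (larger desired weight, smaller delay factor):

def delay_factor(desired_weight):
    """Map a desired weight in [0, 1] to a delay factor.
    The bins and values below are hypothetical placeholders."""
    if desired_weight >= 0.5:
        return 2
    if desired_weight >= 0.25:
        return 4
    return 8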
In one embodiment, step S3122b includes two cases:
In the first case, if no job of a certain task is selected as the target job in the current scheduling round, the ratio of the initial history weight of each job of that task to its delay factor is used as the adjustment factor of each job, and the difference between the initial history weight of each job and the corresponding adjustment factor is used as the current history weight of each job.
In the second case, if a job of a certain task is selected as the target job in the current scheduling round, the ratio of the initial history weight of each job of that task to its delay factor is taken as a first adjustment factor of each job, the ratio of the maximum value of the delay factor to the delay factor corresponding to each job is taken as a second adjustment factor of each job, and the current history weight is calculated from the initial history weight of each job, the first adjustment factor, and the second adjustment factor.
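Read literally, the two cases could be sketched as below. The disclosure names the adjustment factors but not how the second case combines them, so the subtraction-plus-addition used here is only one plausible interpretation, chosen so that waiting tasks see their history weight fall (raising their target weight) while a just-scheduled task sees it rise:

def update_history_weight(initial_h, delay, max_delay, task_was_scheduled):
    """Return the current history weight of a job.

    initial_h          -- initial history weight of the job
    delay              -- delay factor corresponding to the job
    max_delay          -- maximum value of the delay factor
    task_was_scheduled -- True if a job of this task was the target job
    """
    if not task_was_scheduled:
        # Case one: adjustment factor = initial_h / delay, and the
        # current history weight is the difference.
        return initial_h - initial_h / delay
    # Case two: first adjustment factor = initial_h / delay,
    # second adjustment factor = max_delay / delay; the combination
    # below is an assumption, not stated in the text.
    first_adj = initial_h / delay
    second_adj = max_delay / delay
    return initial_h - first_adj + second_adj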
In an optional embodiment, after a plurality of jobs of the same task are newly added to a job set to be scheduled, or after all jobs of the same task have been launched, the expected weight and the initial historical weight of each job in that job set are updated.
Specifically, if a plurality of jobs of the same task are newly added to a job set to be scheduled, or after all jobs of the same task have been launched, the task scheduling device 100 updates the expected weight and the initial historical weight of each job in that job set to be scheduled.
Updating the expected weight and the initial historical weight of each job in this way ensures that the information used in the task scheduling process remains accurate.
As an alternative embodiment, as shown in fig. 19, step S3300b includes:
Step S3310b: if the target weights of the pre-launch jobs are the same, determining the target job according to the expected weight of each pre-launch job.
Specifically, if the target weights of the pre-launch jobs are the same, the task scheduling device 100 determines the target job according to the expected weight of each pre-launch job.
Step S3320b: if the target weights of the pre-launch jobs differ, taking the pre-launch job with the largest target weight as the target job.
Specifically, if the target weights of the pre-launch jobs differ, the task scheduling device 100 takes the pre-launch job with the largest target weight as the target job.
This method of selecting the target job ensures that high-weight tasks can quickly preempt processor resources, optimizing the scheduling result.
The technical features of the above embodiments can be combined arbitrarily. For brevity, not all possible combinations are described; however, as long as a combination of technical features contains no contradiction, it should be considered within the scope of this specification.
The above embodiments express only several implementations of the present application, and their description is specific and detailed, but they should not be understood as limiting the scope of the invention. It should be noted that a person skilled in the art can make several variations and modifications without departing from the concept of the present application, and these fall within the protection scope of the present application. Therefore, the protection scope of this patent shall be subject to the appended claims.

Claims (25)

1. A method for task scheduling, the method comprising:
acquiring decomposition information and all task information of a task and state information of a processor;
according to the decomposition information and all task information of each task and the state information of the processor, respectively matching each job of the task with the processor, and adding the job successfully matched with the processor to a job set to be scheduled;
selecting target jobs from the job set to be scheduled according to the target weight of each job in the job set to be scheduled, and obtaining scheduling information, wherein the scheduling information is used for determining the execution sequence of the jobs on a processor; the target weight of each job in the job set to be scheduled is determined according to the following method:
correspondingly obtaining the expected weight of each job in each job set to be scheduled according to the configuration weight of each job in each job set to be scheduled and the total configuration weight of a plurality of jobs in each job set to be scheduled; the configuration weight of each job is the configuration weight of the task to which the job belongs;
obtaining the current historical weight corresponding to each job in each job set to be scheduled according to the expected weight of each job in each job set to be scheduled; the current historical weight is the configuration weight of each job or the historical weight in the last scheduling process;
and determining the target weight of each job in the job set to be scheduled according to the expected weight and the current historical weight of each job in the job set to be scheduled.
2. The method of claim 1, further comprising:
and if one or more jobs in the task are not successfully matched with the processor within the preset time, acquiring a scheduling failure signal of the task.
3. The method according to claim 1, wherein the step of selecting a target job from the set of jobs to be scheduled according to the target weight of each job in the set of jobs to be scheduled comprises:
determining the scheduling priority of each job according to the target weight of each job in the job set to be scheduled;
and according to the scheduling priority of each job, taking the job with the highest scheduling priority in the job set to be scheduled as the target job.
4. The method according to claim 1, wherein when the number of job sets to be scheduled is more than one, each of the job sets to be scheduled stores jobs of the same job category, and the step of selecting a target job from the job set to be scheduled according to the target weight of each job in the job set to be scheduled further comprises:
taking the job with the largest target weight in each job set to be scheduled as a pre-launch job of the corresponding job category;
and determining the target job according to the target weight of each pre-launch job.
5. The method of claim 1, wherein the step of determining a target weight for each job in each set of jobs to be scheduled based on a current historical weight and desired weights for a plurality of jobs in each set of jobs to be scheduled comprises:
and calculating a weight difference value between the expected weight of each job in each job set to be scheduled and the current historical weight, and obtaining the target weight of each job according to the weight difference value.
6. The method of claim 1, wherein the desired weight of the job is a desired weight of a task to which the job belongs.
7. The method according to claim 1, wherein the step of obtaining the current historical weight corresponding to each job in each set of jobs to be scheduled according to the expected weight of each job in each set of jobs to be scheduled comprises:
determining a delay factor corresponding to each job according to the expected weight of each job in each job set to be scheduled;
and obtaining the current historical weight of the job according to the initial historical weight of each job in each job set to be scheduled and the delay factor corresponding to the job.
8. The method according to claim 7, wherein the step of determining the delay factor corresponding to each of the jobs according to the expected weight of each of the jobs in each of the sets of jobs to be scheduled comprises:
and determining a delay factor corresponding to each job according to the expected weight of each job and a preset mapping relation.
9. The method according to claim 7, wherein the step of obtaining the current history weight of the job according to the initial history weight of each job in each set of jobs to be scheduled and the delay factor corresponding to the job comprises:
if the jobs of a certain task are not selected as target jobs in the current scheduling, taking the ratio of the initial historical weight of each job of the certain task to the delay factor as the adjustment factor of each job, and taking the difference value of the initial historical weight of each job and the corresponding adjustment factor as the current historical weight of each job;
if the job of a certain task is selected as the target job in the scheduling, the ratio of the initial history weight of each job of the certain task to the delay factor is used as a first adjustment factor of each job, the ratio of the maximum value of the delay factor to the delay factor corresponding to each job is used as a second adjustment factor of each job, and the current history weight is obtained by calculation according to the initial history weight of each job, the first adjustment factor and the second adjustment factor.
10. The method of claim 7, further comprising:
and if a plurality of jobs of the same task are newly added in a certain job set to be scheduled, or after the plurality of jobs of the same task are all transmitted, updating the expected weight and the initial historical weight of each job in the certain job set to be scheduled.
11. The method of claim 4, wherein the step of determining the target job based on the target weight of each of the pre-launch jobs comprises:
if the target weights of the pre-launch jobs are the same, determining the target job according to the expected weight of each pre-launch job;
and if the target weights of the pre-launch jobs differ, taking the pre-launch job with the largest target weight as the target job.
12. The method of claim 1, further comprising:
and acquiring a processor identifier corresponding to the job successfully matched with the processor, wherein the processor identifier is used for identifying the identity of the processor.
13. A task scheduling apparatus, characterized in that the apparatus comprises:
the first read-write circuit is used for acquiring decomposition information and all task information of the tasks and state information of a processor;
the matching circuit is used for respectively matching each job of the task with the processor according to the decomposition information and all task information of each task and the state information of the processor, and adding the job successfully matched with the processor to a job set to be scheduled; and
the selection circuit is used for selecting a target job from the job set to be scheduled according to the target weight of each job in the job set to be scheduled to obtain scheduling information, and the scheduling information is used for determining the execution sequence of the jobs on a processor;
wherein the selection circuit comprises an operator for:
correspondingly obtaining the expected weight of each job in each job set to be scheduled according to the configuration weight of each job in each job set to be scheduled and the total configuration weight of a plurality of jobs in each job set to be scheduled; the configuration weight of each job is the configuration weight of the task to which the job belongs;
obtaining the current historical weight corresponding to each job in each job set to be scheduled according to the expected weight of each job in each job set to be scheduled; the current historical weight is the configuration weight of each job or the historical weight in the last scheduling process;
and determining the target weight of each job in the job set to be scheduled according to the expected weight and the current historical weight of each job in the job set to be scheduled.
14. The apparatus of claim 13, wherein the matching circuit is further configured to acquire a scheduling failure signal for the task if one or more jobs of the task are not successfully matched with the processor within a predetermined time.
15. The apparatus of claim 13, wherein the selection circuit further comprises a selector, wherein:
the operator is used for determining the scheduling priority of each job according to the target weight of each job in the job set to be scheduled; and
the selector is used for taking the job with the highest scheduling priority in the job set to be scheduled as the target job according to the scheduling priority of each job.
16. The apparatus according to claim 13, wherein when the number of job sets to be scheduled is more than one, each of the job sets to be scheduled stores jobs of the same job category, and the selection circuit further comprises a selector, wherein:
the operator is used for taking the job with the largest target weight in each job set to be scheduled as the pre-launch job of the corresponding job category; and
the selector is used for determining the target job according to the target weight of each pre-launch job.
17. The apparatus of claim 16, wherein the operator comprises:
and the third operation unit is used for calculating a weight difference value of the expected weight and the current historical weight of each job in each job set to be scheduled, and acquiring the target weight of each job according to the weight difference value.
18. The apparatus of claim 13, wherein the operator further comprises:
the delay subunit is used for determining a delay factor corresponding to each job according to the expected weight of each job; and
and the updating subunit is used for determining the current history weight of the job according to the initial history weight of each job and the delay factor corresponding to the job.
19. The apparatus according to claim 18, wherein the updating subunit is further configured to determine a delay factor corresponding to each job according to the expected weight of each job and a preset mapping relationship.
20. The apparatus of claim 18, wherein the update subunit is further configured to:
if all the jobs of a certain task are not selected as target jobs in the current scheduling, taking the ratio of the initial history weight of each job of the certain task to the delay factor as the adjustment factor of each job, and taking the difference value of the initial history weight of each job and the adjustment factor corresponding to the initial history weight as the current history weight of the job;
if the job of a certain task is selected as the target job in the scheduling, the ratio of the initial history weight of each job of the certain task to the delay factor is used as a first adjustment factor of each job, the ratio of the maximum value of the delay factor to the delay factor corresponding to the job is used as a second adjustment factor of each job, and the current history weight of each job is obtained by calculation according to the initial history weight of each job, the first adjustment factor and the second adjustment factor.
21. The apparatus according to claim 18, wherein the updating subunit is further configured to update the expected weight and the initial historical weight of each job in the set of jobs to be scheduled corresponding to the job category if a plurality of jobs of the same task are newly added to the set of jobs to be scheduled corresponding to the job category or after a plurality of jobs of the same task are all transmitted.
22. The apparatus of claim 15 or 16, wherein the selector is further configured to:
when the target weights of the pre-launch jobs are the same, determining the target job according to the expected weight of each pre-launch job;
and when the target weights of the pre-launch jobs differ, taking the pre-launch job with the largest target weight as the target job.
23. The apparatus of claim 13, wherein the first read-write circuit is further configured to:
after the scheduling information is obtained, acquire all task information of the currently scheduled task to which the target job belongs from a task cache device, and transmit the scheduling information together with all task information and decomposition information of the currently scheduled task to a second processor.
24. The apparatus of claim 13, wherein the selection circuit is further configured to transmit the scheduling information to the corresponding processor after setting the processor lock signal to a high level according to the scheduling information, and to set the corresponding processor lock signal to a low level after completing the transmission of the scheduling information.
25. A task scheduler, characterized in that the task scheduler comprises a task scheduling device according to any of claims 13-24.
CN201811179192.1A 2018-10-10 2018-10-10 Task scheduling method Active CN111026518B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201811179192.1A CN111026518B (en) 2018-10-10 2018-10-10 Task scheduling method
PCT/CN2019/110273 WO2020073938A1 (en) 2018-10-10 2019-10-10 Task scheduler, task processing system, and task processing method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811179192.1A CN111026518B (en) 2018-10-10 2018-10-10 Task scheduling method

Publications (2)

Publication Number Publication Date
CN111026518A (en) 2020-04-17
CN111026518B (en) 2022-12-02

Family

ID=70191929

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811179192.1A Active CN111026518B (en) 2018-10-10 2018-10-10 Task scheduling method

Country Status (1)

Country Link
CN (1) CN111026518B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111026522A (en) * 2018-10-10 2020-04-17 上海寒武纪信息科技有限公司 Task scheduling device, task scheduler, and task processing device
CN111459645B (en) * 2020-04-22 2023-06-30 百度在线网络技术(北京)有限公司 Task scheduling method and device and electronic equipment
CN113760483B (en) * 2020-06-29 2024-10-18 北京沃东天骏信息技术有限公司 Method and device for executing tasks


Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5437032A (en) * 1993-11-04 1995-07-25 International Business Machines Corporation Task scheduler for a miltiprocessor system
CN103365708A (en) * 2012-04-06 2013-10-23 阿里巴巴集团控股有限公司 Method and device for scheduling tasks
CN105373429A (en) * 2014-08-20 2016-03-02 腾讯科技(深圳)有限公司 Task scheduling method, device and system

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Research on Task Scheduling Management in Embedded MPSoC Systems; Li Cheng; China Master's Theses Full-text Database, Information Science and Technology Series; 2011-03-15 (No. 03); full text *

Also Published As

Publication number Publication date
CN111026518A (en) 2020-04-17

Similar Documents

Publication Publication Date Title
CN111026521B (en) Task scheduler, task processing system and task processing method
CN107038069B (en) Dynamic label matching DLMS scheduling method under Hadoop platform
TWI848007B (en) Neural processing unit
US3916383A (en) Multi-processor data processing system
CN111026518B (en) Task scheduling method
US3913070A (en) Multi-processor data processing system
US11609792B2 (en) Maximizing resource utilization of neural network computing system
US8688956B2 (en) Execution engine for executing single assignment programs with affine dependencies
US20070226696A1 (en) System and method for the execution of multithreaded software applications
CN104123304A (en) Data-driven parallel sorting system and method
CN105027075A (en) Processing core having shared front end unit
US11875425B2 (en) Implementing heterogeneous wavefronts on a graphics processing unit (GPU)
CN118277490B (en) Data processing system, data synchronization method, electronic device, and storage medium
CN111026540B (en) Task processing method, task scheduler and task processing device
CN111047045A (en) Distribution system and method for machine learning operation
CN111258655A (en) Fusion calculation method and readable storage medium
CN111026523A (en) Task scheduling control method, task scheduler and task processing device
CN111026522A (en) Task scheduling device, task scheduler, and task processing device
CN111026517B (en) Task decomposition device and task scheduler
CN112463218B (en) Instruction emission control method and circuit, data processing method and circuit
EP3108358A2 (en) Execution engine for executing single assignment programs with affine dependencies
CN111026515B (en) State monitoring device, task scheduler and state monitoring method
CN111026514B (en) Task scheduling method
US20240220794A1 (en) Training systems and operating method thereof
WO2020073938A1 (en) Task scheduler, task processing system, and task processing method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant