CN117271096A - Scheduling method, electronic device, and computer-readable storage medium - Google Patents
- Publication number
- CN117271096A (application number CN202311330660.1A)
- Authority
- CN
- China
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
- G06F9/48—Program initiating; Program switching, e.g. by interrupt
- G06F9/4806—Task transfer initiation or dispatching
- G06F9/4843—Task transfer initiation or dispatching by program, e.g. task dispatcher, supervisor, operating system
- G06F9/4881—Scheduling strategies for dispatcher, e.g. round robin, multi-level priority queues
Abstract
The present disclosure provides a scheduling method for processing tasks of autonomous driving data, including: receiving a newly added first task and determining the value of at least one weight calculation parameter of the first task; calculating the scheduling weight of the first task according to the value of the at least one weight calculation parameter and a weight score table, and updating the scheduling weight of a second task already in the task queue according to the value of its waiting duration; sorting the scheduling weight of the first task and the updated scheduling weight of the second task in descending order, and inserting the first task into the task queue according to the sorting result to obtain an updated task queue; and executing the task at the head of the updated task queue. Because the scheduling weight of each task is calculated by querying the weight score table, and the queue order is updated when a new task joins the queue, the method achieves dynamic adjustment of task scheduling priority and improves the processing efficiency of autonomous driving data.
Description
Technical Field
The present disclosure relates to the field of autonomous driving, and in particular to a scheduling method, an electronic device, and a computer-readable storage medium.
Background
With the rapid development of autonomous driving technology, the functions involved in autonomous driving have become increasingly rich, and the precision requirements on each function increasingly strict. Autonomous driving data is the key raw material for growing and iterating autonomous driving capability, and various types of processing tasks typically need to be performed on this data in order to evaluate and refine autonomous driving functions. When the computing power of the resource pool is limited, these processing tasks cannot all be executed simultaneously and often must queue in order of submission.
However, determining the queuing order by submission order can cause problems: a higher-priority task submitted later cannot be executed first, and an earlier-submitted task that consumes a great deal of execution time can prevent subsequent tasks from running. These problems hinder the efficient use of computing resources and limit data processing efficiency.
Disclosure of Invention
To improve the queuing of processing tasks and thereby improve the processing efficiency of autonomous driving data, the present disclosure provides a scheduling method and apparatus for processing tasks of autonomous driving data, an electronic device, a computer-readable storage medium, and a computer program product.
A first aspect of the present disclosure provides a scheduling method for processing tasks of autonomous driving data, the scheduling method including: receiving a newly added first task and determining the value of at least one weight calculation parameter of the first task; calculating the scheduling weight of the first task according to the value of the at least one weight calculation parameter and a weight score table, and updating the scheduling weight of a second task already in a task queue according to the value of the second task's waiting duration and the weight score table, wherein the weight score table prescribes a plurality of weight calculation parameters and a score calculation rule for each, the plurality of weight calculation parameters including the at least one weight calculation parameter and the waiting duration; sorting the scheduling weight of the first task and the updated scheduling weight of the second task in descending order, and inserting the first task into the task queue according to the sorting result to obtain an updated task queue; and executing the task at the head of the updated task queue.
In one implementation of the first aspect, calculating the scheduling weight of the first task according to the value of the at least one weight calculation parameter and the weight score table includes: querying the score calculation rule of the at least one weight calculation parameter in the weight score table to determine the weight score corresponding to the value of each weight calculation parameter of the first task; and calculating the scheduling weight of the first task from the weight scores corresponding to those values.
In one implementation of the first aspect, the at least one weight calculation parameter includes at least one of a manual priority, a data amount, a computation amount, a task type, a task source, and an associated function, wherein the value of the manual priority is positively correlated with its weight score, while the values of the data amount and of the computation amount are each inversely correlated with their weight scores.
In one implementation of the first aspect, updating the scheduling weight of the second task according to the value of its waiting duration and the weight score table includes: determining the submission time of the first task, and taking the waiting duration of the second task to be the time difference between the submission time of the first task and the submission time of the second task; and calculating the updated scheduling weight of the second task from the weight scores corresponding to the values of its at least one weight calculation parameter and the weight score corresponding to the value of its waiting duration.
In another implementation of the first aspect, updating the scheduling weight of the second task according to the value of its waiting duration and the weight score table includes: determining the submission time of the first task, and taking the waiting duration of the second task to be the time difference between the submission time of the first task and the time at which the scheduling weight of the second task was last updated; and calculating the updated scheduling weight of the second task from its last-updated scheduling weight and the weight score corresponding to the value of its waiting duration.
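The second update variant can be sketched as follows. This is an illustrative sketch only: the per-minute waiting score and the function names are assumptions, not the patent's concrete rules.

```python
def waiting_score(seconds: float) -> float:
    """Hypothetical score calculation rule for the waiting-duration
    parameter: one point per minute of waiting, so long-waiting
    tasks gradually rise toward the queue head."""
    return seconds / 60.0

def update_weight_incremental(last_weight: float,
                              last_update_time: float,
                              new_submit_time: float) -> float:
    """Incremental variant: the waiting duration is the time difference
    between the new task's submission time and the moment the second
    task's scheduling weight was last updated, so only the score for
    that increment is added to the previously updated weight."""
    delta = new_submit_time - last_update_time
    return last_weight + waiting_score(delta)
```

The benefit of this variant is that the second task's other weight scores need not be recomputed; only the waiting-duration increment contributes.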
In one implementation of the first aspect, the scheduling method further includes: adding a new weight calculation parameter, together with its score calculation rule, to the weight score table, and/or changing the score calculation rule of any weight calculation parameter already in the table.
In one implementation of the first aspect, executing the task at the head of the updated task queue includes: determining the task at the head of the updated task queue to be the task to be processed, and determining a first resource amount required to process it; judging whether the resource pool currently has the first resource amount available; if so, calling the corresponding resources from the resource pool and executing the task to be processed; if not, temporarily not executing the task to be processed and waiting for resources to be released.
In one implementation of the first aspect, temporarily not executing the task to be processed and waiting for resources to be released includes: after waiting a preset duration, again judging whether the resource pool currently has the first resource amount; if not, waiting the preset duration and judging again, until the judgment succeeds; and then calling the corresponding resources from the resource pool and executing the task to be processed.
In another implementation of the first aspect, temporarily not executing the task to be processed and waiting for resources to be released includes: after waiting a preset duration, re-determining the task now at the head of the updated task queue to be the updated task to be processed, and determining the updated first resource amount required to process it; judging whether the resource pool currently has the updated first resource amount; if so, calling the corresponding resources from the resource pool and executing the updated task to be processed; if not, temporarily not executing the updated task to be processed and, after again waiting the preset duration, returning to the step of re-determining the head task and its required resource amount.
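The re-determination variant can be sketched as follows. The `ResourcePool` class, the tuple layout of queue entries, and the retry cap are assumptions for illustration; the patent only prescribes the wait-and-re-judge loop.

```python
import time

class ResourcePool:
    """Minimal stand-in for the resource pool (illustrative only)."""
    def __init__(self, capacity: int):
        self.free = capacity
    def has(self, amount: int) -> bool:
        return self.free >= amount
    def allocate(self, amount: int) -> None:
        self.free -= amount

def run_head_task(queue: list, pool: ResourcePool,
                  interval_s: float = 0.0, max_retries: int = 3):
    """Queue entries are assumed to be (weight, task_id, required_amount)
    tuples, largest weight first. After each wait, the head of the
    *current* queue is re-determined, since a higher-priority task may
    have joined while waiting."""
    for _ in range(max_retries):
        if not queue:
            return None
        weight, task_id, needed = queue[0]   # re-read the head task
        if pool.has(needed):                 # first resource amount available?
            queue.pop(0)
            pool.allocate(needed)
            return task_id                   # stands in for executing the task
        time.sleep(interval_s)               # wait, then judge again
    return None
```

A real scheduler would loop indefinitely rather than cap the retries; the cap here just keeps the sketch terminating.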
A second aspect of the present disclosure provides a scheduling apparatus for processing tasks of autonomous driving data, the scheduling apparatus comprising: a receiving module configured to receive the newly added first task and determine the value of at least one weight calculation parameter of the first task; a calculation module configured to calculate the scheduling weight of the first task according to the value of the at least one weight calculation parameter and a weight score table, and to update the scheduling weight of a second task already in a task queue according to the value of the second task's waiting duration and the weight score table, wherein the weight score table prescribes a plurality of weight calculation parameters and a score calculation rule for each, the plurality of weight calculation parameters including the at least one weight calculation parameter and the waiting duration; a sorting module configured to sort the scheduling weight of the first task and the updated scheduling weight of the second task in descending order and to insert the first task into the task queue according to the sorting result, obtaining an updated task queue; and an execution module configured to execute the task at the head of the updated task queue.
A third aspect of the present disclosure provides an electronic device, comprising: a memory for storing a computer program; a processor for executing a computer program to implement the scheduling method as provided in the first aspect of the present disclosure.
A fourth aspect of the present disclosure provides a computer readable storage medium having stored thereon a computer program which, when executed by a processor, causes the processor to implement the scheduling method as provided in the first aspect of the present disclosure.
A fifth aspect of the present disclosure provides a computer program product comprising instructions which, when executed by a processor, cause the processor to implement a scheduling method as provided in the first aspect of the present disclosure.
According to the scheduling method and apparatus for processing tasks of autonomous driving data, the electronic device, the computer-readable storage medium, and the computer program product of the present disclosure, the scheduling weight calculation of the new task and the scheduling weight update of existing tasks are triggered only when a new task is submitted. The scheduling weights of all tasks in the queue therefore need not be recalculated, nor the queue order re-determined, on every scheduling action, which saves a large amount of computing resources and effectively improves the processing efficiency of autonomous driving data. In addition, because the scheduling weights are calculated by querying the weight score table, an easily adjustable, framework-style calculation logic is provided: the calculation program need not be modified manually, avoiding complex procedures and increased labor costs.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present disclosure, the drawings required for the description of the embodiments will be briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present disclosure, and other drawings may be obtained according to these drawings without inventive effort for a person of ordinary skill in the art.
Fig. 1 is a schematic diagram of a scheduling system according to an embodiment of the disclosure.
Fig. 2 is a flow chart illustrating a scheduling method according to an embodiment of the disclosure.
Fig. 3 is a flowchart illustrating determining a scheduling weight of a first task according to an embodiment of the disclosure.
Fig. 4 is a flowchart illustrating updating of the scheduling weight of the second task according to an embodiment of the disclosure.
Fig. 5 is a flowchart illustrating updating of the scheduling weight of the second task according to another embodiment of the disclosure.
FIG. 6 is a schematic diagram of task submission time in an embodiment of the disclosure.
FIG. 7 is a flow chart illustrating a task execution process according to an embodiment of the present disclosure.
FIG. 8 is a flow chart illustrating a task execution in another embodiment of the present disclosure.
Fig. 9 is a flowchart illustrating a scheduling method according to another embodiment of the disclosure.
Fig. 10 is a schematic structural diagram of a scheduling apparatus according to an embodiment of the disclosure.
Fig. 11 is a schematic diagram of an electronic device according to an embodiment of the disclosure.
Detailed Description
The technical solutions provided by the present disclosure will be clearly and completely described below with reference to the accompanying drawings of the embodiments of the present disclosure. It will be apparent that the described embodiments are only some, but not all, of the embodiments of the present disclosure. All other embodiments obtained by one of ordinary skill in the art based on the embodiments in this disclosure are within the scope of the present disclosure.
Autonomous driving technology evolves rapidly through continuous renewal and improvement. To implement more autonomous driving functions and upgrade each of them, a large amount of autonomous driving data (such as camera data, lidar data, map data, and vehicle bus data) is generally collected, and processing this data feeds back into the improvement loop for autonomous driving functions. The data includes not only data collected on real roads but also data generated by scene simulation systems and the like; its volume is enormous, and the number of data processing tasks is correspondingly large.
To perform processing tasks on autonomous driving data, developers typically build a resource pool with limited computing power based on factors such as the computing capacity and cost of the available resources. When different data processing tasks are submitted, the resource pool is used to perform them. However, because the computing power of the resource pool is limited, multiple data processing tasks may queue while waiting to be executed.
To improve data processing efficiency, a priority policy can be applied to the queuing process: the scheduling weights of different processing tasks are determined and sorted by the policy, so that the high-priority task is placed at the head of the queue and executed first. In the related art, priority policies mainly include a first-in-first-out scheme that sorts by submission order and a manual maintenance scheme that sorts by manually specified priorities. However, the first-in-first-out scheme considers only fairness in time, ignores other factors affecting a task's importance, and may cause important or urgent tasks to be executed late; the manual maintenance scheme depends on manual operation and is therefore weak in automation, high in labor cost, and poor in feasibility.
In view of the foregoing, embodiments of the present disclosure provide a scheduling method applicable to processing tasks of automatic driving data. For easy understanding, an exemplary application scenario of the scheduling method provided by the embodiment of the present disclosure is described below with reference to fig. 1.
Fig. 1 is a schematic diagram of a scheduling system 100 according to an embodiment of the disclosure. As shown in fig. 1, the scheduling system 100 may include a vehicle terminal 110, a server 120, a task generating device 130, a priority determiner 140, a processor 150, and a resource pool 160.
The vehicle terminal 110 may be a terminal device installed in an autonomous mobile device, such as a car with an autonomous driving function, that continuously collects real-time data through cameras, lidar, and other sensors, and can also collect real-time status data of the vehicle itself through an on-board monitoring system. Alternatively, the vehicle terminal 110 may be a virtual module in a scene simulation system that generates experimental data simulating real autonomous driving scenarios from input data or process data.
The vehicle terminal 110 and the server 120 may be communicatively connected, and the collected or generated autopilot data may be transmitted to the server 120, so that a developer may use the autopilot data to perform various processing tasks such as model training, function debugging, and the like.
The server 120 may be configured to store the autopilot data transmitted by the vehicle terminal 110 and provide the autopilot data to the communicatively coupled task generating device 130. Optionally, the server 120 may also perform basic processing on the autopilot data, such as classification, labeling, etc.
The task generating device 130 can be communicatively coupled to the server 120 to obtain the autopilot data from the server 120. The task generating device 130 may include one or more electronic devices, which may be, for example, electronic devices used by a developer for the developer to establish processing tasks based on autopilot data; alternatively, the task generating device 130 may automatically generate the processing task of the automatic driving data according to the preset task generating logic. Here, the processing tasks of the autopilot data may include, for example: simulation test tasks, model training tasks, model evaluation tasks, algorithm training tasks, algorithm evaluation tasks, and the like.
The processor 150 is capable of receiving processing tasks submitted by the task generating device 130 and invoking resources from the resource pool 160 to perform the processing tasks. The processing tasks submitted by the task generating device 130 may be submitted to the priority decider 140 before reaching the processor 150. The priority determiner 140 may determine the scheduling weights of different processing tasks and order the processing tasks according to a preset priority determination policy, thereby placing the high-priority task at the head of the queue. The processor 150 sequentially performs resource scheduling and execution on each processing task in the order from the head of the queue to the tail of the queue, whereby the top-of-queue high-priority task can be preferentially executed.
It should be understood that the vehicle terminal 110, the server 120, the task generating device 130, the priority determiner 140, the processor 150, and the resource pool 160 in the scheduling system 100 shown in fig. 1 are merely illustrative, and the number may be one or more, and may be designed according to actual needs. The server 120, the task generating device 130, the priority determiner 140, the processor 150, and the resource pool 160 may be integrated in the same device, or may be provided in different devices separately or in groups, and the specific implementation form of the scheduling system 100 is not limited in this disclosure.
Against this background, in some schemes the priority determiner 140 in the scheduling system 100 may apply a priority policy based on a variety of influencing factors. When a task needs to be scheduled from the queue, the parameter value of every task in the queue for each influencing factor is first confirmed; the scheduling weights of all tasks in the queue are then computed from these values according to preset calculation rules; finally, the task with the largest scheduling weight is selected for scheduling. With this scheme, different influencing factors are considered comprehensively during scheduling, yielding a more complete scheduling weight for each task and greatly improving data processing efficiency.
However, in this scheme the scheduling weights of all tasks in the queue must be recalculated on every scheduling action. If the tasks in the queue have not changed since the last scheduling, the ordering will not change after recalculation, and the calculation wastes computing resources and time. Moreover, relative to the last scheduling, most tasks in the queue may differ only in waiting duration while the other influencing factors stay unchanged; recalculating the scheduling weights of all tasks from all influencing factors further aggravates the waste.
In view of the foregoing, embodiments of the present disclosure provide a scheduling method, which aims to solve the foregoing problems.
Fig. 2 is a flow chart illustrating a scheduling method according to an embodiment of the disclosure. The scheduling method may be performed, for example, by the priority determiner 140 in the scheduling system 100 shown in fig. 1. As shown in fig. 2, the method may include the following steps S210 to S250.
Step S210: the newly added first task is received and a value of at least one weight calculation parameter of the first task is determined.
Specifically, the newly added first task is a data processing task that has just been submitted and requires the resource pool 160 to allocate computing resources for its execution. The first task may be a single task or may consist of multiple sub-tasks. When the first task is received, its value for each of the at least one weight calculation parameter can be determined from the task's attribute information.
Here, a weight calculation parameter is a parameter that reflects a relevant characteristic of a data processing task; in the present disclosure, the weight calculation parameters serve as the influencing factors for determining the priority weight of each data processing task.
It can be appreciated that the weight calculation parameters can be set and modified according to actual requirements; embodiments of the present disclosure do not limit their specific content or setting logic.
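As a sketch of step S210 under stated assumptions (the data structure, field names, and units below are illustrative, not taken from the patent):

```python
from dataclasses import dataclass

@dataclass
class Task:
    """Hypothetical representation of a data processing task and its
    weight calculation parameters (all field names are illustrative)."""
    task_id: str
    submit_time: float                 # submission time, epoch seconds
    manual_priority: str = "low"       # "high" / "medium" / "low"
    data_amount_gb: float = 0.0        # size of the input data
    compute_amount: float = 0.0        # estimated compute load
    task_type: str = "simulation_test"
    scheduling_weight: float = 0.0

def get_weight_parameters(task: Task) -> dict:
    """Step S210: collect the value of each weight calculation
    parameter from the task's attribute information."""
    return {
        "manual_priority": task.manual_priority,
        "data_amount": task.data_amount_gb,
        "compute_amount": task.compute_amount,
        "task_type": task.task_type,
    }
```

A real system would derive these values from task metadata submitted by the task generating device; the dataclass simply makes the parameter set explicit.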
Step S220: calculate the scheduling weight of the first task according to the value of the at least one weight calculation parameter of the first task and the weight score table.
Here, the weight score table prescribes a plurality of weight calculation parameters and a score calculation rule for each weight calculation parameter.
Specifically, after the first task is received, its attribute information may be compared with the plurality of weight calculation parameters prescribed in the weight score table. When the first task is determined to have at least one of those weight calculation parameters, the weight score table is further queried to determine the score corresponding to the first task's value for each such parameter, and the scheduling weight of the first task is calculated from those scores.
Illustratively, in one embodiment, as shown in FIG. 3, step S220 may be performed through the following steps S310 to S320.
Step S310: and inquiring a score value calculation rule of at least one weight calculation parameter in the weight score value table to determine a weight score value corresponding to the value of each weight calculation parameter of the first task.
In a specific example, the weight calculation parameters may include at least one of a human priority, a data amount, a calculation amount, a task type, a task source, and an affiliated function. Accordingly, the weight score table defines score calculation rules corresponding to the weight calculation parameters.
For example, in an exemplary application scenario, a weight score table as shown in Table 1 may be set for processing tasks of autonomous driving data.
Table 1:
It should be understood that the scores assigned to the weight calculation parameters in the table may be set according to actual business requirements, for example to support the prioritized execution of tasks from mass-production projects of autonomous vehicles, tasks belonging to high-priority functions, task types with complicated computation processes, tasks manually assigned a high priority, or tasks with a small computation amount.
Here, the manual priority is a priority pre-assigned to a task manually. The value of the manual priority is positively correlated with the corresponding weight score: the higher the manual priority, the higher its weight score. As shown in the table above, the values of the manual priority may include high, medium, and low, with corresponding weight scores of 1000, 500, and 0, respectively.
For example, when a newly added first task needs urgent processing, the task generating device that submitted it can set its manual priority to "high", so that the task obtains the highest weight score of 1000. Even if the task's other parameters are not dominant, manual priority adjustment makes it likely to overtake other tasks and be queued at the head. Conversely, when the newly added first task is not urgent, no manual intervention is needed: the manual priority can be left at its default value of "low", and the corresponding weight score is 0.
Therefore, introducing the manual-priority weight calculation parameter allows the task scheduling weights to be adjusted automatically and dynamically, while manual intervention can still guarantee the prioritized execution of high-priority tasks, improving the flexibility of task scheduling. It will be appreciated that in other examples the manual priority may be given more levels, such as levels 1 to 5, or the user may be allowed to input the score directly.
Optionally, the value of the data amount is inversely related to the corresponding weight score: the larger the data amount, the lower its weight score. It can be understood that the larger the data amount, the longer the processing takes; to avoid important tasks being delayed, tasks with larger data amounts can be set to be processed relatively later, improving the flexibility of adjusting the queue order.
Optionally, the value of the calculation amount is inversely related to the corresponding weight score: the larger the calculation amount, the lower its weight score. Here, the calculation amount can be derived from the task type and the data amount. It can be understood that the calculation amount is positively correlated with the task's processing time; to avoid important tasks being delayed, tasks with larger calculation amounts can be set to be processed relatively later, improving the flexibility of adjusting the queue order.
By introducing the two inversely related weight calculation parameters, data amount and calculation amount, the priority of data processing tasks that occupy more computing resources or take longer to run can be appropriately reduced, thereby avoiding blocking of the task queue to a certain extent.
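As a sketch of how the inversely related scores might be computed: the 1 TB baseline and the convention 1 TB = 1000 GB below are inferred from the worked examples later in this text, not stated as a fixed rule, and the function names are illustrative.

```python
def data_volume_score(data_gb):
    # Inferred from the worked examples: score = 1 TB / data amount,
    # with 1 TB taken as 1000 GB (e.g. Task D: 1000 / 200 = 5).
    # Larger data amount -> lower score (inverse relation).
    return 1000 / data_gb

def calculated_amount_score(task_type_score, data_gb):
    # Calculation-amount score = task-type score * data-volume score,
    # as in the worked examples.
    return task_type_score * data_volume_score(data_gb)

# Task A of the later example: 500 GB of training data.
score_a = calculated_amount_score(1, 500)   # 1 * (1000/500) = 2
```

Any other monotonically decreasing rule would preserve the inverse relation; the division form is simply what the worked examples use.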
It should be understood that the weight calculation parameters, the values of the weight calculation parameters, and the scores or calculation rules of the weight calculation parameters in the above table may be modified according to practical situations, which is not limited in this disclosure.
Step S320: calculating the scheduling weight of the first task according to the weight scores corresponding to the values of the weight calculation parameters of the first task.
After the weight scores corresponding to the parameters are determined, the scheduling weight of the first task can be obtained according to preset calculation logic. It should be appreciated that the calculation logic may be designed according to actual needs; embodiments of the present disclosure are not limited in this regard.
As a specific example, in combination with the weight score table shown in table 1, the preset calculation logic may be set as the following formula (1).
w = (p + q + m) * (c + t) + h    (1)
where w represents the scheduling weight of the first task, p the weight score corresponding to the function to which the task belongs, q the weight score corresponding to the task source, m the weight score corresponding to the task type, c the weight score corresponding to the calculation amount, h the weight score corresponding to the manual priority, and t the weight score corresponding to the queuing (waiting) time of the task. It should be noted that, since the first task is the newly submitted task, its value of t is 0.
This formula reflects the combined contribution of each weight calculation parameter to the scheduling weight of the first task: it considers both the task's inherent priority and the manual priority, jointly determining a reasonable scheduling weight. The larger the scheduling weight, the higher the task's queuing priority.
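A minimal sketch of formula (1) in code (Python is used purely for illustration; the parameter names follow the formula's symbols, and the example scores are those of task C from the worked example later in this text):

```python
def scheduling_weight(p, q, m, c, t, h):
    """Formula (1): w = (p + q + m) * (c + t) + h.

    p: attributed-function score, q: task-source score, m: task-type score,
    c: calculation-amount score, t: queuing-duration score, h: manual-priority score.
    """
    return (p + q + m) * (c + t) + h

# A newly submitted first task has t = 0, so only the other five scores contribute.
w_new = scheduling_weight(p=2, q=1, m=1, c=1, t=0, h=1000)  # (2+1+1)*(1+0)+1000
```

The manual-priority score h is added outside the product, so it can dominate the weight regardless of the task's other parameters, matching the "head of the queue" behavior described above.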
With continued reference to fig. 2, after receiving the newly added first task in step S210, the method not only calculates the scheduling weight of the newly added first task (i.e. step S220), but also updates the scheduling weight of the task existing in the queue. Specifically, the updating manner of the scheduling weight of the task existing in the queue may include step S230.
Step S230: updating the scheduling weight of the second task in the task queue according to the value of the second task's waiting duration and the weight score table.
Here, the second task existing in the queue may include one or more tasks. When there are a plurality of second tasks in the queue, S230 is performed for each second task. Further, as shown in table 1, the plurality of weight calculation parameters in the weight score table further includes a waiting time period.
It will be appreciated that, for each second task, from the time the most recently submitted second task was submitted up to the current time (i.e., the time the first task is submitted), the values of all its weight calculation parameters except the waiting duration have not changed. Moreover, by the time the most recent second task was submitted, every second task already had a corresponding scheduling weight: the scheduling weight of the most recent second task was calculated at its submission, and the scheduling weights of the other second tasks (if any) were updated at that time.
Based on this, when the newly added first task is received, the scheduling weights of the second tasks in the queue need not be recalculated from scratch; the current scheduling weight of each second task can be obtained simply by updating its existing scheduling weight according to its current waiting duration.
As a specific embodiment, as shown in fig. 4, step S230 may include the steps of:
Step S410: determining the submission time of the first task, and determining the waiting duration of the second task according to the submission time of the first task.
The value of the waiting duration of the second task is the time difference between the submission time of the first task and the submission time of the second task. For ease of understanding, as shown in fig. 6, assume the submission time of the second task is t2 and the submission time of the first task is t1; the waiting duration of the second task is Δt (in minutes), Δt = t1 - t2.
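The waiting-duration computation of step S410 can be sketched as follows (the timestamps reuse the submission times of tasks A and C from the later worked example; the function name is illustrative):

```python
from datetime import datetime

def waiting_minutes(t2_submit, t1_now):
    """Δt = t1 - t2, expressed in minutes, as in step S410."""
    return (t1_now - t2_submit).total_seconds() / 60

# Task A submitted at 18:00:00, first task (C) submitted at 18:10:00.
dt = waiting_minutes(datetime(2023, 7, 20, 18, 0, 0),
                     datetime(2023, 7, 20, 18, 10, 0))
```

Since the weight score table assigns 1 point per minute of waiting, this Δt can be used directly as the waiting-duration score.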
Step S420: calculating the updated scheduling weight of the second task according to the weight scores corresponding to the values of the second task's weight calculation parameters and the weight score corresponding to the value of its waiting duration.
Illustratively, in combination with equation (1) in the above example, the updated scheduling weight of any second task may be calculated according to equation (2) below.
w = w0 + (p + q + m) * Δt    (2)
where w0 is the scheduling weight calculated when the second task was submitted (i.e., when the second task was itself the newly added first task); combining with formula (1), and noting that t = 0 at submission, w0 = (p + q + m) * c + h.
With this approach, the updated scheduling weight of the second task can be obtained with little computation, avoiding a large number of repeated calculations and improving data processing efficiency.
It will be appreciated that, according to the weight score table shown in Table 1, t is assigned a score of 1 for every minute of waiting. Since Δt is in minutes, the value of Δt is directly the weight score corresponding to the waiting duration of the second task. The other parameters in formula (2) have the same meanings as in formula (1) and are not described again here.
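The equivalence between the incremental update of formula (2) and a full recomputation with formula (1) can be checked with a small sketch (the scores below are illustrative; any consistent values work):

```python
def weight_full(p, q, m, c, t, h):
    # Formula (1): w = (p + q + m) * (c + t) + h
    return (p + q + m) * (c + t) + h

def weight_incremental(w0, p, q, m, dt):
    # Formula (2): w = w0 + (p + q + m) * Δt
    return w0 + (p + q + m) * dt

p, q, m, c, h = 1, 1, 1, 2, 0
w0 = weight_full(p, q, m, c, t=0, h=h)        # weight at submission (t = 0)
w_updated = weight_incremental(w0, p, q, m, dt=10)

# The incremental update reproduces the full formula with t = Δt:
assert w_updated == weight_full(p, q, m, c, t=10, h=h)
```

Only the waiting-duration term changes between submission and the update, which is why adding (p + q + m) * Δt suffices and the other scores need not be re-queried.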
As another specific embodiment, as shown in fig. 5, step S230 may also include the following steps:
Step S510: determining the submission time of the first task, and determining the waiting duration of the second task according to the submission time of the first task.
The value of the waiting duration of the second task is the time difference between the submission time of the first task and the time when the scheduling weight of the second task was last updated. For ease of understanding, as shown in fig. 6, assume the scheduling weight of the second task was last updated at time t3 and the first task is submitted at time t1; the waiting duration of the second task is Δt' (in minutes), Δt' = t1 - t3.
Step S520: calculating the updated scheduling weight of the second task according to the last-updated scheduling weight of the second task and the weight score corresponding to the value of its waiting duration.
Illustratively, in combination with equation (1) in the above example, the updated scheduling weight of any second task may be calculated according to equation (3) below.
w = w1 + (p + q + m) * Δt'    (3)
where w1 is the scheduling weight of the second task after its last update; combining with formula (1), w1 = (p + q + m) * (c + t) + h, where t is the score corresponding to the waiting duration accumulated from the submission of the second task up to the time its scheduling weight was last updated (i.e., the submission time of the latest second task). Δt' is the weight score corresponding to the waiting duration of the second task since that update. The other parameters in formula (3) have the same meanings as in formula (1) and are not described again here.
With this approach, too, the updated scheduling weight of the second task can be obtained with little computation, avoiding a large number of repeated calculations and improving data processing efficiency.
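Formula (3) can be chained: each update adds only the waiting time accrued since the previous update, and successive increments accumulate to the same result as a full recomputation with formula (1). A sketch (illustrative scores):

```python
def weight_full(p, q, m, c, t, h):
    # Formula (1)
    return (p + q + m) * (c + t) + h

def update(w_prev, p, q, m, dt):
    # Formula (3): add only the newly accrued waiting-time contribution.
    return w_prev + (p + q + m) * dt

p, q, m, c, h = 1, 1, 1, 2, 0
w0 = weight_full(p, q, m, c, t=0, h=h)   # weight at submission
w1 = update(w0, p, q, m, dt=5)           # first update, 5 minutes later
w2 = update(w1, p, q, m, dt=5)           # second update, 5 more minutes later

# Two incremental updates equal one full recomputation with t = 10:
assert w2 == weight_full(p, q, m, c, t=10, h=h)
```

This is what distinguishes the fig. 5 embodiment from fig. 4: each update only needs the previous weight and the time since the last update, not the original submission time.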
It should be noted that steps S220 and S230 may be executed simultaneously, or step S220 may be executed first and then step S230, or vice versa.
Step S240: sorting the scheduling weight of the first task and the updated scheduling weights of the second tasks in descending order, and adding the first task into the task queue according to the sorting result, to obtain an updated task queue.
Specifically, if the original task queue is empty, then after the first task joins it, the updated task queue contains only the first task. If the original task queue is not empty, the second tasks are the one or more tasks already in the queue; after the first task is added, step S240 sorts the scheduling weight of the first task and the updated scheduling weights of the second tasks in descending order and inserts the first task into the task queue according to the sorting result. The larger the scheduling weight, the earlier the queuing position and the higher the priority.
That is, after the first task is added to the queue, the scheduling weight of the task arranged before the first task is greater than or equal to the scheduling weight of the first task, and the scheduling weight of the task arranged after the first task is less than or equal to the scheduling weight of the first task. For the case that the scheduling weights are equal, the ordering logic may be additionally set on the basis of the scheduling method provided in the embodiment of the present disclosure, which is not limited in the embodiment of the present disclosure.
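The descending-order insertion of step S240 can be sketched with Python's bisect module. The FIFO tie-break for equal weights shown here is one possible choice; as noted above, the disclosure leaves the tie-breaking logic open.

```python
import bisect

def enqueue(queue, task_id, weight):
    """Insert a task so the queue stays sorted by descending scheduling weight.

    `queue` is a list of (weight, task_id) pairs, largest weight first.
    bisect searches ascending sequences, so the negated weight is the key.
    bisect_right places an equal-weight task after existing ones (FIFO tie-break).
    """
    keys = [-w for w, _ in queue]
    idx = bisect.bisect_right(keys, -weight)
    queue.insert(idx, (weight, task_id))

# Weights taken from the worked example below (task D joining C, B, A):
q = [(1004, "C"), (90, "B"), (36, "A")]
enqueue(q, "D", 15)
```

Only the new task is inserted; the relative order of the existing tasks is untouched, consistent with the observation that their weights all grow by their own (p + q + m) * Δt and need no re-sorting against each other at this point.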
Step S250: executing the task at the head of the updated task queue.
The task at the head of the queue in the updated task queue may be the first task or the second task. Illustratively, the first-located tasks may be performed using the processor 150 in the dispatch system 100 shown in FIG. 1.
According to the above scheduling method for autonomous-driving data processing tasks, the submission of a newly added task triggers both the calculation of the new task's scheduling weight and the update of the existing tasks' scheduling weights. Hence the scheduling weights of all tasks in the queue need not be recalculated, nor the queue order redetermined, every time a scheduling action is executed, which saves a large amount of computing resources and effectively improves the processing efficiency of autonomous-driving data. In addition, scheduling weights are calculated by querying the weight score table, providing framework-style calculation logic that is easy to adjust: the calculation program need not be manually modified, which avoids a large number of tedious procedures and increases in labor cost.
Further, in an exemplary embodiment, as shown in fig. 7, step S250 may include the steps of:
step S710: and determining the task at the head of the queue in the updated task queue as a task to be processed, and determining a first resource amount required for processing the task to be processed.
It should be noted that, although the task to be processed has the highest priority in the current task queue, different tasks require different amounts of computing resources. If the task to be processed is executed directly, it may fail due to insufficient computing resources, or even cause serious problems such as a system hang or restart. Thus, as an embodiment, the scheduling method provided in the present disclosure calculates the current amount of free resources in the resource pool and compares it with the first resource amount, as in step S720:
step S720: and judging whether the resource pool currently has the first resource amount or not.
When the determination result is yes, that is, when the amount of currently free resources in the resource pool is not less than the first resource amount required by the task to be processed, the following step S730 may be performed.
Step S730: calling the corresponding resources from the resource pool and executing the task to be processed.
At this time, the processor may call the corresponding resource from the resource pool and execute the task to be processed.
On the contrary, when the determination result of step S720 is no, that is, when the amount of currently free resources in the resource pool is smaller than the first amount of resources required for the task to be processed, the following step S740 may be performed.
Step S740: temporarily not executing the task to be processed and waiting for resource release.
Optionally, a preset duration may be set while waiting for the resource pool to release sufficient resources. After the preset duration elapses, step S720 is executed again to re-determine whether the resource pool currently has the resource amount required by the task to be processed; if yes, step S730 is executed directly to process the task; if no, step S740 is executed to continue waiting for resource release. This cycle repeats until the task to be processed is processed normally.
It can be understood that the preset duration can be set according to actual conditions and experience. If the preset duration is too long, the amount of free resources in the resource pool cannot be checked in time, wasting resources and reducing task scheduling efficiency; if it is too short, the frequency of calculating the current free resource amount increases, and since that calculation itself consumes resources, this also causes waste.
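The fig. 7 polling loop (steps S710-S740) can be sketched as follows. The dictionary representation of a task and the `free_resources` callback are illustrative assumptions, not part of the disclosure.

```python
import time

def run_head_task(queue, free_resources, execute, poll_seconds=5):
    """Fig. 7 flow (sketch): keep the same head task and poll until the
    resource pool has at least the first resource amount it needs."""
    task = queue[0]                       # step S710: head of the updated queue
    needed = task["required_resources"]   # first resource amount
    while free_resources() < needed:      # step S720: enough free resources?
        time.sleep(poll_seconds)          # step S740: wait the preset duration
    execute(task)                         # step S730: call resources and run
```

Note the trade-off discussed above lives entirely in `poll_seconds`: a larger value checks less often but may leave resources idle, a smaller value checks promptly but spends resources on the checks themselves.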
In another exemplary embodiment, considering that new tasks may be added to the queue while waiting for resource release, so that the scheduling-weight ordering in the queue may change, the present disclosure further provides another implementation of step S250, as shown in fig. 8, to ensure that high-priority tasks are always executed first in this case.
As shown in fig. 8, while waiting for the resource pool to release sufficient resources, after the preset duration elapses, step S710 is executed again to re-determine the task at the head of the currently updated task queue as the task to be processed, and to determine the first resource amount required to process it. Step S720 is then performed to determine whether the first resource amount is currently available in the resource pool. When the determination result is yes, step S730 is executed to process the task to be processed; when the determination result is no, step S740 is executed to continue waiting for resource release, and the above steps are repeated until the task to be processed is processed normally.
It should be noted that, the difference between fig. 8 and fig. 7 is only that the steps executed in the loop in the process of waiting for the resource release are different, and other steps similar to those in fig. 7 are referred to the related description in fig. 7, and are not repeated here.
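The fig. 8 variant, which re-determines the head task after each wait, might look like the following sketch (the task and resource-pool representations are illustrative assumptions):

```python
import time

def run_with_redetermination(queue, free_resources, execute, poll_seconds=5):
    """Fig. 8 flow (sketch): after each wait, re-take the current head of the
    queue, since tasks added meanwhile may have changed the ordering."""
    while True:
        task = queue[0]                                      # step S710, again
        if free_resources() >= task["required_resources"]:   # step S720
            execute(task)                                    # step S730
            return
        time.sleep(poll_seconds)                             # step S740
```

The only structural difference from the fig. 7 flow is that `queue[0]` is read inside the loop rather than once before it, so a higher-priority task submitted during the wait is picked up on the next iteration.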
For ease of understanding, the following describes, by way of example, a scheduling method provided in the present disclosure:
Assume that at 18:10:00 on July 20, 2023, the first task newly added to the queue is task C, and the second tasks already in the task queue are task A and task B, where task A was submitted at 18:00:00 on July 20, 2023, and task B at 18:05:00 on July 20, 2023.
The values of the weight calculation parameters of task a and the corresponding scores are shown in the following table:
| Weight calculation parameter | Value of weight calculation parameter | Weight score |
| --- | --- | --- |
| Manual priority | Low | 0 |
| Data volume | 500 GB | 2 |
| Task type | Training | 1 |
| Calculation amount | N/A | 2 |
| Task source | Non-mass-production project | 1 |
| Attributed function | Sentinel (vehicle-infringement alarm) | 1 |
| Waiting duration | 10 minutes | 10 |
The weight score corresponding to task A's data volume is calculated as: 1 × (1 TB / data amount) = 1 × (1 TB / 500 GB) = 2, and the weight score corresponding to the calculation amount is: task-type score × data-volume score = 1 × 2 = 2.
The values of the weight calculation parameters of task B and their corresponding scores are shown in the following table:
The weight score corresponding to task B's data volume is calculated as: 1 × (1 TB / data amount) = 1 × (1 TB / 100 GB) = 10, and the weight score corresponding to the calculation amount is: task-type score × data-volume score = 2 × 10 = 20.
The values of the weight calculation parameters of task C and the corresponding scores are shown in the following table:
| Weight calculation parameter | Value of weight calculation parameter | Weight score |
| --- | --- | --- |
| Manual priority | High | 1000 |
| Data volume | 1 TB | 1 |
| Task type | Training | 1 |
| Calculation amount | N/A | 1 |
| Task source | Non-mass-production project | 1 |
| Attributed function | AEB (emergency braking) | 2 |
| Waiting duration | 0 minutes | 0 |
The weight score corresponding to task C's data volume is calculated as: 1 × (1 TB / data amount) = 1 × (1 TB / 1 TB) = 1, and the weight score corresponding to the calculation amount is: task-type score × data-volume score = 1 × 1 = 1.
According to the weight score table and the formula (1), the scheduling weight of the task A, B, C can be obtained through calculation as follows:
Therefore, after task C is newly added to the queue, the tasks are arranged in descending order of scheduling weight; the queuing order is C, B, A, with C at the head. Although task C arrived last, it is ranked at the head of the queue because of its highest manual priority. Since task B serves a mass-production project and its attributed function, ANP (automatic pilot), has the highest priority, task B's scheduling weight is greater than task A's, so B is ranked before A.
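The order of tasks C and A in this example can be checked by plugging their tabulated scores into formula (1); task B's full score table is not reproduced in this text, so it is omitted from the sketch.

```python
def w(p, q, m, c, t, h):
    # Formula (1)
    return (p + q + m) * (c + t) + h

# Task A: function=1, source=1, type=1, calculation=2, waiting=10, manual=0
w_A = w(p=1, q=1, m=1, c=2, t=10, h=0)    # (1+1+1)*(2+10)+0 = 36
# Task C: function=2, source=1, type=1, calculation=1, waiting=0, manual=1000
w_C = w(p=2, q=1, m=1, c=1, t=0, h=1000)  # (2+1+1)*(1+0)+1000 = 1004
```

Task C's manual-priority score alone (1000) exceeds anything the other parameters contribute here, which is why it heads the queue despite arriving last.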
Assume that at 18:20:00 on July 20, 2023, task D is newly added, so the newly added first task in the queue is task D, and the second tasks already in the task queue are tasks A, B, and C. The values of task D's weight calculation parameters and their corresponding scores are shown in the following table:
| Weight calculation parameter | Value of weight calculation parameter | Weight score |
| --- | --- | --- |
| Manual priority | Low | 0 |
| Data volume | 200 GB | 5 |
| Task type | Simulation | 4 |
| Calculation amount | N/A | 5 |
| Task source | Non-mass-production project | 1 |
| Attributed function | Sentinel (vehicle-infringement alarm) | 1 |
| Waiting duration | 0 minutes | 0 |
The weight score corresponding to task D's data volume is calculated as: 1 × (1 TB / data amount) = 1 × (1 TB / 200 GB) = 5, and the weight score corresponding to the calculation amount is: task-type score × data-volume score = 1 × 5 = 5.
According to the weight score table and the formula (2), after the task D is newly added, the scheduling weight of the task D and the updated scheduling weight of the task A, B, C are respectively shown in the following table:
When task D is submitted, its queuing time is 0, so it is queued after task A even though its task-type and data-volume scores are relatively high. Therefore, after task D is newly added to the queue, the tasks are arranged in descending order of scheduling weight; the queuing order is C, B, A, D, with C at the head.
Optionally, in a further embodiment, the scheduling method provided by the present disclosure may be made more flexible. Specifically, on the basis of the embodiment shown in fig. 2, fig. 9 is a schematic flow chart of a scheduling method according to another embodiment of the disclosure. It should be understood that fig. 9 differs from fig. 2 only by the addition of the following step:
S910: adding the newly added weight calculation parameters and the score calculation rules corresponding to the newly added weight calculation parameters into the weight score table, and/or changing the score calculation rules of any weight calculation parameters in the weight score table.
It should be noted that the execution order between step S910 and steps S210 to S250 in fig. 2 is not fixed, and is not limited to the order shown in fig. 9: for example, step S910 may be executed before step S210, or simultaneously with or interleaved with any of steps S210 to S250; the specific execution order may be adjusted appropriately according to actual computing resources and requirements.
Based on the scheduling method provided by this embodiment, the weight score table can be flexibly adjusted whenever the weighting strategy of any weight parameter needs to change, avoiding the tedious operation of manually modifying the calculation program. This effectively improves the efficiency of modifying calculation rules, reduces workload, and restrains increases in labor cost while improving data processing efficiency.
In summary, according to the scheduling method provided by the present disclosure, the submission of a newly added task triggers both the calculation of the new task's scheduling weight and the update of the existing tasks' scheduling weights, so the scheduling weights of all tasks in the queue need not be recalculated, nor the queue order redetermined, every time a scheduling action is performed. This saves a large amount of computing resources and effectively improves the processing efficiency of autonomous-driving data. In addition, scheduling weights are calculated by querying the weight score table, providing framework-style calculation logic that is easy to adjust: the calculation program need not be manually modified, which avoids a large number of tedious procedures and increases in labor cost.
Fig. 10 is a schematic structural diagram of a scheduling apparatus 1000 according to an embodiment of the disclosure. The scheduling apparatus 1000 may be provided in the priority determiner 140 in the scheduling system 100 shown in fig. 1, for example.
As shown in fig. 10, the scheduler 1000 may include a receiving module 1010, a computing module 1020, a sorting module 1030, and an executing module 1040.
The receiving module 1010 is configured to receive the newly added first task and determine a value of at least one weight calculation parameter of the first task; the calculation module 1020 is configured to calculate a scheduling weight of the first task according to a value of at least one weight calculation parameter of the first task and a weight score table, and update the scheduling weight of the second task according to a value of a waiting time length of an existing second task in the task queue and the weight score table, where the weight score table specifies a plurality of weight calculation parameters and a score calculation rule of each weight calculation parameter, and the plurality of weight calculation parameters include at least one weight calculation parameter and the waiting time length; the sorting module 1030 is configured to sort the scheduling weight of the first task and the updated scheduling weight of the second task in order from big to small, and add the first task to the task queue according to the sorting result, so as to obtain an updated task queue; the execution module 1040 is configured to execute a task located at the head of the queue in the updated task queue.
In some embodiments, the computing module 1020 may be specifically configured to: inquiring a score value calculation rule of at least one weight calculation parameter in a weight score value table to determine a weight score value corresponding to the value of each weight calculation parameter of the first task; and calculating the scheduling weight of the first task according to the weight scores corresponding to the values of the weight calculation parameters of the first task.
Optionally, in some embodiments, the at least one weight calculation parameter may include at least one of a manual priority, a data amount, a calculation amount, a task type, a task source, and an attributed function. The value of the manual priority is positively correlated with the corresponding weight score; the value of the data amount is inversely related to the corresponding weight score; and the value of the calculation amount is inversely related to the corresponding weight score.
In some embodiments, the computing module 1020 may be specifically configured to: determining the submitting time of a first task, and determining the waiting time of a second task according to the submitting time of the first task, wherein the value of the waiting time of the second task is the time difference between the submitting time of the first task and the submitting time of the second task; and calculating updated scheduling weight of the second task according to the weight score corresponding to the value of at least one weight calculation parameter of the second task and the weight score corresponding to the value of the waiting time length of the second task.
In other embodiments, the computing module 1020 may be specifically configured to: determining the submitting time of a first task, and determining the waiting time of a second task according to the submitting time of the first task, wherein the waiting time of the second task is the time difference between the submitting time of the first task and the last updated time of the scheduling weight of the second task; and calculating to obtain updated scheduling weights of the second task according to the scheduling weights of the second task updated last time and the weight scores corresponding to the waiting time values of the second task.
In some embodiments, the scheduling apparatus 1000 may further include a modification module, wherein the modification module may be configured to: adding the newly added weight calculation parameters and the score calculation rules corresponding to the newly added weight calculation parameters into the weight score table, and/or changing the score calculation rules of any weight calculation parameters in the weight score table.
In some embodiments, the execution module 1040 may be specifically configured to: determining a task at the head of a queue in the updated task queue as a task to be processed, and determining a first resource amount required for processing the task to be processed; judging whether the resource pool currently has a first resource amount or not; when the judgment result is yes, corresponding resources are called from the resource pool, and the task to be processed is executed; and if the judgment result is negative, the task to be processed is not executed temporarily, and the resource release is waited.
In some embodiments, when the determination result is no, the execution module 1040 may temporarily not execute the task to be processed, wait for the release of the resource, and execute the step of determining whether the resource pool currently has the first resource amount again after waiting for the preset duration. If the determination result is no, the execution module 1040 may determine again after waiting for the preset time period, until the determination result is yes, call the corresponding resource from the resource pool, and execute the task to be processed.
In other embodiments, when the determination is negative, the execution module 1040 may temporarily not execute the task to be processed and wait for the release of the resource. After waiting for the preset duration, the execution module 1040 may perform the following operations: re-determining that a task at the head of a queue in the current updated task queue is an updated task to be processed, and determining an updated first resource amount required for processing the updated task to be processed; judging whether the resource pool currently has the updated first resource amount or not; when the judgment result is yes, corresponding resources are called from the resource pool, and the updated task to be processed is executed; and if the judgment result is negative, temporarily not executing the updated task to be processed, returning to execute the step of re-determining the task at the head of the queue in the current updated task queue as the updated task to be processed after waiting for the preset time period, and determining the updated first resource amount required by processing the updated task to be processed.
It should be understood that, for specific operation procedures and functions of the receiving module 1010, the calculating module 1020, the sorting module 1030, the executing module 1040 and the like in the scheduling apparatus 1000, reference may be made to the descriptions in the scheduling method and system provided in any embodiment of fig. 1 to fig. 9, and based on the operations and functions of these modules, the scheduling apparatus 1000 can implement the corresponding technical effects implemented by the scheduling method and system provided in any embodiment of fig. 1 to fig. 9, which are not repeated herein.
Fig. 11 is a schematic diagram of an electronic device according to an embodiment of the disclosure. As shown in fig. 11, the electronic device 1100 may include a memory 1110 and a processor 1120, wherein the memory 1110 is used to store a computer program; the processor 1120 is configured to execute the computer program to implement the scheduling method or steps provided by any of the embodiments of fig. 1-9.
The disclosed embodiments also provide a computer readable storage medium having a computer program stored thereon, which when executed by a processor, causes the processor to implement the scheduling method provided in any of the foregoing embodiments.
The disclosed embodiments also provide a computer program product comprising instructions that, when executed by a processor, cause the processor to implement the scheduling method provided by any of the foregoing embodiments.
The scheduling methods in the present disclosure may be implemented in whole or in part by software, hardware, firmware, or any combination thereof. When implemented in software, they may be implemented in whole or in part in the form of a computer program product. The computer program product includes one or more computer programs or instructions. When the computer program or instructions are loaded and executed on a computer, the processes or functions described in this disclosure are performed in whole or in part. The computer may be a general-purpose computer, a special-purpose computer, a computer network, a network device, a user device, a core network device, an OAM system, or another programmable apparatus.
The computer program or instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another; for example, the computer program or instructions may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center by wired or wireless means. The computer-readable storage medium may be any available medium that can be accessed by a computer, or a data storage device such as a server or data center that integrates one or more available media. The usable medium may be a magnetic medium (e.g., a floppy disk, hard disk, or tape), an optical medium (e.g., a digital video disc), or a semiconductor medium (e.g., a solid state disk). The computer-readable storage medium may be a volatile or nonvolatile storage medium, or may include both volatile and nonvolatile types of storage media.
It will be appreciated that the specific examples in the present disclosure are provided solely to help those skilled in the art understand the disclosed embodiments and do not limit the scope of the disclosure.
It will be understood that, in various embodiments of the disclosure, the sequence number of each process does not mean the order of execution, and the order of execution of each process should be determined by its functions and internal logic, and should not constitute any limitation on the implementation of the embodiments of the disclosure.
It is to be understood that the various embodiments described in this disclosure may be implemented either alone or in combination, and that the disclosed embodiments are not limited in this regard.
Unless defined otherwise, all technical and scientific terms used in the embodiments of the present disclosure have the same meaning as commonly understood by one of ordinary skill in the art to which the present disclosure pertains. The terminology used in the present disclosure is for the purpose of describing particular embodiments only and is not intended to limit the scope of the present disclosure. The term "and/or" as used in this disclosure includes any and all combinations of one or more of the associated listed items. As used in this disclosure and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise.
It will be appreciated that the processor of embodiments of the present disclosure may be an integrated circuit chip having signal processing capabilities. In implementation, the steps of the above method embodiments may be completed by integrated logic circuits of hardware in the processor or by instructions in the form of software. The processor may be a general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or discrete hardware components. The various methods, steps, and logic blocks disclosed in the embodiments of the present disclosure may be implemented or performed by such a processor. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor. The steps of the method disclosed in connection with the embodiments of the present disclosure may be embodied as being executed directly by a hardware decoding processor, or by a combination of hardware and software modules in a decoding processor. The software module may be located in a storage medium well known in the art, such as a random access memory, a flash memory, a read-only memory, a programmable read-only memory or electrically erasable programmable memory, or a register. The storage medium is located in a memory, and the processor reads the information in the memory and completes the steps of the above method in combination with its hardware.
It will be appreciated that the memory in embodiments of the disclosure may be either volatile memory or nonvolatile memory, or may include both volatile and nonvolatile memory. The nonvolatile memory may be a read-only memory (ROM), a Programmable ROM (PROM), an Erasable Programmable ROM (EPROM), an Electrically Erasable Programmable ROM (EEPROM), or a flash memory, among others. The volatile memory may be Random Access Memory (RAM). It should be noted that the memory of the systems and methods described herein is intended to comprise, without being limited to, these and any other suitable types of memory.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present disclosure.
For example, the apparatus embodiments described above are merely illustrative, e.g., the division of the units is merely a logical function division, and there may be additional divisions when actually implemented, e.g., multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed with each other may be an indirect coupling or communication connection via some interfaces, devices or units, which may be in electrical, mechanical or other form.
The units described as separate units may or may not be physically separate, and units shown as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in each embodiment of the present disclosure may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on such an understanding, the technical solutions of the present disclosure, in essence, or the part contributing to the prior art, or a part of the technical solutions, may be embodied in the form of a software product stored in a storage medium, including several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or part of the steps of the methods described in the embodiments of the present disclosure. The aforementioned storage medium includes any medium that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
The foregoing is merely a specific implementation of the present disclosure, and the protection scope of the present disclosure is not limited thereto. Any variation or substitution readily conceivable by a person skilled in the art within the technical scope disclosed herein shall fall within the protection scope of the present disclosure. Therefore, the protection scope shall be subject to the protection scope of the claims.
Claims (11)
1. A scheduling method for processing tasks of automatic driving data, comprising:
receiving a newly added first task, and determining a value of at least one weight calculation parameter of the first task;
calculating a scheduling weight of the first task according to the value of the at least one weight calculation parameter of the first task and a weight score table, and updating a scheduling weight of a second task existing in the task queue according to the value of the waiting time length of the second task and the weight score table, wherein the weight score table prescribes a plurality of weight calculation parameters and a score calculation rule for each weight calculation parameter, and the plurality of weight calculation parameters comprise the at least one weight calculation parameter and the waiting time length;
sorting the scheduling weight of the first task and the updated scheduling weight of the second task in descending order, and adding the first task into the task queue according to the sorting result to obtain an updated task queue;
and executing the task at the head of the queue in the updated task queue.
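The four claimed steps can be sketched in Python. The concrete score rules and field names below are illustrative assumptions, not the patent's actual weight score table.

```python
# Hypothetical weight score table: each rule maps a parameter value to a score.
WEIGHT_SCORE_TABLE = {
    "manual_priority": lambda v: 10 * v,           # higher priority -> higher score
    "data_amount":     lambda v: max(0, 100 - v),  # larger data -> lower score
    "wait_duration":   lambda v: 2 * v,            # longer wait -> higher score
}

def scheduling_weight(task):
    """Sum the weight scores of every parameter the task carries."""
    return sum(rule(task[name]) for name, rule in WEIGHT_SCORE_TABLE.items()
               if name in task)

def add_task(queue, first_task):
    """Refresh the waiting weights of queued (second) tasks, then insert the
    newly added (first) task keeping the queue sorted by weight, descending."""
    for t in queue:  # update each second task's waiting time length
        t["wait_duration"] = first_task["submit_time"] - t["submit_time"]
    queue.append(first_task)
    queue.sort(key=scheduling_weight, reverse=True)
    return queue
```

The head of the resulting list is then the task that would be executed in the final claimed step.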
2. The scheduling method of claim 1, wherein,
the calculating the scheduling weight of the first task according to the value of the at least one weight calculation parameter of the first task and the weight score table comprises:
querying the score calculation rule of the at least one weight calculation parameter in the weight score table to determine the weight score corresponding to the value of each weight calculation parameter of the first task;
and calculating the scheduling weight of the first task according to the weight scores corresponding to the values of the weight calculation parameters of the first task.
3. The scheduling method of claim 1, wherein,
the at least one weight calculation parameter includes at least one of a human priority, an amount of data, an amount of calculation, a task type, a task source, and an attributed function, wherein,
the value of the manual priority is positively correlated with the corresponding weight score;
the value of the data quantity is inversely related to the corresponding weight score;
the value of the calculated quantity is inversely related to the corresponding weight score.
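The correlations claimed above can be expressed as monotone score rules. The specific formulas below are illustrative assumptions chosen only to satisfy the claimed directions of correlation.

```python
# Illustrative score rules matching the claimed correlations:
score_rules = {
    "manual_priority": lambda v: 10 * v,         # positively correlated
    "data_amount":     lambda v: 100 / (1 + v),  # inversely correlated
    "computation":     lambda v: 100 / (1 + v),  # inversely correlated
}
```

Any monotone increasing rule for priority and monotone decreasing rules for data amount and computation amount would satisfy the claim equally well.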
4. A scheduling method according to any one of claims 1 to 3, characterized in that,
the updating the scheduling weight of the second task according to the value of the waiting time length of the second task existing in the task queue and the weight score table comprises the following steps:
determining the submitting time of the first task, and determining the waiting time of the second task according to the submitting time of the first task, wherein the value of the waiting time of the second task is the time difference between the submitting time of the first task and the submitting time of the second task;
and calculating updated scheduling weight of the second task according to the weight score corresponding to the value of the at least one weight calculation parameter of the second task and the weight score corresponding to the value of the waiting time of the second task.
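Claim 4's update recomputes the second task's weight from scratch: every scored parameter is re-evaluated and a wait score, measured against the first task's submit time, is added. A sketch, with field and rule names assumed for illustration:

```python
def updated_weight_full(second_task, first_submit_time, rules):
    """Recompute the weight from every scored parameter plus the wait score,
    where waiting time = first task's submit time - second task's submit time."""
    wait = first_submit_time - second_task["submit_time"]
    base = sum(rules[p](second_task[p]) for p in second_task if p in rules)
    return base + rules["wait_duration"](wait)
```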
5. A scheduling method according to any one of claims 1 to 3, characterized in that,
the updating the scheduling weight of the second task according to the value of the waiting time length of the second task existing in the task queue and the weight score table comprises the following steps:
determining the submitting time of the first task, and determining the waiting time of the second task according to the submitting time of the first task, wherein the waiting time of the second task is the time difference between the submitting time of the first task and the time when the scheduling weight of the second task is updated last time;
and calculating the updated scheduling weight of the second task according to the scheduling weight updated last time of the second task and the weight score corresponding to the value of the waiting time of the second task.
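In contrast to claim 4, claim 5 avoids re-scoring every parameter: only the wait score accrued since the last update is added on top of the previously stored weight. A sketch under the same assumed field names:

```python
def updated_weight_incremental(second_task, first_submit_time, wait_rule):
    """Add only the wait score for the time elapsed since the weight was
    last updated, instead of recomputing every parameter."""
    elapsed = first_submit_time - second_task["last_update_time"]
    second_task["weight"] += wait_rule(elapsed)
    second_task["last_update_time"] = first_submit_time
    return second_task["weight"]
```

This incremental form trades the full recomputation of claim 4 for O(1) work per queued task on each update.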
6. A scheduling method according to any one of claims 1-3, further comprising:
adding a newly added weight calculation parameter and a score calculation rule corresponding to the newly added weight calculation parameter into the weight score table, and/or,
and changing the score calculation rule of any weight calculation parameter in the weight score table.
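If the weight score table is modelled as a mapping from parameter names to score rules, the extension and modification of claim 6 reduce to ordinary mutations. The parameter names and rules here are assumptions for illustration:

```python
table = {"manual_priority": lambda v: 10 * v}
table["task_source"] = lambda v: 5 if v == "production" else 0  # add a new parameter
table["manual_priority"] = lambda v: 20 * v                     # change an existing rule
```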
7. A scheduling method according to any one of claims 1-3, wherein said executing the task at the head of the queue in the updated task queue comprises:
determining a task at the head of a queue in the updated task queue as a task to be processed, and determining a first resource amount required for processing the task to be processed;
judging whether the first resource amount is currently provided in the resource pool;
when the judgment result is yes, corresponding resources are called from the resource pool, and the task to be processed is executed;
and when the judgment result is no, temporarily not executing the task to be processed and waiting for resources to be released.
8. The scheduling method according to claim 7, wherein, when the judgment result is no, the temporarily not executing the task to be processed and waiting for resources to be released comprises:
after waiting for a preset duration, executing again the step of judging whether the resource pool currently has the first resource amount;
if the judgment result is still no, waiting for the preset duration and judging again, until the judgment result is yes;
and calling corresponding resources from the resource pool and executing the task to be processed.
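Claim 8 keeps polling on behalf of the same pending task, in contrast to claim 9, which re-determines the head of the queue each round. A minimal sketch; the dict-based pool and the `required_resources`/`run` fields are assumptions:

```python
import time

def wait_and_execute(task, pool, preset=0.5):
    """Poll the pool at a preset interval until the first resource amount is
    available, then call the resources and execute the same pending task."""
    while pool["available"] < task["required_resources"]:
        time.sleep(preset)   # wait the preset duration, then judge again
    pool["available"] -= task["required_resources"]
    task["run"]()
    pool["available"] += task["required_resources"]  # release after completion
```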
9. The scheduling method according to claim 7, wherein, when the judgment result is no, the temporarily not executing the task to be processed and waiting for resources to be released comprises:
after waiting for a preset duration, re-determining the task at the head of the queue in the current updated task queue as an updated task to be processed, and determining an updated first resource amount required for processing the updated task to be processed;
judging whether the updated first resource amount is currently provided in the resource pool;
when the judgment result is yes, corresponding resources are called from the resource pool, and the updated task to be processed is executed;
and when the judgment result is no, temporarily not executing the updated task to be processed, and after waiting for the preset duration, returning to the step of re-determining the task at the head of the queue in the current updated task queue as the updated task to be processed and determining the updated first resource amount required for processing the updated task to be processed.
10. An electronic device, comprising:
a memory for storing a computer program;
a processor for executing the computer program to implement the scheduling method of any one of claims 1-9.
11. A computer-readable storage medium having a computer program stored thereon, wherein the computer program, when executed by a processor, causes the processor to implement the scheduling method of any one of claims 1-9.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202311330660.1A CN117271096A (en) | 2023-10-13 | 2023-10-13 | Scheduling method, electronic device, and computer-readable storage medium |
Publications (1)
Publication Number | Publication Date |
---|---|
CN117271096A true CN117271096A (en) | 2023-12-22 |
Family
ID=89200787
Cited By (2)

Publication number | Priority date | Publication date | Assignee | Title
---|---|---|---|---
CN117785487A * | 2024-02-27 | 2024-03-29 | 融科联创(天津)信息技术有限公司 | Method, device, equipment and medium for scheduling computing power resources
CN117785487B * | 2024-02-27 | 2024-05-24 | 融科联创(天津)信息技术有限公司 | Method, device, equipment and medium for scheduling computing power resources
Legal Events

Date | Code | Title | Description
---|---|---|---
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |