CN116661977A - Task management method, device, computing equipment and storage medium - Google Patents
- Publication number
- CN116661977A (application number CN202310929594.3A)
- Authority
- CN
- China
- Legal status
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
- G06F9/48—Program initiating; Program switching, e.g. by interrupt
- G06F9/4806—Task transfer initiation or dispatching
- G06F9/4843—Task transfer initiation or dispatching by program, e.g. task dispatcher, supervisor, operating system
- G06F9/4881—Scheduling strategies for dispatcher, e.g. round robin, multi-level priority queues
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F2209/00—Indexing scheme relating to G06F9/00
- G06F2209/48—Indexing scheme relating to G06F9/48
- G06F2209/484—Precedence
Abstract
The present disclosure relates to a task management method, apparatus, computing device, and storage medium. The task management method includes: acquiring an execution duration and outcome information of a task, the outcome information including an outcome level and an outcome requirement time; determining priority information of the task based on the execution duration and the outcome information, the priority information including a priority corresponding to the outcome level and a priority requirement time of that priority, where the priority requirement time is determined based on the execution duration and the outcome requirement time of the outcome level corresponding to the priority; determining a current priority of each of a plurality of tasks in a queue to be executed based on a current time and the priority information of each of the plurality of tasks; and selecting, from the plurality of tasks and based on their current priorities, the task with the highest current priority as the task to be executed currently.
Description
Technical Field
The present disclosure relates to the field of big data, and more particularly, to a task management method, a task management apparatus, a computing device for managing tasks, and a non-transitory storage medium.
Background
In big data (e.g., cloud service) scenarios, there are typically thousands, tens of thousands, or even more tasks under operation and maintenance management. To ensure that business data is produced normally, this huge number of tasks must be scheduled reasonably. Two scheduling strategies are commonly used. The first is random task scheduling. When the number of tasks is small, this strategy can meet the scheduling requirement. However, because tasks are scheduled randomly, important tasks may go unscheduled while less important tasks occupy a large amount of resources, so the output of important business data cannot be guaranteed preferentially. The second is priority-based scheduling. This strategy requires staff to set the priority of each task in advance according to its importance. When resources conflict, important tasks can be scheduled preferentially according to the preset priorities.
Disclosure of Invention
According to a first aspect of the present disclosure, there is provided a task management method, including: acquiring an execution duration and outcome information of a task, the outcome information including an outcome level and an outcome requirement time; determining priority information of the task based on the execution duration and the outcome information, the priority information including a priority corresponding to the outcome level and a priority requirement time of that priority, where the priority requirement time is determined based on the execution duration and the outcome requirement time of the outcome level corresponding to the priority; determining a current priority of each of a plurality of tasks in a queue to be executed based on a current time and the priority information of each of the plurality of tasks; and selecting, from the plurality of tasks and based on their current priorities, the task with the highest current priority as the task to be executed currently.
In some embodiments, the method further includes: when a task having outcome information has a downstream task that also has outcome information, determining, as the outcome requirement time of an outcome level of the task, the earlier of the outcome requirement time of that outcome level of the task and the outcome requirement time of the same outcome level of the downstream task. In some embodiments, the method further includes: when a task having outcome information has a downstream task that also has outcome information, determining, as the outcome requirement time of an outcome level of the task, the earlier of the outcome requirement time of that outcome level of the task and the outcome requirement time of the same outcome level of the downstream task advanced by the execution duration of the downstream task. In some embodiments, the method further includes: when a task without outcome information has a downstream task that has outcome information, assigning the outcome information of the downstream task to the task and advancing the outcome requirement time of each outcome level of the task by the execution duration of the downstream task.
In some embodiments, the priority requirement time of the priority is further determined based on at least one of an advance margin of the task and a schedule preparation time.
In some embodiments, determining the priority information of the task further includes: when a task has a downstream task, determining, as the priority requirement time of a priority of the task, the earlier of the priority requirement time of that priority of the task and the priority requirement time of the same priority of the downstream task. In some embodiments, determining the priority information of the task further includes: when a task has an earliest start execution time, determining an additional priority of the task, which is lower than the other priorities of the task, and an additional priority requirement time, which is the earliest start execution time. In some embodiments, determining the priority information of the task further includes: determining the priority of a task without outcome information to be lower than the priority of a task with outcome information.
In some embodiments, the method further includes: when there are multiple tasks with the highest current priority, selecting, from among them, the task whose current priority has the earliest priority requirement time as the task to be executed currently. In some embodiments, the method further includes: when there are multiple tasks with the highest current priority, selecting, from among them, the task whose outcome level corresponding to the current priority has the earliest outcome requirement time as the task to be executed currently.
In some embodiments, determining the current priority of each task includes: if the current time is later than a first priority requirement time of the task and not later than a second priority requirement time of the task that is adjacent to and later than the first priority requirement time, determining the current priority of the task to be the priority corresponding to the second priority requirement time. In some embodiments, determining the current priority of each task includes: if the current time is not later than the earliest priority requirement time of the task, determining the current priority of the task to be the priority corresponding to the earliest priority requirement time. In some embodiments, determining the current priority of each task includes: if the current time is later than the latest priority requirement time of the task, determining the current priority of the task to be higher than the priority corresponding to the latest priority requirement time.
According to a second aspect of the present disclosure, there is provided a task management device including an acquisition unit, a determination unit, and a scheduling unit. The acquisition unit is configured to acquire an execution duration and outcome information of a task, the outcome information including an outcome level and an outcome requirement time. The determination unit is configured to determine priority information of the task based on the execution duration and the outcome information, the priority information including a priority corresponding to the outcome level and its priority requirement time, where the priority requirement time is determined based on the execution duration and the outcome requirement time of the outcome level corresponding to the priority. The scheduling unit is configured to: determine a current priority of each of a plurality of tasks in a queue to be executed based on a current time and the priority information of each of the plurality of tasks; and select, from the plurality of tasks and based on their current priorities, the task with the highest current priority as the task to be executed currently.
According to a third aspect of the present disclosure, there is provided a computing device for managing tasks, comprising: one or more processors; and a memory storing computer-executable instructions that, when executed by the one or more processors, cause the one or more processors to perform the task management method according to the first aspect of the present disclosure.
According to a fourth aspect of the present disclosure, there is provided a non-transitory storage medium having stored thereon computer-executable instructions which, when executed by a computer, cause the computer to perform the task management method according to the first aspect of the present disclosure.
Other features of the present disclosure and its advantages will become more apparent from the following detailed description of exemplary embodiments of the disclosure, which proceeds with reference to the accompanying drawings.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments of the disclosure and together with the description, serve to explain the principles of the disclosure.
The disclosure may be more clearly understood from the following detailed description taken in conjunction with the accompanying drawings in which:
FIG. 1 is a flow chart illustrating a task management method according to an embodiment of the present disclosure;
FIGS. 2-5 depict non-limiting example processes for determining priority information for tasks according to embodiments of the present disclosure;
FIG. 6 is a schematic block diagram illustrating a task management device according to an embodiment of the present disclosure;
FIG. 7 is a schematic block diagram illustrating a computing device for managing tasks in accordance with an embodiment of the present disclosure;
FIG. 8 is a schematic block diagram illustrating a computer system upon which embodiments of the present disclosure may be implemented.
Note that in the embodiments described below, the same reference numerals are used in common between different drawings to denote the same parts or parts having the same functions, and a repetitive description thereof may be omitted. In this specification, like reference numerals and letters are used to designate like items, and thus once an item is defined in one drawing, no further discussion thereof is necessary in subsequent drawings.
For ease of understanding, the positions, dimensions, ranges, etc. of the respective structures shown in the drawings and the like may not represent actual positions, dimensions, ranges, etc. Accordingly, the disclosed invention is not limited to the disclosed positions, dimensions, ranges, etc. as illustrated in the drawings. Moreover, the figures are not necessarily to scale, some features may be exaggerated to show details of particular components.
Detailed Description
Various exemplary embodiments of the present disclosure will now be described in detail with reference to the accompanying drawings. It should be noted that the relative arrangement of the components and steps, numerical expressions and numerical values set forth in these embodiments do not limit the scope of the present disclosure unless it is specifically stated otherwise.
The following description of at least one exemplary embodiment is merely illustrative in nature and is in no way intended to limit the disclosure, its application, or uses. Those skilled in the art will appreciate that they are merely illustrative of exemplary ways in which the present disclosure may be practiced, and not exhaustive.
Techniques, methods, and apparatus known to one of ordinary skill in the relevant art may not be discussed in detail, but are intended to be part of the specification where appropriate.
In a real business scenario, a business party may sign an agreement with a data provider requiring the data provider to deliver certain data by an agreed time. Under the agreement, the data provider must complete the task that produces the data before the agreed time; otherwise it bears the corresponding consequences. In other words, the agreement specifies an output-time requirement for the task. Such an agreed time is referred to herein as an outcome requirement time, and the severity of the consequence incurred when the task is not completed before the outcome requirement time is represented as an outcome level, a higher outcome level representing a more serious consequence. Generally, the higher the outcome level, the later its outcome requirement time, although other agreement scenarios are not excluded.
The same task produced at different times incurs different consequences. For example, assume a task Q with an execution duration of 1 millisecond (ms): the business party requires task Q to produce its data before 10 am; if task Q produces between 10 am and 11 am, the data provider must pay a first compensation; if task Q produces after 11 am, the data provider must pay a second compensation (typically much higher than the first). However, neither the common random scheduling policy nor the common priority-based scheduling policy takes this into account. Under random scheduling, a task with more serious consequences is not prioritized over other tasks. Under priority-based scheduling, each task is assigned a single priority according to its importance before scheduling starts, and that priority remains unchanged during scheduling. Such a policy may let a task whose outcome level is higher (and which is therefore considered more important and given a higher priority) but whose outcome requirement time is later squeeze out the resources of a task whose outcome level is lower (and which is therefore considered less important and given a lower priority) but whose outcome requirement time is earlier. For example, suppose that for a task U with an execution duration of 1 hour (h), the business party wants output before 20:00 (the outcome requirement time), otherwise the data provider must pay compensation (a high outcome level); and for a task V with an execution duration of 30 minutes (min), the business party wants output before 10 am (the outcome requirement time), otherwise the business party complains to the data provider (a medium outcome level). Under a common priority-based scheduling policy, task U is assigned a higher priority than task V because the consequence of task U is more serious. If the current time is 9:20 am and both task U and task V are ready, task U will be executed first, so task V cannot be produced on schedule and the business party complains. If task V were executed before task U, both tasks could be produced as expected, but this violates the usual priority-based scheduling policy.
In addition, in a multitasking scenario there are often dependencies between tasks. For example, if task a requires the output of task b as input, i.e., task a depends on the output of task b, then task a can be regarded as a downstream task of task b and task b as an upstream task of task a. Task a may have a high priority because it is important, while task b may have a low priority because it is unimportant. As a result, task b may be postponed when competing for resources because of its lower priority, which in turn prevents downstream task a (despite its higher priority) from entering the competition queue earlier because upstream task b is not yet complete, and may even cause task a to start so late that its requirements are violated.
In view of one or more of the above, the present disclosure proposes a task management method that sets a time-varying priority for a task according to the task's outcome level and outcome requirement time, and that transfers priority, directly or indirectly, from downstream to upstream along task dependencies, so that task scheduling is more reasonable when resources are limited. A task management method 100 according to an embodiment of the present disclosure is described in detail below in conjunction with fig. 1.
As shown in fig. 1, the method 100 includes: at step S102, execution duration and result information of the task are acquired.
The execution duration of a task refers to the length of time required from the start of the task to its completion. In some examples, the execution duration of a task may be determined from its historical execution durations, for example (but not limited to) by taking the average or the maximum historical execution duration as the execution duration. In other examples, an intelligent model such as a neural network model may be built to predict the execution duration, and the predicted value is taken as the execution duration of the task.
The outcome information of a task includes an outcome level and an outcome requirement time, which may be determined based on the output-time requirements set for the task. The outcome information of a task may include one or more different outcome levels, each with a different outcome requirement time. A task for which no output-time requirement is set has no outcome information.
For example, denoting the execution duration by L, the outcome level by R, and the outcome requirement time by T, a task may be represented as Task = {L, [(R_1, T_1), (R_2, T_2), …, (R_n, T_n)]}, where n is a non-negative integer. In general, the higher the outcome level, the later the outcome requirement time, so that R_i > R_{i-1} and T_i > T_{i-1} for i = 2, 3, …, n. Assume there is a task c with an execution duration of 30 min that is required to output data before 9 am; the outcome level is low if task c outputs between 9 am and 11 am, medium if it outputs between 11 am and 13:00, and high if it outputs after 13:00. Task c may then be represented as Task(c) = {30min, [(low, 9:00), (medium, 11:00), (high, 13:00)]}. In particular, when n is zero, the task has no outcome information; that is, no output-time requirement is set for the task and it may be produced at any time, so a task without outcome information can be represented as Task = {L}.
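For illustration only, this task representation might be sketched in code as follows; the class, helper, and constant names here are assumptions of the sketch, not part of the disclosure.

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta
from typing import List, Tuple

# (outcome level, outcome requirement time); a higher level means a more
# serious consequence if the task has not been produced by that time.
Outcome = Tuple[int, datetime]

@dataclass
class Task:
    name: str
    duration: timedelta                 # execution duration L
    outcomes: List[Outcome] = field(default_factory=list)  # [(R_1, T_1), ...]; empty if none

def t(hhmm: str) -> datetime:
    """Illustrative helper: 'H:MM' on an arbitrary reference day."""
    hour, minute = map(int, hhmm.split(":"))
    return datetime(2023, 1, 1, hour, minute)

LOW, MEDIUM, HIGH = 1, 2, 3             # illustrative outcome levels

# Task c from the text: 30 min execution; outcome level low after 9:00,
# medium after 11:00, high after 13:00.
task_c = Task("c", timedelta(minutes=30),
              [(LOW, t("9:00")), (MEDIUM, t("11:00")), (HIGH, t("13:00"))])
```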
In addition, when the task has a dependency relationship with other tasks, the result information of the task with the dependency relationship can be further considered to update the result information of the task.
In some embodiments, when a task having outcome information has a downstream task that also has outcome information, the earlier of the outcome requirement time of an outcome level of the task and the outcome requirement time of the same outcome level of the downstream task may be determined as the outcome requirement time of that outcome level of the task. For example, assuming task c is Task(c) = {30min, [(low, 9:00), (medium, 11:00), (high, 13:00)]} and its downstream task d is Task(d) = {10min, [(medium, 10:40), (high, 11:30)]}, the outcome information of task c may be updated, based on the outcome information of downstream task d, to Task(c) = {30min, [(low, 9:00), (medium, 10:40), (high, 11:30)]}. In this way, the outcome information of the downstream task is passed upstream, which helps prevent high-level consequences from occurring for the downstream task.
In some embodiments, when a task having outcome information has a downstream task that also has outcome information, the earlier of the outcome requirement time of an outcome level of the task and the outcome requirement time of the same outcome level of the downstream task advanced by the execution duration of the downstream task may be determined as the outcome requirement time of that outcome level of the task. For example, assuming task c is Task(c) = {30min, [(low, 9:00), (medium, 11:00), (high, 13:00)]} and its downstream task d is Task(d) = {10min, [(medium, 10:40), (high, 11:30)]}, the outcome requirement times of downstream task d are first advanced by the execution duration of task d to obtain Task(d)' = {10min, [(medium, 10:30), (high, 11:20)]}, and the outcome information of task c may then be updated to Task(c) = {30min, [(low, 9:00), (medium, 10:30), (high, 11:20)]}. In this way, too, the outcome information of the downstream task is passed upstream, which helps prevent high-level consequences from occurring for the downstream task.
In some embodiments, when a task without outcome information has a downstream task that has outcome information, the outcome information of the downstream task may be assigned to the task, with the outcome requirement time of each outcome level advanced by the execution duration of the downstream task. For example, assume there is a task us without outcome information, Task(us) = {L_us}, whose downstream task ds is Task(ds) = {L_ds, [(R_1, T_1), (R_2, T_2), …, (R_n, T_n)]}. Task us may then be adjusted, based on the outcome information of its downstream task ds, to Task(us) = {L_us, [(R_1, T_1 - L_ds), (R_2, T_2 - L_ds), …, (R_n, T_n - L_ds)]}. The task us without outcome information thus becomes a task with outcome information, because the outcome information of the downstream task ds is passed upstream to task us. In some embodiments, when a task without outcome information has a downstream task that has outcome information, the outcome information of the downstream task may also be assigned to the task directly.
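Continuing the illustrative sketch above, the downstream-to-upstream transfer of outcome information described in the preceding paragraphs might look roughly as follows; whether the downstream outcome requirement times are first advanced by the downstream execution duration is a policy choice corresponding to the different embodiments above, and the function name is an assumption of this sketch.

```python
from datetime import timedelta

def propagate_outcomes(task: "Task", downstream: "Task",
                       advance_by_downstream_duration: bool = True) -> None:
    """Update `task`'s outcome information in place from one downstream task."""
    if not downstream.outcomes:
        return  # nothing to pass upstream
    shift = downstream.duration if advance_by_downstream_duration else timedelta(0)
    ds = {level: time - shift for level, time in downstream.outcomes}
    if not task.outcomes:
        # A task without outcome information inherits the (advanced) downstream outcomes.
        task.outcomes = sorted(ds.items())
        return
    # For outcome levels present in both tasks, keep the earlier requirement time.
    task.outcomes = [(level, min(time, ds[level])) if level in ds else (level, time)
                     for level, time in task.outcomes]

# With task c and its downstream task d = {10min, [(medium, 10:40), (high, 11:30)]},
# this reproduces the update in the text: medium -> 10:30, high -> 11:20.
task_d = Task("d", timedelta(minutes=10), [(MEDIUM, t("10:40")), (HIGH, t("11:30"))])
propagate_outcomes(task_c, task_d)
```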
With continued reference to fig. 1, the method 100 further includes: at step S104, priority information of the task is determined based on the execution time length of the task and the result information.
The priority information includes a priority corresponding to the outcome level and its priority requirement time. The granularity of the priorities matches the granularity of the outcome levels, and a higher outcome level corresponds to a higher priority. The priority requirement time of a priority may be determined based on the execution duration of the task and the outcome requirement time of the outcome level corresponding to the priority. When the outcome information of a task includes several different outcome levels, each with a different outcome requirement time, the priority information of the task likewise includes several different priorities, each with a different priority requirement time. Generally, the higher the outcome level and the later the outcome requirement time, the higher the corresponding priority and the later its priority requirement time.
For example, denoting the priority by P and the priority requirement time by t, Priority information = [(P_1, t_1), (P_2, t_2), …, (P_n, t_n)] may be determined from Task = {L, [(R_1, T_1), (R_2, T_2), …, (R_n, T_n)]}, where P_i is set corresponding to R_i and t_i = T_i - L for i = 1, 2, …, n. Thus, for task c (Task(c) = {30min, [(low, 9:00), (medium, 11:00), (high, 13:00)]}), the priority information may be determined as Priority(c) = [(low, 8:30), (medium, 10:30), (high, 12:30)].
In some embodiments, the priority requirement time of a priority may further be determined based on at least one of an advance margin of the task and a scheduling preparation time. The advance margin of the task may be set in view of the possibility that the task fails and must be re-executed, and may be expressed, for example, as a number of task runs. The scheduling preparation time may be set in view of the waiting time of a task from being ready to formally starting to be scheduled, and may be the average task waiting time of the entire scheduling system. When the scheduling system has sufficient resources, this waiting time may be zero, but when resources are tight it may be large. Assuming the advance margin of the task is x and the scheduling preparation time is y, one may take t_i = T_i - x·L - y. Thus, for task c (Task(c) = {30min, [(low, 9:00), (medium, 11:00), (high, 13:00)]}), assuming x = 3 and y = 10 min, its priority information may further be determined as Priority(c) = [(low, 7:20), (medium, 9:20), (high, 11:20)]. Each of the advance margin and the scheduling preparation time may be set based on historical data and/or experience, or may be predicted by an intelligent model such as a neural network model.
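Again continuing the sketch, the mapping from outcome information to priority information, including the optional advance margin x and scheduling preparation time y, might be written as follows; the priority is simply taken to be the outcome level here, since the two have the same granularity.

```python
from datetime import timedelta

def priority_info(task: "Task", advance_margin: int = 1,
                  prep_time: timedelta = timedelta(0)):
    """Return [(P_i, t_i)] with t_i = T_i - advance_margin * L - prep_time."""
    return [(level, time - advance_margin * task.duration - prep_time)
            for level, time in task.outcomes]

# For the original task c = {30min, [(low, 9:00), (medium, 11:00), (high, 13:00)]},
# x = 3 and y = 10 min give [(low, 7:20), (medium, 9:20), (high, 11:20)] as in the text.
original_c = Task("c", timedelta(minutes=30),
                  [(LOW, t("9:00")), (MEDIUM, t("11:00")), (HIGH, t("13:00"))])
prio_c = priority_info(original_c, advance_margin=3, prep_time=timedelta(minutes=10))
```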
In some embodiments, the priority of tasks without outcome information may be determined to be lower than the priority of tasks with outcome information. For example, the lowest level of priority in the designed priority hierarchy may be used for tasks without outcome information, while the remaining higher priorities are used for tasks with outcome information. In this way, it can be ensured in any case that tasks without outcome information are performed after tasks with outcome information.
In some embodiments, when a task has an earliest start execution time (i.e., the task must not be executed before that time), an additional priority of the task and its additional priority requirement time may also be determined. The additional priority may be set lower than the other priorities of the task, and the additional priority requirement time may be set to the earliest start execution time. For example, the priority hierarchy may be designed with five levels, level 1 to level 5, a larger number meaning a higher priority. For a task Task = {L} without outcome information, the priority information may be expressed as Priority = [P] (no priority requirement time is set), and the priority of the task may default to level 1, i.e., Priority = [level 1]. For a task with outcome information, such as task c (Task(c) = {30min, [(low, 9:00), (medium, 11:00), (high, 13:00)]}), the priority information may be expressed as Priority(c) = [(level 3, 7:20), (level 4, 9:20), (level 5, 11:20)], and if its earliest start execution time is 3 am, the priority information including the additional priority may be expressed as Priority(c) = [(level 2, 3:00), (level 3, 7:20), (level 4, 9:20), (level 5, 11:20)]. In this way, it can be ensured in any case that a task whose earliest start execution time has not yet been reached is executed after tasks whose earliest start execution time has been reached.
In addition, when a task has dependencies with other tasks, the priority information of the dependent tasks may be further considered in determining the priority information of the task. In some embodiments, when a task has a downstream task, the earlier of the priority requirement time of a priority of the task and the priority requirement time of the same priority of the downstream task is determined as the priority requirement time of that priority of the task. For example, suppose the priority information of task c has been determined, according to the preceding procedure, as Priority(c) = [(low, 7:20), (medium, 9:20), (high, 11:20)], and the priority information of its downstream task d is Priority(d) = [(medium, 10:00), (high, 10:50)]. The priority information of task c can then be updated, based on the priority information of downstream task d, to Priority(c) = [(low, 7:20), (medium, 9:20), (high, 10:50)]. That is, the priority of downstream task d is transferred upstream to task c, which prevents high-level consequences from occurring for downstream task d because upstream task c, with an insufficient priority, is produced too late when competing with other tasks.
For non-limiting illustration purposes, figs. 2-5 illustrate an example process of determining the priority information of tasks, based on their execution durations and outcome information, in the form of a directed acyclic graph. In this example process it is assumed that the priority hierarchy is designed to include three levels in total: high, medium, and low.
Referring first to fig. 2, eight tasks A through H are shown. The arrows indicate the dependencies between these tasks: each of task B, task C, and task D depends on the output of task A; task E depends on the output of task B; task F depends on the outputs of both task B and task D; and task G depends on the outputs of both task D and task H. Thus each of task B, task C, and task D is a downstream task of task A, each of task E and task F is a downstream task of task B, each of task F and task G is a downstream task of task D, and task G is a downstream task of task H.
As shown in fig. 2, since output-time requirements are set for tasks C, H, F, and G, the execution duration and outcome information of each of these tasks are labeled in fig. 2. Specifically, task C is Task(C) = {10min, [(medium, 6:30)]} (execution duration 10 min; the outcome level is medium if produced after the outcome requirement time 6:30); task H is Task(H) = {20min, [(high, 6:00)]} (execution duration 20 min; the outcome level is high if produced after 6:00); task F is Task(F) = {30min, [(medium, 6:00), (high, 7:00)]} (execution duration 30 min; the outcome level is medium if produced after 6:00 and high if produced after 7:00); and task G is Task(G) = {50min, [(high, 7:00)]} (execution duration 50 min; the outcome level is high if produced after 7:00). In addition, since no output-time requirements are set for tasks A, B, D, and E, they have no outcome information, and only their execution durations are labeled in fig. 2.
Referring next to fig. 3, the transfer of the outcome information from downstream to upstream can be seen:
task B is updated, based on the outcome information of its downstream tasks E and F, to Task(B) = {20min, [(medium, 5:30), (high, 6:30)]} (execution duration 20 min; the outcome level is medium if produced after 5:30 and high if produced after 6:30); since downstream task E has no outcome information, task B is in effect updated based on the outcome information of downstream task F alone;
task D is updated, based on the outcome information of its downstream tasks F and G, to Task(D) = {20min, [(medium, 5:30), (high, 6:10)]} (execution duration 20 min; the outcome level is medium if produced after 5:30 and high if produced after 6:10); specifically, based on the outcome information of downstream task F alone, task D would be updated to Task(D)' = {20min, [(medium, 5:30), (high, 6:30)]}, and based on the outcome information of downstream task G alone, task D would be updated to Task(D)'' = {20min, [(high, 6:10)]}; the outcome requirement times of the same outcome level in Task(D)' and Task(D)'' are then merged by taking the earlier values, giving Task(D);
task H remains Task(H) = {20min, [(high, 6:00)]}; based on the outcome information of its downstream task G, task H would be updated to Task(H)' = {20min, [(high, 6:10)]}, but merging the outcome requirement times of the same outcome level of Task(H)' and Task(H) = {20min, [(high, 6:00)]} and taking the earlier value still yields Task(H);
task A is updated, based on the outcome information of its downstream tasks B, C, and D, to Task(A) = {10min, [(medium, 5:10), (high, 5:50)]} (execution duration 10 min; the outcome level is medium if produced after 5:10 and high if produced after 5:50); specifically, based on the outcome information of downstream task B alone, task A would be updated to Task(A)' = {10min, [(medium, 5:10), (high, 6:10)]}, based on downstream task C alone to Task(A)'' = {10min, [(medium, 6:20)]}, and based on downstream task D alone to Task(A)''' = {10min, [(medium, 5:10), (high, 5:50)]}; the outcome requirement times of the same outcome level in Task(A)', Task(A)'', and Task(A)''' are then merged by taking the earlier values, giving Task(A).
Referring next to fig. 4, the priority information of each task is determined based on its execution duration and outcome information, assuming an advance margin x = 3 and a scheduling preparation time y = 10 min; it can be seen that:
the priority information of task A is determined as Priority(A) = [(medium, 4:30), (high, 5:10)];
the priority information of task B is determined as Priority(B) = [(medium, 4:20), (high, 5:20)];
the priority information of task C is determined as Priority(C) = [(medium, 5:50)];
the priority information of task D is determined as Priority(D) = [(medium, 4:20), (high, 5:00)];
the priority information of task H is determined as Priority(H) = [(high, 4:50)];
the priority information of task F is determined as Priority(F) = [(medium, 4:20), (high, 5:20)];
the priority information of task G is determined as Priority(G) = [(high, 4:20)];
the priority information of task E is determined as Priority(E) = [low]; task E is a task without outcome information, so no priority requirement time is set for it, and its priority may be determined to be lower than that of tasks with outcome information, here simply taken as the lowest-level priority, low.
Since the outcome information has already been transferred from downstream to upstream in fig. 3, determining the priority information in fig. 4 from the execution duration and the outcome information updated with the downstream outcome information amounts to indirectly transferring priority from downstream to upstream.
Optionally, referring further to fig. 5, priorities may also be transferred directly from downstream to upstream, as can be seen:
task B updates its priority information based on the priority information of its downstream tasks E and F, but since the priority requirement times of the downstream tasks at the same priorities are not earlier than those of task B, the priority information of task B does not change;
task D updates its priority information based on its downstream tasks F and G: merging the priority requirement times of the same priority and taking the earlier values first gives Priority(D) = [(medium, 4:20), (high, 4:20)]; the medium priority is then covered by the high priority because the priority requirement time of the high priority is not later than that of the medium priority, so the priority information of task D is finally determined as Priority(D) = [(high, 4:20)];
task H updates its priority information based on the priority information of its downstream task G by merging the priority requirement times of the same priority and taking the earlier value, giving Priority(H) = [(high, 4:20)];
task A updates its priority information based on the priority information of its downstream tasks B, C, and D: merging the priority requirement times of the same priority and taking the earlier values first gives Priority(A)' = [(medium, 4:20), (high, 4:20)]; the medium priority is then covered by the high priority because the priority requirement time of the high priority is not later than that of the medium priority, so the priority information of task A is finally determined as Priority(A) = [(high, 4:20)], as also illustrated in the code sketch below.
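The downstream-to-upstream priority transfer and the covering of a lower priority by a higher priority with a no-later requirement time, as in the fig. 5 example, might be sketched as follows, continuing the earlier illustration; the function name is an assumption of this sketch.

```python
def merge_downstream_priorities(task_prios, downstream_prios):
    """Both arguments are [(P_i, t_i)] lists; returns the updated upstream list."""
    ds = dict(downstream_prios)
    # For each priority of the upstream task, take the earlier of its own
    # requirement time and the downstream requirement time of the same priority.
    merged = [(p, min(time, ds[p])) if p in ds else (p, time) for p, time in task_prios]
    # Drop ("cover") a priority whose requirement time is not earlier than that
    # of some higher priority.
    return [(p, time) for p, time in merged
            if all(time < t_hi for p_hi, t_hi in merged if p_hi > p)]

# Task D from FIG. 4 merged with its downstream tasks F and G, reproducing FIG. 5:
prio_d = [(MEDIUM, t("4:20")), (HIGH, t("5:00"))]
prio_f = [(MEDIUM, t("4:20")), (HIGH, t("5:20"))]
prio_g = [(HIGH, t("4:20"))]
prio_d = merge_downstream_priorities(prio_d, prio_f)
prio_d = merge_downstream_priorities(prio_d, prio_g)   # -> [(HIGH, 4:20)]
```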
Referring back to fig. 1, the method 100 further includes: at step S106, a current priority of each of the plurality of tasks is determined based on the current time and priority information of each of the plurality of tasks currently in the queue to be executed.
In some embodiments, if the current time is later than a first priority requirement time of the task and not later than a second priority requirement time of the task that is adjacent to and later than the first priority requirement time, the current priority of the task may be determined as the priority corresponding to the second priority requirement time. In some embodiments, if the current time is not later than the earliest priority requirement time of the task, the current priority of the task is determined as the priority corresponding to the earliest priority requirement time. In some embodiments, if the current time is later than the latest priority requirement time of the task (meaning that the most serious consequence may occur), then, depending on the specific policy, the current priority of the task may be determined to be equal to or higher than the priority corresponding to the latest priority requirement time (executing the task preferentially to mitigate its consequences as much as possible), or equal to or lower than the priority corresponding to the earliest priority requirement time (treating the task as least important so as to ensure the execution of other tasks that have not yet incurred serious consequences), or any intermediate priority.
For example, as a non-limiting example, assume the current time is t_c and the priority information of the task is [(P_1, t_1), (P_2, t_2), …, (P_n, t_n)], where P_i > P_{i-1} and t_i > t_{i-1} for i = 2, 3, …, n. When t_c ≤ t_1, the current priority is determined as P_1; when t_{i-1} < t_c ≤ t_i, the current priority is determined as P_i, where i = 2, 3, …, n; when t_n < t_c, depending on the specific policy, the current priority may be determined to be equal to or higher than P_n, or equal to or lower than P_1, or any one of P_2 to P_{n-1}.
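A minimal sketch of this discrete current-priority rule, continuing the earlier illustration; when the current time is later than the latest priority requirement time, this sketch simply keeps the highest priority, which is only one of the policies mentioned above.

```python
def current_priority(priorities, now):
    """priorities: [(P_1, t_1), ..., (P_n, t_n)] with ascending priority requirement times.
    Returns the (current priority, its priority requirement time) pair applying at `now`."""
    for p, time in priorities:
        if now <= time:
            return (p, time)
    # now is later than the latest priority requirement time: keep the highest priority.
    return priorities[-1]

# Task B from FIG. 4 at 4:30: later than 4:20 and not later than 5:20 -> high.
prio_b = [(MEDIUM, t("4:20")), (HIGH, t("5:20"))]
assert current_priority(prio_b, t("4:30"))[0] == HIGH
```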
In the above case, the priority is a discrete value that is set separately. In other cases, the priority may also be a continuous value that varies over time. For example, in some embodiments, determining the current priority of each task may further include: if the current time is later than the first priority requirement time of the task and is not later than the second priority requirement time of the task, which is adjacent to and later than the first priority requirement time, the current priority of the task is determined based on the second priority corresponding to the second priority requirement time and the difference between the current time and the second priority requirement time. In some embodiments, determining the current priority of each task may further comprise: if the current time is not later than the earliest priority requirement time of the task, the current priority of the task is determined based on the priority corresponding to the earliest priority requirement time and the difference between the current time and the earliest priority requirement time. In some embodiments, determining the current priority of each task may further comprise: if the current time is later than the latest priority requirement time of the task, the current priority of the task is determined based on the priority corresponding to the latest priority requirement time and the difference between the current time and the latest priority requirement time.
For example, as a non-limiting example, assume the current time is t_c and the priority information of the task is [(P_1, t_1), (P_2, t_2), …, (P_n, t_n)], where P_i > P_{i-1} and t_i > t_{i-1} for i = 2, 3, …, n. When t_c ≤ t_1, the current priority is determined as P_1·(1 - (t_1 - t_c)/t_1); when t_{i-1} < t_c ≤ t_i, the current priority is determined as (P_i - P_{i-1})·(t_c - t_{i-1})/(t_i - t_{i-1}) + P_{i-1}, where i = 2, 3, …, n; when t_n < t_c, the current priority is determined as P_n·(1 + (t_c - t_n)/t_n). Note that the linear functions used here are merely exemplary and not limiting; other suitable functions (including non-linear functions) may be employed, as long as the current priority is lower the earlier the current time is relative to the outcome requirement time or the priority requirement time.
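The continuously varying variant might be sketched as follows; here times are treated as plain positive numbers (e.g., minutes since midnight), since the formulas above divide by t_1 and t_n, and the linear form is, as stated, only one possible choice.

```python
def hm(s: str) -> float:
    """'H:MM' -> minutes since midnight (illustrative numeric time)."""
    hour, minute = s.split(":")
    return int(hour) * 60 + int(minute)

def current_priority_continuous(priorities, now):
    """priorities: [(P_1, t_1), ..., (P_n, t_n)], t_i numeric and ascending."""
    p_first, t_first = priorities[0]
    p_last, t_last = priorities[-1]
    if now <= t_first:
        return p_first * (1 - (t_first - now) / t_first)
    if now > t_last:
        return p_last * (1 + (now - t_last) / t_last)
    for (p_prev, t_prev), (p_cur, t_cur) in zip(priorities, priorities[1:]):
        if t_prev < now <= t_cur:
            return (p_cur - p_prev) * (now - t_prev) / (t_cur - t_prev) + p_prev

# Between 4:20 (medium = 2) and 5:20 (high = 3) the priority rises linearly:
print(current_priority_continuous([(2, hm("4:20")), (3, hm("5:20"))], hm("4:50")))  # 2.5
```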
With continued reference to fig. 1, the method 100 further includes: at step S108, selecting, based on the current priority of each of the plurality of tasks, the task with the highest current priority from the plurality of tasks as the task to be executed currently. If resources permit, the several top-ranked tasks by current priority may also be selected together as the tasks to be executed currently.
In some embodiments, when there are multiple tasks with the highest current priority, the task whose current priority has the earliest priority requirement time may further be selected from among them as the task to be executed currently. In other embodiments, when there are multiple tasks with the highest current priority, the task whose outcome level corresponding to the current priority has the earliest outcome requirement time may further be selected from among them as the task to be executed currently. In addition, if resources permit, several of the tasks with the highest current priority may be selected for parallel execution.
Referring to fig. 4, for example, assume the current time is not later than 4:20, task A is complete, and tasks B, C, D, and H are ready; the tasks currently in the queue to be executed are then tasks B, C, D, and H. At this time, tasks B, C, and D have a medium current priority and task H has a high current priority, so task H may be selected as the task to be executed currently.
For another example, assume the current time is 4:30, task A is complete, and tasks B, C, D, and H are ready; the tasks currently in the queue to be executed are then tasks B, C, D, and H. At this time, tasks B, D, and H have a high current priority while task C has a medium current priority, so tasks B, D, and H are the tasks with the highest current priority. Since the task whose high priority has the earliest priority requirement time is task H, task H may be selected as the task to be executed currently.
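Putting the pieces together, the selection at step S108 with the tie-break by the earliest priority requirement time of the current priority might look as follows; run against the fig. 4 priority information at 4:30, it selects task H as in the example above. The function name and the dictionary layout are assumptions of this sketch, which reuses the helpers defined earlier.

```python
def select_task(ready, now):
    """ready: mapping from task name to its [(P_i, t_i)] priority information.
    Picks the highest current priority; ties are broken by the earlier
    priority requirement time of that current priority."""
    def key(name):
        p, time = current_priority(ready[name], now)   # helper sketched above
        return (-p, time)                              # higher priority first, then earlier time
    return min(ready, key=key)

ready = {  # priority information from FIG. 4
    "B": [(MEDIUM, t("4:20")), (HIGH, t("5:20"))],
    "C": [(MEDIUM, t("5:50"))],
    "D": [(MEDIUM, t("4:20")), (HIGH, t("5:00"))],
    "H": [(HIGH, t("4:50"))],
}
assert select_task(ready, t("4:30")) == "H"
```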
With the task management method 100 of the present disclosure, a large number of tasks to be scheduled can be managed conveniently. For example, a directed acyclic graph may be constructed based on the dependencies between tasks, execution durations and outcome information may then be provided for the tasks at each node of the graph, and finally the priority information of the task at each node may be determined based on the execution duration and the outcome information, yielding a schedule for the plurality of tasks. In such a schedule, the priority of each task is time-varying, and priority may be transferred from downstream to upstream either directly or indirectly (e.g., via the outcome information). Such a schedule may be generated before scheduling starts (e.g., in the early morning each day); then, during scheduling (e.g., each day), the current priority of each task at the current time may be determined from the pre-generated schedule, and the tasks to be executed currently may be determined according to those current priorities, which facilitates overall optimization of task management. When information (e.g., execution duration, outcome information) changes, the schedule may be updated accordingly. It is to be understood that the directed-acyclic-graph-based schedule is exemplary and not limiting, and the task management method 100 of the present disclosure may be implemented in any other suitable form.
The present disclosure also provides a task management device. Referring to fig. 6, the task management device 200 includes an acquisition unit 202, a determination unit 204, and a scheduling unit 206. The acquisition unit 202 may be configured to acquire the execution duration and outcome information of a task, the outcome information including an outcome level and an outcome requirement time. The determination unit 204 may be configured to determine the priority information of the task based on the execution duration and the outcome information, the priority information including a priority corresponding to the outcome level and its priority requirement time, where the priority requirement time is determined based on the execution duration and the outcome requirement time of the outcome level corresponding to the priority. The scheduling unit 206 may be configured to determine a current priority of each of a plurality of tasks currently in a queue to be executed based on a current time and the priority information of each of the plurality of tasks, and to select, based on those current priorities, the task with the highest current priority from the plurality of tasks as the task to be executed currently. For the various embodiments of the task management device 200, reference may be made to the corresponding embodiments of the task management method 100 described above, which are not repeated here.
The present disclosure also provides a computing device for managing tasks, which may include one or more processors and a memory storing computer-executable instructions that, when executed by the one or more processors, cause the one or more processors to perform the task management method 100 according to any of the foregoing embodiments of the present disclosure. As shown in fig. 7, computing device 300 may include processor(s) 302 and memory 304 storing computer-executable instructions that, when executed by processor(s) 302, cause processor(s) 302 to perform task management method 100 according to any of the foregoing embodiments of the present disclosure. The processor(s) 302 may be, for example, a Central Processing Unit (CPU) of the computing device 300. Processor(s) 302 may be any type of general purpose processor or may be a processor specifically designed for managing tasks, such as an application specific integrated circuit ("ASIC"). Memory 304 may include a variety of computer-readable media that are accessible by processor(s) 302. In various embodiments, the memory 304 described herein may include volatile and nonvolatile media, removable and non-removable media. For example, the memory 304 may include any combination of the following: random access memory ("RAM"), dynamic RAM ("DRAM"), static RAM ("SRAM"), read only memory ("ROM"), flash memory, cache memory, and/or any other type of non-transitory computer-readable medium. The memory 304 may store instructions that, when executed by the processor 302, cause the processor 302 to perform the task management method 100 according to any of the foregoing embodiments of the present disclosure.
The present disclosure also provides a non-transitory storage medium having stored thereon computer-executable instructions that, when executed by a computer, cause the computer to perform the task management method 100 according to any of the foregoing embodiments of the present disclosure.
FIG. 8 is a schematic block diagram illustrating a computer system 600 upon which embodiments of the present disclosure may be implemented. Computer system 600 includes a bus 602 or other communication mechanism for communicating information, and a processing device 604 coupled with bus 602 for processing information. Computer system 600 also includes a memory 606 coupled to bus 602 for storing instructions to be executed by processing device 604, where memory 606 may be a Random Access Memory (RAM) or other dynamic storage device. Memory 606 may also be used for storing temporary variables or other intermediate information during execution of instructions to be executed by processing device 604. Computer system 600 also includes a Read Only Memory (ROM) 608 or other static storage device coupled to bus 602 for storing static information and instructions for processing device 604. A storage device 610, such as a magnetic disk or optical disk, is provided and coupled to bus 602 for storing information and instructions. The computer system 600 may be coupled via bus 602 to an output device 612, such as, but not limited to, a display (such as a Cathode Ray Tube (CRT) or Liquid Crystal Display (LCD)), speakers, and the like, for providing output to a user. An input device 614, such as a keyboard, mouse, microphone, etc., is coupled to bus 602 for communicating information and command selections to processing device 604. Computer system 600 may perform embodiments of the present disclosure. Consistent with certain implementations of the disclosure, the results are provided by computer system 600 in response to processing device 604 executing one or more sequences of one or more instructions contained in memory 606. Such instructions may be read into memory 606 from another computer-readable medium, such as storage device 610. Execution of the sequences of instructions contained in memory 606 causes processing device 604 to perform the methods described herein. Alternatively, hardwired circuitry may be used in place of or in combination with software instructions to implement the present teachings. Thus, implementations of the present disclosure are not limited to any specific combination of hardware circuitry and software. In various embodiments, computer system 600 may be connected across a network to one or more other computer systems, as computer system 600, via network interface 616 to form a networked system. The network may comprise a private network or a public network such as the internet. In a networked system, one or more computer systems may store data and supply the data to other computer systems. The term "computer-readable medium" as used herein refers to any medium that participates in providing instructions to processing device 604 for execution. Such a medium may take many forms, including but not limited to, non-volatile media, and transmission media. Non-volatile media includes, for example, optical or magnetic disks, such as storage device 610. Volatile media includes dynamic memory, such as memory 606. Transmission media includes coaxial cables, copper wire and fiber optics, including the wires that comprise bus 602. 
Common forms of computer-readable media or computer program products include, for example, a floppy disk, a flexible disk, a hard disk, magnetic tape or any other magnetic medium, a CD-ROM, a Digital Video Disk (DVD), a Blu-ray disk, any other optical medium, a thumb drive, a memory card, a RAM, a PROM, an EPROM, a flash EPROM, any other memory chip or cartridge, or any other tangible medium from which a computer can read. Various forms of computer-readable media may be involved in carrying one or more sequences of one or more instructions to processing device 604 for execution. For example, the instructions may initially be carried on a magnetic disk of a remote computer. The remote computer can load the instructions into its dynamic memory and send the instructions over a telephone line using a modem. A modem local to computer system 600 can receive the data on the telephone line and use an infrared transmitter to convert the data to an infrared signal. An infrared detector coupled to bus 602 can receive the data carried in the infrared signal and place the data on bus 602. Bus 602 carries the data to memory 606, and processing device 604 retrieves instructions from memory 606 and executes the instructions. Optionally, instructions received by memory 606 may be stored on storage device 610 either before or after execution by processing device 604.
According to various embodiments, instructions configured to be executed by a processing device to perform a method are stored on a computer-readable medium. The computer-readable medium can be a device that stores digital information. For example, computer-readable media include compact disc read-only memory (CD-ROM), as is known in the art for storing software. The computer-readable medium is accessed by a processor adapted to execute the instructions stored thereon.
The foregoing describes one or more exemplary embodiments of the disclosure. Other embodiments are within the scope of the following claims. In some cases, the actions or steps recited in the claims can be performed in a different order than in the embodiments and still achieve desirable results. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some embodiments, multitasking and parallel processing are also possible or may be advantageous.
With the development of technology, many improvements of current method flows can be regarded as direct improvements of hardware circuit structures. Designers almost always obtain a corresponding hardware circuit structure by programming an improved method flow into a hardware circuit. Therefore, it cannot be said that an improvement of a method flow cannot be realized by a hardware entity module. For example, a programmable logic device (Programmable Logic Device, PLD) (e.g., a field programmable gate array (Field Programmable Gate Array, FPGA)) is an integrated circuit whose logic function is determined by the user's programming of the device. A designer programs to "integrate" a digital system onto a single PLD, without requiring the chip manufacturer to design and fabricate an application-specific integrated circuit chip. Moreover, nowadays, instead of manually manufacturing integrated circuit chips, such programming is mostly implemented with "logic compiler" software, which is similar to the software compiler used in program development; the source code before compiling is likewise written in a specific programming language, called a hardware description language (Hardware Description Language, HDL). There is not just one HDL but many, such as ABEL (Advanced Boolean Expression Language), AHDL (Altera Hardware Description Language), Confluence, CUPL (Cornell University Programming Language), HDCal, JHDL (Java Hardware Description Language), Lava, Lola, MyHDL, PALASM, and RHDL (Ruby Hardware Description Language), among which VHDL (Very-High-Speed Integrated Circuit Hardware Description Language) and Verilog are currently most commonly used. It will also be apparent to those skilled in the art that a hardware circuit implementing the logic method flow can be readily obtained merely by slightly logically programming the method flow into an integrated circuit using one of the hardware description languages described above.
The controller may be implemented in any suitable manner. For example, the controller may take the form of a microprocessor or processor together with a computer-readable medium storing computer-readable program code (e.g., software or firmware) executable by the (micro)processor, logic gates, switches, an application-specific integrated circuit, a programmable logic controller, or an embedded microcontroller; examples include, but are not limited to, the following microcontrollers: ARC 625D, Atmel AT91SAM, Microchip PIC18F26K20, and Silicon Labs C8051F320. A memory controller may also be implemented as part of the control logic of the memory. Those skilled in the art will also appreciate that, in addition to implementing the controller in pure computer-readable program code, it is entirely possible to implement the same functionality by logically programming the method steps such that the controller takes the form of logic gates, switches, application-specific integrated circuits, programmable logic controllers, embedded microcontrollers, and the like. Such a controller may thus be regarded as a hardware component, and the means included therein for performing various functions may also be regarded as structures within the hardware component. Or even the means for performing the various functions may be regarded both as software modules implementing the method and as structures within the hardware component.
The system, apparatus, module, or unit set forth in the above embodiments may be implemented by a computer chip or an entity, or by a product having a certain function. A typical implementation device is a server system. Of course, this disclosure does not exclude that, as computer technology evolves, the computer implementing the functionality of the above-described embodiments may be, for example, a personal computer, a laptop computer, an in-vehicle human-machine interaction device, a cellular telephone, a camera phone, a smart phone, a personal digital assistant, a media player, a navigation device, an email device, a game console, a tablet computer, a wearable device, or a combination of any of these devices.
Although one or more embodiments of the present disclosure provide the method operation steps described in the embodiments or flowcharts, more or fewer operation steps may be included based on conventional or non-inventive effort. The order of steps recited in the embodiments is merely one of many possible execution orders and does not represent the only order of execution. When implemented by an actual device or end product, the steps may be executed sequentially or in parallel according to the methods shown in the embodiments or the figures (for example, in a parallel-processor or multi-threaded processing environment, or even in a distributed data processing environment).
The terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, the presence of additional identical or equivalent elements in a process, method, article, or apparatus that comprises a described element is not excluded. For example, if the terms "first," "second," and the like are used to indicate names, they do not denote any particular order.
For convenience of description, the above devices are described as being functionally divided into various modules. Of course, when implementing one or more embodiments of the present disclosure, the functions of the modules may be implemented in the same piece or pieces of software and/or hardware, or a module implementing a given function may be implemented by a combination of multiple sub-modules or sub-units, and so on. The above-described apparatus embodiments are merely illustrative; for example, the division into units is merely a division by logical function, and other divisions are possible in actual implementations; for example, multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. In addition, the coupling, direct coupling, or communication connection shown or discussed between components may be an indirect coupling or communication connection via some interfaces, devices, or units, and may be electrical, mechanical, or in other forms.
The present disclosure is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the disclosure. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
In one typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory. The memory may include volatile memory in a computer-readable medium, such as Random Access Memory (RAM), and/or nonvolatile memory, such as Read-Only Memory (ROM) or flash memory (flash RAM). Memory is an example of a computer-readable medium.
Computer-readable media, including both permanent and non-permanent, removable and non-removable media, may store information by any method or technology. The information may be computer-readable instructions, data structures, modules of a program, or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), Static Random-Access Memory (SRAM), Dynamic Random-Access Memory (DRAM), other types of Random-Access Memory (RAM), Read-Only Memory (ROM), Electrically Erasable Programmable Read-Only Memory (EEPROM), flash memory or other memory technology, Compact Disc Read-Only Memory (CD-ROM), Digital Versatile Discs (DVD) or other optical storage, magnetic cassettes, magnetic tape or magnetic disk storage, graphene storage or other magnetic storage devices, or any other non-transmission medium, which can be used to store information that can be accessed by a computing device. As defined herein, computer-readable media do not include transitory computer-readable media (transmission media), such as modulated data signals and carrier waves.
Those skilled in the art will appreciate that one or more embodiments of the present disclosure may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Moreover, one or more embodiments of the present disclosure may take the form of a computer program product on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, etc.) having computer-usable program code embodied therein.
One or more embodiments of the disclosure may be described in the general context of computer-executable instructions, such as program modules, being executed by a computer. Generally, program modules include routines, programs, objects, components, data structures, etc. that perform particular tasks or implement particular abstract data types. One or more embodiments of the disclosure may also be practiced in distributed computing environments where tasks are performed by remote processing devices that are linked through a communications network. In a distributed computing environment, program modules may be located in both local and remote computer storage media including memory storage devices.
For the same or similar parts among the various embodiments of the disclosure, reference may be made to one another; each embodiment focuses on its differences from the other embodiments. In particular, the device embodiments are described relatively simply because they are substantially similar to the method embodiments, and relevant parts may refer to the description of the method embodiments. In the description of the present disclosure, reference to the terms "one embodiment," "some embodiments," "an example," "a particular example," "some examples," and the like means that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the present disclosure. In the present disclosure, the schematic representations of these terms do not necessarily refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. In addition, the various embodiments or examples described in this disclosure, and the features of the various embodiments or examples, may be combined by those skilled in the art as long as they do not contradict one another.
The foregoing is merely an example of one or more embodiments of the present disclosure and is not intended to limit the one or more embodiments of the present disclosure. Various modifications and variations of one or more embodiments of the disclosure will be apparent to those skilled in the art. Any modifications, equivalent substitutions, improvements, or the like, which are within the spirit and principles of the present disclosure, are intended to be included within the scope of the claims.
Claims (10)
1. A task management method comprising:
acquiring an execution time length and result information of a task, wherein the result information comprises a result level and a result requirement time;
determining priority information of the task based on the execution time length and the result information of the task, wherein the priority information comprises a priority corresponding to the result level and a priority requirement time of the priority, and the priority requirement time of the priority is determined based on the execution time length and the result requirement time of the result level corresponding to the priority;
determining a current priority of each of a plurality of tasks in a queue to be executed based on a current time and priority information of each of the plurality of tasks; and
selecting, based on the current priority of each of the plurality of tasks, a task with the highest current priority from the plurality of tasks as a task to be currently executed.
2. The method of claim 1, further comprising:
determining, in a case where a task having result information has a downstream task and the downstream task also has result information, the earlier of the result requirement time of a result level of the task and the result requirement time of the same result level of the downstream task as the result requirement time of that result level of the task; or
determining, in a case where a task having result information has a downstream task and the downstream task also has result information, the earlier of the result requirement time of a result level of the task and the result requirement time of the same result level of the downstream task advanced by the execution time length of the downstream task, as the result requirement time of that result level of the task.
3. The method of claim 1, further comprising:
in a case where a task without result information has a downstream task and the downstream task has result information, assigning the result information of the downstream task to the task, and advancing the result requirement time of the result level of the task by the execution time length of the downstream task.
4. The method of claim 1, wherein the priority requirement time of the priority is further determined based on at least one of an advance margin of the task and a scheduling preparation time.
5. The method of any one of claims 1 to 4, wherein determining the priority information of the task further comprises at least one of:
determining, in a case where a task has a downstream task, the earlier of the priority requirement time of a priority of the task and the priority requirement time of the same priority of the downstream task as the priority requirement time of that priority of the task; or
determining, in a case where a task has an earliest start execution time, an additional priority of the task and an additional priority requirement time thereof, wherein the additional priority is determined to be lower than the other priorities of the task, and the additional priority requirement time is determined to be the earliest start execution time; or
determining the priority of a task without result information to be lower than the priority of a task with result information.
6. The method of any one of claims 1 to 4, further comprising:
when there are a plurality of tasks with the highest current priority, selecting, from the tasks with the highest current priority, the task whose priority requirement time of the current priority is the earliest as the task to be currently executed; or
when there are a plurality of tasks with the highest current priority, selecting, from the tasks with the highest current priority, the task whose result requirement time of the result level corresponding to the current priority is the earliest as the task to be currently executed.
7. The method of any one of claims 1 to 4, wherein determining the current priority of each task comprises:
if the current time is later than a first priority requirement time of the task and is not later than a second priority requirement time of the task that is adjacent to and later than the first priority requirement time, determining the current priority of the task as the priority corresponding to the second priority requirement time; or
if the current time is not later than the earliest priority requirement time of the task, determining the current priority of the task as the priority corresponding to the earliest priority requirement time; or
if the current time is later than the latest priority requirement time of the task, determining the current priority of the task to be higher than the priority corresponding to the latest priority requirement time.
8. A task management device comprising:
an acquisition unit configured to acquire an execution time length and result information of a task, wherein the result information comprises a result level and a result requirement time;
a determining unit configured to determine priority information of the task based on the execution time length and the result information of the task, the priority information including a priority corresponding to the result level and a priority requirement time thereof, wherein the priority requirement time of the priority is determined based on the execution time length and the result requirement time of the result level corresponding to the priority; and
a scheduling unit configured to:
determining a current priority of each of a plurality of tasks in a queue to be executed based on a current time and priority information of each of the plurality of tasks; and
selecting, based on the current priority of each of the plurality of tasks, a task with the highest current priority from the plurality of tasks as a task to be currently executed.
9. A computing device for managing tasks, comprising:
one or more processors; and
a memory storing computer-executable instructions that, when executed by the one or more processors, cause the one or more processors to perform the task management method of any of claims 1 to 7.
10. A non-transitory storage medium having stored thereon computer-executable instructions that, when executed by a computer, cause the computer to perform the task management method according to any one of claims 1 to 7.
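Editorial note: the following Python sketch is an illustrative reading of claims 1 to 5 above, not part of the patent and not a normative implementation. It assumes times are plain numbers, that each result level maps to a priority via a caller-supplied `level_to_priority` mapping, and that a smaller integer denotes a higher priority; all names (`Task`, `derive_priority_info`, `propagate_from_downstream`) are invented for illustration.

```python
from dataclasses import dataclass, field
from typing import Dict, Optional

@dataclass
class Task:
    name: str
    execution_time_length: float                       # how long the task runs once started
    # result level -> result requirement time (absolute time by which that result is needed)
    result_times: Dict[int, float] = field(default_factory=dict)
    advance_margin: float = 0.0                        # optional safety margin (claim 4)
    scheduling_preparation: float = 0.0                # optional scheduling preparation time (claim 4)
    earliest_start: Optional[float] = None             # optional earliest start execution time (claim 5)
    # priority -> priority requirement time, filled in by derive_priority_info
    priority_times: Dict[int, float] = field(default_factory=dict)

def derive_priority_info(task: Task, level_to_priority: Dict[int, int]) -> None:
    """Derive the priority requirement time of each priority (claims 1 and 4).

    One natural reading: the priority requirement time is the result requirement
    time advanced by the execution time length (and, optionally, by the advance
    margin and the scheduling preparation time).
    """
    for level, result_time in task.result_times.items():
        priority = level_to_priority[level]
        task.priority_times[priority] = (
            result_time
            - task.execution_time_length
            - task.advance_margin
            - task.scheduling_preparation
        )
    if task.earliest_start is not None:
        # Additional priority, lower than all others of the task, whose requirement
        # time is the earliest start execution time (claim 5).
        lowest = max(task.priority_times.keys(), default=0) + 1
        task.priority_times[lowest] = task.earliest_start

def propagate_from_downstream(task: Task, downstream: Task) -> None:
    """Tighten result requirement times using a downstream task (claims 2 and 3).

    This variant subtracts the downstream execution time length, so the upstream
    task finishes early enough for the downstream task to meet its own deadline.
    """
    if not downstream.result_times:
        return
    if not task.result_times:
        # Claim 3: a task without result information inherits the downstream result
        # information, shifted earlier by the downstream execution time length.
        task.result_times = {
            level: t - downstream.execution_time_length
            for level, t in downstream.result_times.items()
        }
        return
    # Claim 2 (second variant): for matching result levels, keep the earlier of the
    # task's own time and the downstream time advanced by the downstream duration.
    for level, t_down in downstream.result_times.items():
        if level in task.result_times:
            task.result_times[level] = min(
                task.result_times[level],
                t_down - downstream.execution_time_length,
            )
```

Worked example under these assumptions: if a level-1 result is required at t = 100, the task's execution time length is 30, and the advance margin is 5, the priority requirement time of the corresponding priority is 100 − 30 − 5 = 65, i.e. the task should be started no later than t = 65.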
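Continuing the same illustrative sketch, the functions below show one possible reading of claims 6 and 7: a task's current priority steps through its priority requirement times as the current time passes them, escalates above the latest priority once all requirement times have passed, and ties between equally urgent tasks are broken by the earlier priority requirement time. Again, all names are assumptions for illustration, not the patent's implementation.

```python
from typing import List, Optional

def current_priority(task: Task, now: float) -> int:
    """Determine the task's current priority from the current time (claim 7)."""
    if not task.priority_times:
        # A task without result information ranks below tasks that have it (claim 5).
        return 10**9
    # Sort (priority, requirement time) pairs by requirement time, earliest first.
    times = sorted(task.priority_times.items(), key=lambda kv: kv[1])
    if now <= times[0][1]:
        # Not later than the earliest priority requirement time.
        return times[0][0]
    for (_, t_prev), (next_priority, t_next) in zip(times, times[1:]):
        if t_prev < now <= t_next:
            # Between two adjacent requirement times: take the later one's priority.
            return next_priority
    # Later than the latest priority requirement time: raise above that priority
    # (smaller integer = higher priority in this sketch).
    return times[-1][0] - 1

def pick_next_task(queue: List[Task], now: float) -> Optional[Task]:
    """Select the task with the highest current priority (claims 1 and 6)."""
    if not queue:
        return None

    def urgency(task: Task):
        priority = current_priority(task, now)
        # Tie-break by the priority requirement time of the current priority,
        # falling back to the task's earliest requirement time (claim 6, simplified).
        tie = task.priority_times.get(
            priority, min(task.priority_times.values(), default=float("inf"))
        )
        return (priority, tie)

    return min(queue, key=urgency)
```

Note that claim 7 does not require the priorities themselves to be monotone along the timeline; the sketch simply returns whatever priority the applicable requirement time corresponds to.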
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202310929594.3A CN116661977B (en) | 2023-07-26 | 2023-07-26 | Task management method, device, computing equipment and storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN116661977A (en) | 2023-08-29
CN116661977B (en) | 2023-10-24
Family
ID=87717389
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202310929594.3A (CN116661977B, Active) | Task management method, device, computing equipment and storage medium | 2023-07-26 | 2023-07-26
Country Status (1)
Country | Link |
---|---|
CN (1) | CN116661977B (en) |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107577523A (en) * | 2017-07-31 | 2018-01-12 | 阿里巴巴集团控股有限公司 | Task execution method and device
CN111930486A (en) * | 2020-07-30 | 2020-11-13 | 中国工商银行股份有限公司 | Task selection data processing method, device, equipment and storage medium |
CN112035237A (en) * | 2020-09-11 | 2020-12-04 | 中国银行股份有限公司 | Optimized scheduling method and device for audit sequence |
CN114968509A (en) * | 2021-05-08 | 2022-08-30 | 中移互联网有限公司 | Task execution method and device |
CN115309519A (en) * | 2022-07-15 | 2022-11-08 | 上海零念科技有限公司 | Deterministic task scheduling and arranging method and system based on time trigger mechanism and storage medium |
US20230030857A1 (en) * | 2021-07-23 | 2023-02-02 | EMC IP Holding Company LLC | Method, device and computer program product for storage system management |
Patent Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107577523A (en) * | 2017-07-31 | 2018-01-12 | 阿里巴巴集团控股有限公司 | Task execution method and device
CN111930486A (en) * | 2020-07-30 | 2020-11-13 | 中国工商银行股份有限公司 | Task selection data processing method, device, equipment and storage medium |
CN112035237A (en) * | 2020-09-11 | 2020-12-04 | 中国银行股份有限公司 | Optimized scheduling method and device for audit sequence |
CN114968509A (en) * | 2021-05-08 | 2022-08-30 | 中移互联网有限公司 | Task execution method and device |
US20230030857A1 (en) * | 2021-07-23 | 2023-02-02 | EMC IP Holding Company LLC | Method, device and computer program product for storage system management |
CN115686763A (en) * | 2021-07-23 | 2023-02-03 | 伊姆西Ip控股有限责任公司 | Method, apparatus and computer program product for managing a storage system |
CN115309519A (en) * | 2022-07-15 | 2022-11-08 | 上海零念科技有限公司 | Deterministic task scheduling and arranging method and system based on time trigger mechanism and storage medium |
Also Published As
Publication number | Publication date |
---|---|
CN116661977B (en) | 2023-10-24 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN108345977B (en) | Service processing method and device | |
US10304014B2 (en) | Proactive resource allocation plan generator for improving product releases | |
AU2017261531B2 (en) | Prescriptive analytics based activation timetable stack for cloud computing resource scheduling | |
CN110597614B (en) | Resource adjustment method and device | |
CN109739627B (en) | Task scheduling method, electronic device and medium | |
CN110427258B (en) | Resource scheduling control method and device based on cloud platform | |
US20240137411A1 (en) | Container quantity adjustment for application | |
CN110659137A (en) | Processing resource allocation method and system for offline tasks | |
CN109615130A (en) | A kind of method, apparatus and system of timed reminding transacting business | |
CN112596898A (en) | Task executor scheduling method and device | |
CN116661977B (en) | Task management method, device, computing equipment and storage medium | |
CN112328289B (en) | Firmware upgrading method, device, equipment and storage medium | |
CN116932175B (en) | Heterogeneous chip task scheduling method and device based on sequence generation | |
CN117493015A (en) | Capacity expansion and contraction method, device, medium and equipment of container management system | |
CN116107728B (en) | Task execution method and device, storage medium and electronic equipment | |
US20160299787A1 (en) | System, method and managing device | |
CN116302457A (en) | Cloud primary workflow engine implementation method, system, medium and electronic equipment | |
CN116204324A (en) | Task execution method and device, storage medium and electronic equipment | |
CN112306677B (en) | Resource scheduling method and device | |
CN113127187B (en) | Method and device for cluster expansion and contraction capacity | |
CN117348999B (en) | Service execution system and service execution method | |
CN116996397B (en) | Network packet loss optimization method and device, storage medium and electronic equipment | |
CN114979160B (en) | Block chain task allocation method and device, electronic equipment and computer readable storage medium | |
CN116501474B (en) | System, method and device for processing batch homogeneous tasks | |
CN116167437B (en) | Chip management system, method, device and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| GR01 | Patent grant | |