CN117492947A - Method for processing delay task, storage medium and processor - Google Patents
Method for processing delay task, storage medium and processor
- Publication number
- CN117492947A (application CN202311276902.3A)
- Authority
- CN
- China
- Prior art keywords
- storage space
- time
- delay
- task
- delay task
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
- G06F9/48—Program initiating; Program switching, e.g. by interrupt
- G06F9/4806—Task transfer initiation or dispatching
- G06F9/4843—Task transfer initiation or dispatching by program, e.g. task dispatcher, supervisor, operating system
- G06F9/4881—Scheduling strategies for dispatcher, e.g. round robin, multi-level priority queues
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
- G06F9/50—Allocation of resources, e.g. of the central processing unit [CPU]
- G06F9/5005—Allocation of resources, e.g. of the central processing unit [CPU] to service a request
- G06F9/5011—Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resources being hardware resources other than CPUs, Servers and Terminals
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02D—CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
- Y02D10/00—Energy efficient computing, e.g. low power processors, power management or thermal management
Abstract
The embodiments of the present application provide a method, a storage medium and a processor for processing delay tasks. The method includes the following steps: acquiring a plurality of delay tasks to be executed; when the first time value of the execution time point of a delay task in the maximum time unit differs from the current time value of the current time point in the maximum time unit, allocating the delay task to the corresponding first storage space; determining a first time difference between the current time point and the first earliest execution time point among the plurality of delay tasks of each first storage space; determining a first target storage space according to the first time difference; when it is determined that there is a second storage space corresponding to the second time value of the execution time of each delay task of the first target storage space in the minimum time unit, allocating the delay tasks to the corresponding second storage spaces; and executing a delay task when its execution time point in the second storage space is reached. The processing efficiency of delay tasks is thereby improved.
Description
Technical Field
The present application relates to the field of task processing, and in particular, to a method, a storage medium, and a processor for processing a delayed task.
Background
In the prior art, delayed tasks are processed by placing them in a blocking queue ordered by expiration time, and a delayed task is taken out of the queue and executed only after its processing time has been reached. Alternatively, a time wheel algorithm can be used: tasks that expire within a given period are placed in one slot of a ring structure, and as the pointer advances slot by slot, all expired tasks in the current slot are executed.
However, in the above schemes the delayed tasks are stored by time period, so the number of storage spaces is excessive and the storage becomes redundant. Moreover, when many tasks are due at the same moment, delayed tasks are easily lost or overwritten, task processing is postponed, and processing efficiency is reduced.
Disclosure of Invention
An object of an embodiment of the application is to provide a method, a storage medium and a processor for processing a delay task.
To achieve the above object, a first aspect of the present application provides a method for processing a delay task, including:
acquiring a plurality of delay tasks to be executed;
determining whether the first time value of the execution time point of each delay task in the maximum time unit is the same as the current time value of the current time point in the maximum time unit;
in the case that the first time value is different from the current time value, allocating the delay tasks having the same first time value to a first storage space corresponding to that first time value;
determining a first earliest execution time point in a plurality of delay tasks of each first storage space, and determining a first time difference value between the current time point and each first earliest execution time point;
for each first time difference, taking a first storage space corresponding to the first time difference as a first target storage space under the condition that the first time difference is a first preset difference;
for each delay task of the first target storage space, determining whether there is a second storage space corresponding to a second time value of the execution time of the delay task in the minimum time unit;
in the case that a second storage space corresponding to each delay task exists, allocating each delay task in the first target storage space to the corresponding second storage space;
and executing the delay task when the execution time point of any delay task in the second storage space is reached.
In an embodiment of the present application, the time units of the execution time point of each delay task further include a plurality of intermediate time units between the maximum time unit and the minimum time unit, and the method further includes: before judging whether a second storage space corresponding to a second time value of the execution time of the delay task on a minimum time unit exists, selecting the first intermediate time unit from a plurality of intermediate time units as a target time unit based on the arrangement sequence from large to small; judging whether a third storage space corresponding to a third time value of the execution time of the delay task in a target time unit exists for each delay task of the first target storage space; if a third storage space corresponding to each delay task exists, distributing each delay task to the corresponding third storage space; determining a second earliest execution time point in a plurality of delay tasks of each third storage space, and determining a second time difference value between the current time point and each second earliest execution time point; for each second time difference, taking a third storage space corresponding to the second time difference as a second target storage space under the condition that the second time difference is a second preset difference; selecting an intermediate time unit arranged after the target time unit from a plurality of intermediate time units based on the arrangement order from large to small as a new target time unit; returning to the step of judging whether a third storage space corresponding to a third time value of the execution time of the delay task on the target time unit exists for each delay task of the first target storage space until each delay task in the first target storage space is allocated to the storage space corresponding to the last intermediate time unit in the arrangement sequence, and taking the minimum time unit as the target time unit.
In an embodiment of the present application, the method further includes: in the absence of the first storage space or the second storage space corresponding to each of the delay tasks, a storage space corresponding to a time value of an execution time of each of the delay tasks in a corresponding time unit is created, and each of the delay tasks is allocated to the created corresponding storage space.
In an embodiment of the present application, the method further includes: after each delay task in the first target storage space is allocated to the corresponding second storage space, deleting the first target storage space; or deleting the second storage space after executing all delay tasks in the second storage space for each second storage space.
In an embodiment of the present application, the method further includes: aiming at each delay task in the second storage space, acquiring the execution state of the delay task in the process of executing the delay task; and under the condition that the execution state of the delay task is abnormal, performing compensation processing on the delay task with the abnormality so as to re-execute the delay task with the abnormality.
In an embodiment of the present application, the method further includes: after each delay task in the first target storage space is allocated to the corresponding second storage space, judging whether each delay task carries an allocation success identifier or not; and compensating the delayed tasks which do not carry the successful allocation identification under the condition that the delayed tasks do not carry the successful allocation identification aiming at each delayed task in the first target storage space so as to re-allocate the delayed tasks.
In an embodiment of the present application, the method further includes: judging whether a second storage space corresponding to a second time value of the execution time of the delay task exists or not under the condition that the first time value is the same as the current time value; under the condition that a second storage space corresponding to each time delay task exists, the time delay tasks with the same second time value are distributed to the corresponding second storage space; and executing the delay task when the execution time point of any delay task in the second storage space is reached.
In an embodiment of the present application, the method further includes: in the case that some delay tasks have a first time value different from the current time value while other delay tasks have a first time value the same as the current time value, allocating each delay task in the first target storage space to the corresponding second storage space through a locking mechanism.
A second aspect of the present application provides a machine-readable storage medium having instructions stored thereon which, when executed by a processor, cause the processor to perform the above method for processing delay tasks.
A third aspect of the present application provides a processor configured to perform the above method for processing delay tasks.
Through the above technical scheme, delay tasks can be stored hierarchically according to their execution time points and the time units of those execution time points. The delay tasks are thus stored in a more reasonable and orderly manner, redundancy of storage space is avoided, the execution time point of each delay task can be tracked accurately, the processing delays and task loss that occur when there are many delay tasks are avoided, and the processing efficiency of delay tasks is greatly improved.
Additional features and advantages of embodiments of the present application will be set forth in the detailed description that follows.
Drawings
The accompanying drawings are included to provide a further understanding of embodiments of the present application and are incorporated in and constitute a part of this specification, illustrate embodiments of the present application and together with the description serve to explain, without limitation, the embodiments of the present application. In the drawings:
FIG. 1 schematically illustrates a flow diagram of a method for processing a delayed task according to an embodiment of the present application;
FIG. 2 schematically illustrates a flow diagram of a method for processing a delayed task according to another embodiment of the present application;
FIG. 3 schematically illustrates a block diagram of an apparatus for processing latency tasks according to an embodiment of the present application;
Fig. 4 schematically shows an internal structural diagram of a computer device according to an embodiment of the present application.
Detailed Description
To make the objects, technical solutions and advantages of the embodiments of the present application clearer, the technical solutions in the embodiments of the present application will be described clearly and completely below with reference to the drawings. It should be understood that the specific implementations described herein are only intended to illustrate and explain the embodiments of the present application, not to limit them. All other embodiments obtained by one of ordinary skill in the art from the embodiments herein without creative effort fall within the protection scope of the present application.
Fig. 1 schematically shows a flow diagram of a method for processing delay tasks according to an embodiment of the present application. As shown in Fig. 1, in an embodiment of the present application, a method for processing a delay task is provided, including the following steps:
step 101, obtaining a plurality of delay tasks to be executed.
Step 102, determining whether the first time value of the execution time point of each delay task in the maximum time unit is the same as the current time value of the current time point in the maximum time unit.
Step 103, in the case that the first time value is different from the current time value, allocating the delay tasks having the same first time value to the first storage space corresponding to that first time value.
Step 104, determining a first earliest execution time point in the plurality of delay tasks of each first storage space, and determining a first time difference value between the current time point and each first earliest execution time point.
Step 105, regarding each first time difference, taking the first storage space corresponding to the first time difference as the first target storage space when the first time difference is a first preset difference.
Step 106, determining, for each delay task of the first target storage space, whether there is a second storage space corresponding to a second time value of the execution time of the delay task in the minimum time unit.
Step 107, in the case that a second storage space corresponding to each delay task exists, allocating each delay task in the first target storage space to the corresponding second storage space.
Step 108, executing a delay task when the execution time point of that delay task in the second storage space is reached.
A delay task is a task that needs to be executed after a period of delay. For example, if a piece of information (such as a consultation message) needs to be released at a given time point but the user is not logged in at that point, a delay task can be adopted to release it with a delay. For another example, if a red packet has been sent out but has not been claimed within 24 hours, the red packet refund service needs to be executed with a delay. The processor may obtain a plurality of delay tasks to be executed. The processor may then determine whether the first time value of the execution time point of each delay task in the maximum time unit is the same as the current time value of the current time point in the maximum time unit. For example, if the execution time point is 8:05:10 (8 o'clock, 5 minutes, 10 seconds) and the maximum time unit is the hour, the time value in the maximum time unit is 8. For another example, if the execution time point is 9 o'clock on day 20 of month 2 of quarter 1 and the maximum time unit is the quarter, the time value in the maximum time unit is 1.
If the first time value is different from the current time value, the corresponding delay task may not need to be processed immediately. In this case, the processor may allocate the delay tasks having the same first time value to the first storage space corresponding to that first time value. The first storage space is the storage space corresponding to the maximum time unit. If the maximum time unit is the hour, the first storage space is an hour-level storage space; if the maximum time unit is the quarter, the first storage space is a quarter-level storage space. For example, if the execution time point of delay task A is 8:05:10, that of delay task B is 8:05:15, that of delay task C is 8:06:10, that of delay task D is 9:06:15, and the current time point is 7:58:05, then delay tasks A, B and C can be allocated to the hour-level storage space T1 corresponding to 8 o'clock, and delay task D can be allocated to the hour-level storage space T2 corresponding to 9 o'clock.
Further, the processor may determine a first earliest execution time point among the plurality of delay tasks of each first storage space and determine a first time difference between the current time point and each first earliest execution time point. For each first time difference, in the case that the first time difference is the first preset difference, the processor may take the first storage space corresponding to that first time difference as the first target storage space. For example, suppose the execution time points of a plurality of delay tasks are 7:00:00, 8:00:00 and 9:00:00 and the current time point is 6:00:00; the three tasks are then stored in the hour-level storage spaces corresponding to 7 o'clock, 8 o'clock and 9 o'clock respectively. If the first preset difference is 1 hour, it can be determined that the delay task in the hour-level storage space corresponding to 7 o'clock needs to be allocated at the current time point (6:00:00), and that storage space can be determined as the first target storage space. If the first preset difference is 40 minutes, the delay task in the hour-level storage space corresponding to 7 o'clock does not need to be allocated at the current time point (6:00:00); but if the current time point is 6:20:00, it can be determined that the delay task in that storage space needs to be allocated, and the hour-level storage space corresponding to 7 o'clock is determined as the first target storage space. For each delay task of the first target storage space, the processor may determine whether there is a second storage space corresponding to a second time value of the execution time of the delay task in the minimum time unit. The second storage space is the storage space corresponding to the minimum time unit. If the minimum time unit is the second, the second storage space is a second-level storage space; if the minimum time unit is the day, the second storage space is a day-level storage space.
In the case that a second storage space corresponding to each of the delay tasks exists, the processor may allocate each delay task in the first target storage space to the corresponding second storage space. The processor may execute a delay task when the execution time point of that delay task in the second storage space is reached. For example, for the hour-level storage space T1 the earliest execution time point is 8:05:10; if the time difference between this earliest execution time point and the current time point equals the first preset difference, the hour-level storage space T1 can be taken as the first target storage space. If there is a second-level storage space corresponding to 10 seconds (for 8:05:10 and 8:06:10) and a second-level storage space corresponding to 15 seconds (for 8:05:15), delay tasks A and C can be allocated to the second-level storage space corresponding to 10 seconds, and delay task B to the second-level storage space corresponding to 15 seconds. When 8:05:10 is reached, delay task A may be executed. When 8:05:15 is reached, delay task B may be executed. When 8:06:10 is reached, delay task C may be executed.
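To make the two-level allocation above more concrete, the following is a minimal sketch in Python, assuming the storage spaces are plain in-memory dictionaries keyed by the hour value and the second value. The names (hour_spaces, second_spaces, store_task, split_due_hour_space) and the use of a threshold comparison for the first preset difference are illustrative assumptions, not the patent's actual interfaces.

```python
from collections import defaultdict
from datetime import datetime

# Minimal sketch of the two-level allocation described above, assuming the
# storage spaces are plain in-memory dictionaries. Real embodiments could use
# any keyed store; all names here are illustrative only.

hour_spaces = defaultdict(list)    # first storage spaces, keyed by the hour value
second_spaces = defaultdict(list)  # second storage spaces, keyed by the second value


def store_task(task, execute_at: datetime, now: datetime):
    """Place a delay task into an hour-level or second-level storage space."""
    if execute_at.hour != now.hour:
        # First time value differs from the current time value: park the task
        # in the first storage space for its hour.
        hour_spaces[execute_at.hour].append((execute_at, task))
    else:
        # Same hour: the task goes straight to the second-level space.
        second_spaces[execute_at.second].append((execute_at, task))


def split_due_hour_space(now: datetime, first_preset_diff_s: int = 3600):
    """Split hour-level spaces whose earliest task is about to become due."""
    for hour, tasks in list(hour_spaces.items()):
        earliest = min(t[0] for t in tasks)   # first earliest execution time point
        # The patent compares against a first preset difference; a threshold
        # comparison is used here for simplicity.
        if (earliest - now).total_seconds() <= first_preset_diff_s:
            for execute_at, task in tasks:    # split the first target storage space
                second_spaces[execute_at.second].append((execute_at, task))
            del hour_spaces[hour]             # release the hour-level space
```

The point of the sketch is only the two-level keying by hour and second; a real deployment could back the same structure with any keyed store.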
Through the above technical scheme, delay tasks can be stored hierarchically according to their execution time points and the time units of those execution time points. The delay tasks are thus stored in a more reasonable and orderly manner, redundancy of storage space is avoided, the execution time point of each delay task can be tracked accurately, the processing delays and task loss that occur when there are many delay tasks are avoided, and the processing efficiency of delay tasks is greatly improved.
In one embodiment, the time units of the execution time points of each delay task further comprise a plurality of intermediate time units between the maximum time unit and the minimum time unit, the method further comprising: before judging whether a second storage space corresponding to a second time value of the execution time of the delay task on a minimum time unit exists, selecting the first intermediate time unit from a plurality of intermediate time units as a target time unit based on the arrangement sequence from large to small; judging whether a third storage space corresponding to a third time value of the execution time of the delay task in a target time unit exists for each delay task of the first target storage space; if a third storage space corresponding to each delay task exists, distributing each delay task to the corresponding third storage space; determining a second earliest execution time point in a plurality of delay tasks of each third storage space, and determining a second time difference value between the current time point and each second earliest execution time point; for each second time difference, taking a third storage space corresponding to the second time difference as a second target storage space under the condition that the second time difference is a second preset difference; selecting an intermediate time unit arranged after the target time unit from a plurality of intermediate time units based on the arrangement order from large to small as a new target time unit; returning to the step of judging whether a third storage space corresponding to a third time value of the execution time of the delay task on the target time unit exists for each delay task of the first target storage space until each delay task in the first target storage space is allocated to the storage space corresponding to the last intermediate time unit in the arrangement sequence, and taking the minimum time unit as the target time unit.
The time units of the execution time point of each delay task further include a plurality of intermediate time units between the maximum time unit and the minimum time unit. For example, if the execution time point is 9 o'clock on the 20th day of the 2nd month of the 1st quarter, the maximum time unit is the quarter, the intermediate time units are the month and the day, and the minimum time unit is the hour. Before determining whether there is a second storage space corresponding to the second time value of the execution time of a delay task in the minimum time unit, the first intermediate time unit is selected from the plurality of intermediate time units as the target time unit based on the descending order of size.
For each of the latency tasks of the first target memory space, the processor may determine whether there is a third memory space corresponding to a third time value of an execution time of the latency task in a target time unit. In the case where there is a third memory space corresponding to each of the latency tasks, the processor may allocate each of the latency tasks to the corresponding third memory space. The third storage space refers to a storage space corresponding to the target time unit. If the target time unit is a month, the third storage space is a month-level storage space. The processor may determine a second earliest point in time of execution of a plurality of delayed tasks for each third memory space and determine a second time difference between the current point in time and each second earliest point in time of execution. For each second time difference, the processor may use a third storage space corresponding to the second time difference as the second target storage space in the case where the second time difference is a second preset difference.
The processor may select an intermediate time unit arranged after the target time unit from the plurality of intermediate time units as a new target time unit based on the arrangement order from large to small. Then, the processor may return to the step of determining, for each of the delay tasks of the first target storage space, whether there is a third storage space corresponding to a third time value of the execution time of the delay task in the target time unit, until each of the delay tasks in the first target storage space is allocated to the storage space corresponding to the last intermediate time unit in the arrangement sequence, and the minimum time unit is taken as the target time unit.
For example, suppose the first storage space stores delay tasks H, I, J and K: delay task H is to be executed at 9 o'clock on day 20 of month 2 of quarter 2, delay task I at 9 o'clock on day 20 of month 3 of quarter 2, delay task J at 7 o'clock on day 15 of month 3 of quarter 2, and delay task K at 8 o'clock on day 10 of month 1 of quarter 2. The first intermediate time unit in the descending order is the month. If there are third storage spaces corresponding to month 1, month 2 and month 3 of quarter 2, delay task K is stored in the month-level storage space corresponding to month 1, delay task H in the month-level storage space corresponding to month 2, and delay tasks I and J in the month-level storage space corresponding to month 3. The delay tasks in the month-level storage spaces may then be allocated further. For example, for the month-level storage space corresponding to month 3, once the condition is satisfied, delay task I may be further allocated to the day-level storage space corresponding to day 20 and delay task J to the day-level storage space corresponding to day 15; the splitting continues until delay task I is allocated to the hour-level storage space corresponding to 9 o'clock and delay task J to the hour-level storage space corresponding to 7 o'clock.
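As a rough illustration of how the split can cascade through the intermediate time units (quarter to month to day to hour) in the H/I/J/K example, the sketch below uses hypothetical dictionaries keyed by (unit, value); the layout of the execute_at field and the function names are assumptions for illustration only.

```python
# Hypothetical sketch of cascading the split through intermediate time units,
# following the H/I/J/K example above.

TIME_UNITS = ["quarter", "month", "day", "hour"]   # largest to smallest


def split_level(tasks, unit_index, spaces):
    """Distribute the tasks of one target storage space into the next unit's spaces."""
    unit = TIME_UNITS[unit_index]
    for task in tasks:
        key = (unit, task["execute_at"][unit])     # time value in the target time unit
        spaces.setdefault(key, []).append(task)


tasks = [  # delay tasks H, I, J, K from the example (all in quarter 2)
    {"name": "H", "execute_at": {"quarter": 2, "month": 2, "day": 20, "hour": 9}},
    {"name": "I", "execute_at": {"quarter": 2, "month": 3, "day": 20, "hour": 9}},
    {"name": "J", "execute_at": {"quarter": 2, "month": 3, "day": 15, "hour": 7}},
    {"name": "K", "execute_at": {"quarter": 2, "month": 1, "day": 10, "hour": 8}},
]

spaces = {}
split_level(tasks, 1, spaces)                   # quarter-level space -> month-level spaces
split_level(spaces[("month", 3)], 2, spaces)    # month 3 space -> day-level spaces (when due)
split_level(spaces[("day", 20)], 3, spaces)     # day 20 space -> hour-level spaces (when due)
# spaces[("hour", 9)] now holds delay task I, matching the example.
```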
In one embodiment, the method further comprises: in the absence of the first storage space or the second storage space corresponding to each of the delay tasks, a storage space corresponding to a time value of an execution time of each of the delay tasks in a corresponding time unit is created, and each of the delay tasks is allocated to the created corresponding storage space.
In the case where there is no first storage space or second storage space corresponding to each of the delay tasks, the processor may create a storage space corresponding to a time value of an execution time of each of the delay tasks in a corresponding time unit, and may allocate each of the delay tasks to the created corresponding storage space.
In one embodiment, the method further comprises: after each delay task in the first target storage space is allocated to the corresponding second storage space, deleting the first target storage space; or deleting the second storage space after executing all delay tasks in the second storage space for each second storage space.
After assigning each latency task in the first target memory space to a corresponding second memory space, the processor may delete the first target memory space. For each second memory space, the processor may delete the second memory space after performing all of the latency tasks in the second memory space.
According to the scheme, the corresponding storage space can be created when the corresponding storage space does not exist, the corresponding storage space can be deleted after the delay tasks in the storage space are distributed to other storage spaces, and the corresponding storage space can be deleted after all the delay tasks in the corresponding storage space are executed, so that redundancy of the storage space is avoided.
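A minimal sketch of the create-on-demand and delete-after-use behaviour follows, reusing the hypothetical hour_spaces/second_spaces layout from the earlier sketch; the function names are assumptions, not the patent's interfaces.

```python
# Sketch of creating a storage space on demand and releasing it when it is no
# longer needed.

def allocate(spaces: dict, key, entry):
    # Create the storage space for this time value if it does not exist yet,
    # then allocate the delay task into it.
    spaces.setdefault(key, []).append(entry)


def release_after_split(hour_spaces: dict, hour_key, second_spaces: dict):
    # Move every delay task out of the first target storage space ...
    for execute_at, task in hour_spaces.get(hour_key, []):
        allocate(second_spaces, execute_at.second, (execute_at, task))
    # ... then delete the first target storage space to avoid redundancy.
    hour_spaces.pop(hour_key, None)


def release_after_execution(second_spaces: dict, second_key, run):
    # Execute all delay tasks in the second storage space, then delete it.
    for _, task in second_spaces.get(second_key, []):
        run(task)
    second_spaces.pop(second_key, None)
```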
In one embodiment, the method further comprises: aiming at each delay task in the second storage space, acquiring the execution state of the delay task in the process of executing the delay task; and under the condition that the execution state of the delay task is abnormal, performing compensation processing on the delay task with the abnormality so as to re-execute the delay task with the abnormality.
For each delay task in the second storage space, the processor may acquire the execution state of the delay task while the delay task is being executed. The execution state may be normal or abnormal. In the case that the execution state of a delay task is abnormal, the processor may perform compensation processing on the abnormal delay task so that it is re-executed. Specifically, an additional thread may be started to re-execute the delay task when an exception exists. In the case that the execution state of the delay task is normal, the task completion time of the delay task may be updated.
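The compensation idea can be sketched as follows, assuming the delay task is a callable and that re-execution on an additional thread is acceptable; the function names are illustrative, not the patent's interfaces.

```python
import threading

# Sketch of the compensation idea: if execution raises an exception, an
# additional thread re-executes the delay task.

def execute_with_compensation(task, on_success=None):
    try:
        task()                         # normal execution path
        if on_success:
            on_success(task)           # e.g. update the task completion time
    except Exception:
        # Execution state is abnormal: start an extra thread to re-execute.
        threading.Thread(target=task, daemon=True).start()
```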
In one embodiment, the method further comprises: after each delay task in the first target storage space is allocated to the corresponding second storage space, judging whether each delay task carries an allocation success identifier or not; and compensating the delayed tasks which do not carry the successful allocation identification under the condition that the delayed tasks do not carry the successful allocation identification aiming at each delayed task in the first target storage space so as to re-allocate the delayed tasks.
After each delay task in the first target storage space is allocated to the corresponding second storage space, the processor may determine whether each delay task carries an allocation success identifier. For each delay task in the first target storage space, if the delay task does not carry the allocation success identifier, the processor may perform compensation processing on that delay task so that it is allocated again. In this scheme, whether a delay task needs compensation is determined by checking whether it carries the allocation success identifier, which prevents the delay task from being lost.
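A small sketch of the allocation-success check follows, assuming each delay task record carries a hypothetical allocated_ok flag that the splitting step sets on success; the flag name and function names are assumptions.

```python
# Sketch of the allocation-success identifier check.

def compensate_unallocated(first_target_space, reallocate):
    """Re-allocate every delay task that does not carry the success identifier."""
    for task in first_target_space:
        if not task.get("allocated_ok"):
            reallocate(task)              # compensation: allocate the task again
            task["allocated_ok"] = True
```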
In one embodiment, the method further comprises: judging whether a second storage space corresponding to a second time value of the execution time of the delay task exists or not under the condition that the first time value is the same as the current time value; under the condition that a second storage space corresponding to each time delay task exists, the time delay tasks with the same second time value are distributed to the corresponding second storage space; and executing the delay task when the execution time point of any delay task in the second storage space is reached.
If the first time value is the same as the current time value, the corresponding delay task may need to be processed immediately. In this case, the processor may determine whether there is a second storage space corresponding to the second time value of the execution time of the delay task. In the case that a second storage space corresponding to each delay task exists, the processor may allocate the delay tasks having the same second time value to the corresponding second storage space. For example, if the execution time point of delay task A is 8:05:10, that of delay task B is 8:05:15, that of delay task C is 8:06:10, that of delay task D is 9:06:15, and the current time point is 8:00:00, delay tasks A and C may be allocated to the second-level storage space corresponding to 10 seconds, and delay tasks B and D may be allocated to the second-level storage space corresponding to 15 seconds. The processor may execute a delay task when the execution time point of that delay task in the second storage space is reached.
In one embodiment, the method further comprises: in the case that some delay tasks have a first time value different from the current time value while other delay tasks have a first time value the same as the current time value, allocating each delay task in the first target storage space to the corresponding second storage space through a locking mechanism.
When some delay tasks have a first time value different from the current time value and others have a first time value equal to the current time value, delay tasks from the first target storage space are being split into second storage spaces at the same time that newly acquired delay tasks with the same second time value are being allocated directly to second storage spaces. Both paths may write to the same second storage space, so the processor allocates each delay task in the first target storage space to the corresponding second storage space through a locking mechanism to prevent delay tasks from being overwritten. That is, when concurrent writes to the same second storage space occur, a lock is added: the thread that acquires the lock writes its data, and a thread that fails to acquire the lock waits and then tries again.
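The locking mechanism can be sketched with one lock per second-level storage space, so that the splitting path and the direct allocation path serialize their writes; the per-space lock granularity and all names are assumptions.

```python
import threading

# Sketch of the locking mechanism: both write paths may target the same
# second-level storage space, so one lock per space serializes the writes.

second_space_locks = {}   # one threading.Lock per second value


def locked_allocate(second_spaces: dict, second_key: int, entry):
    lock = second_space_locks.setdefault(second_key, threading.Lock())
    # A thread that does not hold the lock blocks at the `with` statement
    # until it can write.
    with lock:
        second_spaces.setdefault(second_key, []).append(entry)
```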
In one embodiment, as shown in FIG. 2, a flow diagram of another method for processing a delayed task is provided.
First, a task, i.e., a delay task, is generated. It is then determined whether the task execution time falls within the same hour as, or the hour immediately after, the current time. If so, the task is stored in a storage space created by second and waits to be executed; tasks executed in the same second are stored in the same storage space. The task scheduler pulls the tasks from the corresponding second-level storage space, submits them to the task executor, and frees the storage space; the task is then executed. If the task execution time is not within the same hour or the next hour of the current time, the task is stored in a storage space created by hour and waits to be split; tasks executed in the same hour are stored in the same storage space. The task splitting service takes the hour-level tasks to be executed in the next hour and splits them into the corresponding second-level storage spaces, after which the current storage space, i.e. the split hour-level storage space, is released. For example, if the current time is 3:00 and a delay task is to be executed at 4:12:10, the task is initially stored in the hour-level storage space corresponding to 4 o'clock. When that storage space needs to be split, the task is moved into a newly created storage space for 4:12:10, and the hour-level storage space corresponding to 4 o'clock is released. Further, when there is an abnormality in the hour-level task splitting, a detection and compensation service can be provided; a detection and compensation service can likewise be provided when the second-level task scheduling service has an exception. A locking mechanism may be added when data is written to the same second-level storage space concurrently. In order to avoid a backlog of delay tasks in high-concurrency scenarios, an asynchronous scheme may be employed to execute tasks.
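The overall flow of FIG. 2 could be wired together roughly as below, reusing the second_spaces dictionary and split_due_hour_space function from the earlier sketch; the polling intervals, thread-pool size and service names are illustrative assumptions.

```python
import time
from concurrent.futures import ThreadPoolExecutor
from datetime import datetime

# Sketch of the FIG. 2 flow: a splitting service moves the next hour's tasks
# down to second-level spaces, and a scheduling service pulls due tasks and
# hands them to an executor asynchronously.

executor = ThreadPoolExecutor(max_workers=8)       # asynchronous task execution


def splitting_service():
    # Periodically split hour-level spaces whose tasks fall due in the next hour.
    while True:
        split_due_hour_space(datetime.now())
        time.sleep(60)


def scheduling_service(run):
    # Pull due delay tasks from second-level spaces, submit them asynchronously,
    # and release each space once it is empty.
    while True:
        now = datetime.now()
        for key in list(second_spaces):
            remaining = []
            for execute_at, task in second_spaces[key]:
                if execute_at <= now:
                    executor.submit(run, task)      # asynchronous, avoids backlog
                else:
                    remaining.append((execute_at, task))
            if remaining:
                second_spaces[key] = remaining
            else:
                second_spaces.pop(key, None)        # release the emptied space
        time.sleep(1)
```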
Through the above technical scheme, delay tasks can be stored hierarchically according to their execution time points and the time units of those execution time points. The delay tasks are thus stored in a more reasonable and orderly manner, redundancy of storage space is avoided, the execution time point of each delay task can be tracked accurately, the processing delays and task loss that occur when there are many delay tasks are avoided, and the processing efficiency of delay tasks is greatly improved. Meanwhile, a corresponding storage space can be created when it does not yet exist, deleted after the delay tasks in it have been allocated to other storage spaces, and deleted after all the delay tasks in it have been executed, so that redundancy of storage space is avoided. Whether a delay task needs compensation is determined by checking whether it carries the allocation success identifier, which prevents the delay task from being lost.
FIG. 1 and FIG. 2 are flow diagrams of a method for processing a delay task in one embodiment. It should be understood that, although the steps in the flowcharts of FIG. 1 and FIG. 2 are shown in the order indicated by the arrows, these steps are not necessarily performed in that order. Unless explicitly stated herein, the execution order of the steps is not strictly limited, and the steps may be executed in other orders. Moreover, at least some of the steps in FIG. 1 and FIG. 2 may include multiple sub-steps or stages that are not necessarily completed at the same time but may be performed at different times; their execution order is not necessarily sequential, and they may be performed in turn or alternately with at least part of the sub-steps or stages of other steps.
In one embodiment, as shown in fig. 3, an apparatus 300 for processing delayed tasks is provided, comprising a task storage module 301, a task splitting module 302, a task scheduling module 303, and a task execution module 304, wherein:
a task storage module 301, configured to obtain a plurality of delay tasks to be executed; judging whether the first time value of the execution time point of each time delay task on the maximum time unit is the same as the current time value of the current time point on the maximum time unit or not; and under the condition that the first time value is different from the current time value, the time delay task with the same first time value is distributed to a first storage space corresponding to the first time value.
A task splitting module 302, configured to determine a first earliest execution time point in the plurality of delayed tasks in each first storage space, and determine a first time difference value between the current time point and each first earliest execution time point; for each first time difference, taking a first storage space corresponding to the first time difference as a first target storage space under the condition that the first time difference is a first preset difference; judging whether a second storage space corresponding to a second time value of the execution time of the delay task in a minimum time unit exists for each delay task of the first target storage space; and in the case that the second storage space corresponding to each delay task exists, distributing each delay task in the first target storage space to the corresponding second storage space.
The task scheduling module 303 is configured to send the delayed task to the task execution module when the execution time point of any one of the delayed tasks in the second storage space is reached.
The task execution module 304 is configured to receive and execute the delay task whose execution time point in the second storage space has been reached.
A delay task is a task that needs to be executed after a period of delay. For example, if a piece of information (such as a consultation message) needs to be released at a given time point but the user is not logged in at that point, a delay task can be adopted to release it with a delay. For another example, if a red packet has been sent out but has not been claimed within 24 hours, the red packet refund service needs to be executed with a delay. The task storage module 301 may obtain a plurality of delay tasks to be executed. The task storage module 301 may then determine whether the first time value of the execution time point of each delay task in the maximum time unit is the same as the current time value of the current time point in the maximum time unit. For example, if the execution time point is 8:05:10 (8 o'clock, 5 minutes, 10 seconds) and the maximum time unit is the hour, the time value in the maximum time unit is 8. For another example, if the execution time point is 9 o'clock on day 20 of month 2 of quarter 1 and the maximum time unit is the quarter, the time value in the maximum time unit is 1.
If the first time value is different from the current time value, the corresponding delay task may not need to be processed immediately. In this case, the task storage module 301 may allocate the delay tasks having the same first time value to the first storage space corresponding to that first time value. The first storage space is the storage space corresponding to the maximum time unit. If the maximum time unit is the hour, the first storage space is an hour-level storage space; if the maximum time unit is the quarter, the first storage space is a quarter-level storage space. For example, if the execution time point of delay task A is 8:05:10, that of delay task B is 8:05:15, that of delay task C is 8:06:10, that of delay task D is 9:06:15, and the current time point is 7:58:05, then delay tasks A, B and C can be allocated to the hour-level storage space T1 corresponding to 8 o'clock, and delay task D can be allocated to the hour-level storage space T2 corresponding to 9 o'clock.
Further, the task splitting module 302 may determine a first earliest execution time point among the plurality of delay tasks of each first storage space and determine a first time difference between the current time point and each first earliest execution time point. For each first time difference, in the case that the first time difference is the first preset difference, the task splitting module 302 may take the first storage space corresponding to that first time difference as the first target storage space. For each delay task of the first target storage space, the task splitting module 302 may determine whether there is a second storage space corresponding to a second time value of the execution time of the delay task in the minimum time unit. The second storage space is the storage space corresponding to the minimum time unit. If the minimum time unit is the second, the second storage space is a second-level storage space; if the minimum time unit is the day, the second storage space is a day-level storage space. In the case that a second storage space corresponding to each delay task exists, the task splitting module 302 may allocate each delay task in the first target storage space to the corresponding second storage space.
The task scheduling module 303 sends a delay task to the task execution module 304 when the execution time point of that delay task in the second storage space is reached. The task execution module 304 receives and executes the delay task whose execution time point has been reached. For example, for the hour-level storage space T1 the earliest execution time point is 8:05:10; if the time difference between this earliest execution time point and the current time point equals the first preset difference, the hour-level storage space T1 can be taken as the first target storage space. If there is a second-level storage space corresponding to 10 seconds and a second-level storage space corresponding to 15 seconds, delay tasks A and C can be allocated to the second-level storage space corresponding to 10 seconds, and delay task B to the second-level storage space corresponding to 15 seconds. When 8:05:10 is reached, delay task A may be executed. When 8:05:15 is reached, delay task B may be executed. When 8:06:10 is reached, delay task C may be executed.
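Purely for illustration, the four modules of apparatus 300 could be arranged as thin classes that delegate to the helper functions and dictionaries sketched earlier; the class and method names are assumptions, not the patent's actual interfaces.

```python
# Hypothetical skeleton of apparatus 300 as four cooperating modules.

class TaskStorageModule:
    def store(self, task, execute_at, now):
        store_task(task, execute_at, now)        # route to hour-level or second-level space


class TaskSplittingModule:
    def split(self, now):
        split_due_hour_space(now)                # split the first target storage space


class TaskExecutionModule:
    def execute(self, task):
        task()                                   # the delay task is assumed to be callable


class TaskSchedulingModule:
    def __init__(self, execution: TaskExecutionModule):
        self.execution = execution

    def dispatch_due(self, now):
        # Send delay tasks whose execution time point has been reached to the executor.
        for key in list(second_spaces):
            due = [(t, task) for (t, task) in second_spaces[key] if t <= now]
            for entry in due:
                second_spaces[key].remove(entry)
                self.execution.execute(entry[1])
```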
Through the above technical scheme, delay tasks can be stored hierarchically according to their execution time points and the time units of those execution time points. The delay tasks are thus stored in a more reasonable and orderly manner, redundancy of storage space is avoided, the execution time point of each delay task can be tracked accurately, the processing delays and task loss that occur when there are many delay tasks are avoided, and the processing efficiency of delay tasks is greatly improved.
In one embodiment, the task storage module 301 is further configured to: in the absence of the first storage space or the second storage space corresponding to each of the delay tasks, a storage space corresponding to a time value of an execution time of each of the delay tasks in a corresponding time unit is created, and each of the delay tasks is allocated to the created corresponding storage space.
In one embodiment, the task storage module 301 is further configured to: after each delay task in the first target storage space is allocated to the corresponding second storage space, deleting the first target storage space; or deleting the second storage space after executing all delay tasks in the second storage space for each second storage space.
In one embodiment, the task storage module 301 is further configured to: judging whether a second storage space corresponding to a second time value of the execution time of the delay task exists or not under the condition that the first time value is the same as the current time value; under the condition that a second storage space corresponding to each time delay task exists, the time delay tasks with the same second time value are distributed to the corresponding second storage space; and executing the delay task when the execution time point of any delay task in the second storage space is reached.
In one embodiment, the task splitting module 302 is further configured to: before judging whether a second storage space corresponding to a second time value of the execution time of the delay task on a minimum time unit exists, selecting the first intermediate time unit from a plurality of intermediate time units as a target time unit based on the arrangement sequence from large to small; judging whether a third storage space corresponding to a third time value of the execution time of the delay task in a target time unit exists for each delay task of the first target storage space; if a third storage space corresponding to each delay task exists, distributing each delay task to the corresponding third storage space; determining a second earliest execution time point in a plurality of delay tasks of each third storage space, and determining a second time difference value between the current time point and each second earliest execution time point; for each second time difference, taking a third storage space corresponding to the second time difference as a second target storage space under the condition that the second time difference is a second preset difference; selecting an intermediate time unit arranged after the target time unit from a plurality of intermediate time units based on the arrangement order from large to small as a new target time unit; returning to the step of judging whether a third storage space corresponding to a third time value of the execution time of the delay task on the target time unit exists for each delay task of the first target storage space until each delay task in the first target storage space is allocated to the storage space corresponding to the last intermediate time unit in the arrangement sequence, and taking the minimum time unit as the target time unit.
In one embodiment, the task splitting module 302 is further configured to: in the case that some delay tasks have a first time value different from the current time value while other delay tasks have a first time value the same as the current time value, allocate each delay task in the first target storage space to the corresponding second storage space through a locking mechanism.
In one embodiment, the apparatus 300 for processing the delayed tasks further comprises: monitoring compensation module 305, monitoring compensation module 305 is used for: aiming at each delay task in the second storage space, acquiring the execution state of the delay task in the process of executing the delay task; and under the condition that the execution state of the delay task is abnormal, performing compensation processing on the delay task with the abnormality so as to re-execute the delay task with the abnormality.
In one embodiment, the monitor compensation module 305 is further configured to: after each delay task in the first target storage space is allocated to the corresponding second storage space, judging whether each delay task carries an allocation success identifier or not; and compensating the delayed tasks which do not carry the successful allocation identification under the condition that the delayed tasks do not carry the successful allocation identification aiming at each delayed task in the first target storage space so as to re-allocate the delayed tasks.
The device for processing the delay task comprises a processor and a memory, wherein the task storage module 301, the task splitting module 302, the task scheduling module 303, the task execution module 304, the monitoring compensation module 305 and the like are all stored in the memory as program units, and the processor executes the program modules stored in the memory to realize corresponding functions.
The processor includes a kernel, and the kernel fetches the corresponding program unit from the memory. The kernel may be provided with one or more kernel parameters to implement the method for processing delay tasks.
The memory may include forms such as volatile memory, random access memory (RAM), and/or non-volatile memory in computer-readable media, for example read-only memory (ROM) or flash memory (flash RAM). The memory includes at least one memory chip.
In one embodiment, a storage medium is provided having a program stored thereon that when executed by a processor implements the above-described method for handling latency tasks.
In one embodiment, a processor is provided for running a program, wherein the program, when running, performs the method for processing latency tasks described above.
In one embodiment, a computer device is provided, which may be a server, the internal structure of which may be as shown in fig. 4. The computer device includes a processor a01, a network interface a02, a memory (not shown) and a database (not shown) connected by a system bus. Wherein the processor a01 of the computer device is adapted to provide computing and control capabilities. The memory of the computer device includes internal memory a03 and nonvolatile storage medium a04. The nonvolatile storage medium a04 stores an operating system B01, a computer program B02, and a database (not shown in the figure). The internal memory a03 provides an environment for the operation of the operating system B01 and the computer program B02 in the nonvolatile storage medium a04. The database of the computer device is used for storing data such as execution time points of the delay tasks. The network interface a02 of the computer device is used for communication with an external terminal through a network connection. The computer program B02, when executed by the processor a01, implements a method for handling latency tasks.
Those skilled in the art will appreciate that the structure shown in FIG. 4 is a block diagram only and does not constitute a limitation on the computer device to which the present solution is applied; a particular computer device may include more or fewer components than shown, combine certain components, or have a different arrangement of components.
The embodiment of the application provides equipment, which comprises a processor, a memory and a program stored on the memory and capable of running on the processor, wherein the processor executes the program to realize the following steps: acquiring a plurality of delay tasks to be executed; judging whether the first time value of the execution time point of each time delay task on the maximum time unit is the same as the current time value of the current time point on the maximum time unit or not; under the condition that the first time value is different from the current time value, the same time delay task of the first time value is distributed to a first storage space corresponding to the first time value; determining a first earliest execution time point in a plurality of delay tasks of each first storage space, and determining a first time difference value between the current time point and each first earliest execution time point; for each first time difference, taking a first storage space corresponding to the first time difference as a first target storage space under the condition that the first time difference is a first preset difference; judging whether a second storage space corresponding to a second time value of the execution time of the delay task in a minimum time unit exists for each delay task of the first target storage space; under the condition that a second storage space corresponding to each delay task exists, each delay task in the first target storage space is distributed to the corresponding second storage space; and executing the delay task when the execution time point of any delay task in the second storage space is reached.
In one embodiment, the time units of the execution time points of each delay task further comprise a plurality of intermediate time units between the maximum time unit and the minimum time unit, and the method further comprises: before judging whether a second storage space corresponding to a second time value of the execution time of the delay task in a minimum time unit exists, selecting, from the plurality of intermediate time units arranged from largest to smallest, the intermediate time unit arranged first as a target time unit; judging, for each delay task of the first target storage space, whether a third storage space corresponding to a third time value of the execution time of the delay task in the target time unit exists; if the third storage space corresponding to each delay task exists, distributing each delay task to the corresponding third storage space; determining a second earliest execution time point among the plurality of delay tasks of each third storage space, and determining a second time difference between the current time point and each second earliest execution time point; for each second time difference, taking the third storage space corresponding to the second time difference as a second target storage space under the condition that the second time difference is a second preset difference; selecting, from the plurality of intermediate time units arranged from largest to smallest, the intermediate time unit arranged after the current target time unit as a new target time unit; and returning to the step of judging whether a third storage space corresponding to a third time value of the execution time of the delay task in the target time unit exists for each delay task of the first target storage space, until each delay task in the first target storage space has been allocated to the storage space corresponding to the last intermediate time unit in the arrangement order, after which the minimum time unit is taken as the target time unit.
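The cascade through intermediate time units can be sketched in the same spirit; the snippet below assumes a single intermediate unit (the minute) between the hour and the second, with the understanding that additional intermediate units would simply repeat the same allocate-and-promote step from largest to smallest. The 60-second preset difference and the data layout are assumptions.

```python
import time
from collections import defaultdict

SECOND_PRESET_DIFF = 60            # illustrative second preset difference, in seconds

def cascade(first_target_tasks, now):
    """Allocate tasks into minute-level (third) spaces, then promote ripe buckets to second-level spaces."""
    minute_spaces = defaultdict(list)          # third storage spaces (intermediate unit)
    for task in first_target_tasks:
        minute_spaces[time.localtime(task["execute_at"]).tm_min].append(task)

    second_spaces = defaultdict(list)          # second storage spaces (minimum unit)
    for value, bucket in list(minute_spaces.items()):
        earliest = min(t["execute_at"] for t in bucket)    # second earliest execution time point
        if earliest - now <= SECOND_PRESET_DIFF:           # second preset difference reached
            for task in bucket:
                second_spaces[time.localtime(task["execute_at"]).tm_sec].append(task)
            del minute_spaces[value]                       # this ripe space has been drained
    return minute_spaces, second_spaces

# Usage: a task 30 seconds out is promoted to the minimum unit, while a task
# 30 minutes out stays parked in its minute-level space for a later pass.
now = time.time()
parked, ready = cascade([{"execute_at": now + 30}, {"execute_at": now + 1800}], now)
print(len(parked), len(ready))
```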
In one embodiment, the method further comprises: in the absence of the first storage space or the second storage space corresponding to each of the delay tasks, a storage space corresponding to a time value of an execution time of each of the delay tasks in a corresponding time unit is created, and each of the delay tasks is allocated to the created corresponding storage space.
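A minimal sketch of this create-if-absent behaviour, using an ordinary dictionary so the creation step is explicit; the names are illustrative only.

```python
# Create a storage space on demand when none exists for a task's time value
# in the relevant time unit, then allocate the task into it.
spaces = {}

def allocate_with_create(task, time_value):
    if time_value not in spaces:      # no storage space for this time value yet
        spaces[time_value] = []       # create it, then allocate the task into it
    spaces[time_value].append(task)

allocate_with_create({"id": 1}, 17)
print(spaces)                          # {17: [{'id': 1}]}
```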
In one embodiment, the method further comprises: deleting the first target storage space after each delay task in the first target storage space is allocated to the corresponding second storage space; or, for each second storage space, deleting the second storage space after all delay tasks in the second storage space have been executed.
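Both cleanup paths can be sketched as follows, continuing the assumed dictionary-of-lists layout from the earlier sketches; the function names are illustrative.

```python
# Drop a first target storage space once its tasks have been re-allocated, or
# drop a second storage space once every task in it has been executed.
def delete_after_reallocation(first_spaces, hour_value):
    first_spaces.pop(hour_value, None)        # the first target storage space is no longer needed

def delete_after_execution(second_spaces, second_value):
    if not second_spaces.get(second_value):   # every delay task in the space has run (bucket empty)
        second_spaces.pop(second_value, None)

first_spaces, second_spaces = {10: []}, {30: []}
delete_after_reallocation(first_spaces, 10)
delete_after_execution(second_spaces, 30)
print(first_spaces, second_spaces)            # {} {}
```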
In one embodiment, the method further comprises: for each delay task in the second storage space, acquiring the execution state of the delay task in the process of executing the delay task; and under the condition that the execution state of the delay task is abnormal, performing compensation processing on the delay task with the abnormality so as to re-execute the delay task with the abnormality.
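One hedged way to read this compensation processing is as a bounded retry around task execution, as in the sketch below; the retry limit and the logging are assumptions, not details of the application.

```python
import logging

logging.basicConfig(level=logging.INFO)

def execute_with_compensation(task, max_retries=3):
    """Re-execute a delay task whose execution state turned abnormal, up to max_retries times."""
    for attempt in range(1 + max_retries):
        try:
            task["action"]()
            return True                               # execution state: normal
        except Exception:                             # execution state: abnormal
            logging.info("task %s failed on attempt %d, compensating",
                         task.get("id"), attempt + 1)
    return False                                      # still abnormal after compensation

# Usage: a task that fails once and succeeds on the compensating re-execution.
calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] == 1:
        raise RuntimeError("transient failure")
print(execute_with_compensation({"id": 7, "action": flaky}))   # True
```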
In one embodiment, the method further comprises: after each delay task in the first target storage space is allocated to the corresponding second storage space, judging whether each delay task carries an allocation success identifier; and, for each delay task in the first target storage space, under the condition that the delay task does not carry the allocation success identifier, performing compensation processing on the delay task that does not carry the allocation success identifier so as to re-allocate the delay task.
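A minimal sketch of this check, assuming the allocation success identifier is a boolean field (here called allocated_ok, a hypothetical name) set when a task lands in a second storage space.

```python
def allocate_to_second(task, second_spaces, key):
    second_spaces.setdefault(key, []).append(task)
    task["allocated_ok"] = True                    # allocation success identifier (assumed field)

def compensate_unallocated(first_target_tasks, second_spaces, key_fn):
    """Re-allocate any task that did not receive the allocation success identifier."""
    for task in first_target_tasks:
        if not task.get("allocated_ok"):           # identifier missing: re-allocate the task
            allocate_to_second(task, second_spaces, key_fn(task))

# Usage: task 2 missed its identifier during the first pass and is re-allocated.
tasks = [{"id": 1, "allocated_ok": True}, {"id": 2}]
spaces = {}
compensate_unallocated(tasks, spaces, key_fn=lambda t: 0)
print(spaces)                                       # {0: [{'id': 2, 'allocated_ok': True}]}
```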
In one embodiment, the method further comprises: judging whether a second storage space corresponding to a second time value of the execution time of the delay task exists under the condition that the first time value is the same as the current time value; under the condition that the second storage space corresponding to each delay task exists, distributing the delay tasks with the same second time value to the corresponding second storage space; and executing the delay task when the execution time point of any delay task in the second storage space is reached.
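This same-value fast path can be sketched as follows, again assuming the maximum time unit is the hour and the minimum time unit is the second; the unit choices are illustrative.

```python
import time
from collections import defaultdict

second_spaces = defaultdict(list)

def allocate_same_hour(task, now):
    """Place a task directly into a second storage space when its hour value matches the current hour."""
    task_t, now_t = time.localtime(task["execute_at"]), time.localtime(now)
    if task_t.tm_hour == now_t.tm_hour:            # first time value == current time value
        second_spaces[task_t.tm_sec].append(task)  # straight into the minimum-unit space
        return True
    return False

# Usage: a task a few seconds out normally shares the current hour value.
now = time.time()
print(allocate_same_hour({"execute_at": now + 5}, now))
```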
In one embodiment, the method further comprises: in both the case where the first time value is different from the current time value and the case where the first time value is the same as the current time value, distributing each delay task in the first target storage space to the corresponding second storage space through a locking mechanism.
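A minimal sketch of such a locking mechanism, assuming a single in-process lock guarding the second storage spaces; a per-space or distributed lock would equally fit the description, and the choice here is an assumption.

```python
import threading
from collections import defaultdict

second_spaces = defaultdict(list)
spaces_lock = threading.Lock()

def allocate_locked(task, key):
    """Serialize writes so both allocation paths can target the second storage spaces safely."""
    with spaces_lock:                     # only one allocator mutates the spaces at a time
        second_spaces[key].append(task)

# Usage: several allocators writing concurrently still produce consistent spaces.
threads = [threading.Thread(target=allocate_locked, args=({"id": i}, i % 2))
           for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print({k: len(v) for k, v in second_spaces.items()})   # two tasks per key
```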
The present application also provides a computer program product adapted to execute a program which, when run on a data processing apparatus, is initialized with the steps of the above-described method for processing delay tasks.
It will be appreciated by those skilled in the art that embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
In one typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include volatile memory in a computer-readable medium, such as Random Access Memory (RAM), and/or nonvolatile memory, such as Read-Only Memory (ROM) or flash RAM. Memory is an example of a computer-readable medium.
Computer-readable media, including both permanent and non-permanent, removable and non-removable media, may implement information storage by any method or technology. The information may be computer-readable instructions, data structures, modules of a program, or other data. Examples of storage media for a computer include, but are not limited to, Phase-change Memory (PRAM), Static Random Access Memory (SRAM), Dynamic Random Access Memory (DRAM), other types of Random Access Memory (RAM), Read-Only Memory (ROM), Electrically Erasable Programmable Read-Only Memory (EEPROM), flash memory or other memory technology, Compact Disc Read-Only Memory (CD-ROM), Digital Versatile Discs (DVD) or other optical storage, magnetic cassettes, magnetic tape/magnetic disk storage or other magnetic storage devices, or any other non-transmission medium, which can be used to store information that can be accessed by a computing device. Computer-readable media, as defined herein, do not include transitory computer-readable media (transitory media), such as modulated data signals and carrier waves.
It should also be noted that the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other like elements in the process, method, article, or apparatus that comprises that element.
The foregoing is merely exemplary of the present application and is not intended to limit the present application. Various modifications and changes may be made to the present application by those skilled in the art. Any modifications, equivalent substitutions, improvements, etc. which are within the spirit and principles of the present application are intended to be included within the scope of the claims of the present application.
Claims (10)
1. A method for processing a delay task, the method comprising:
acquiring a plurality of delay tasks to be executed;
judging whether the first time value of the execution time point of each delay task on the maximum time unit is the same as the current time value of the current time point on the maximum time unit;
under the condition that the first time value is different from the current time value, the delay tasks with the same first time value are distributed to a first storage space corresponding to the first time value;
determining a first earliest execution time point in a plurality of delay tasks of each first storage space, and determining a first time difference value between the current time point and each first earliest execution time point;
for each first time difference value, taking a first storage space corresponding to the first time difference value as a first target storage space under the condition that the first time difference value is a first preset difference value;
judging whether a second storage space corresponding to a second time value of the execution time of the delay task on a minimum time unit exists for each delay task of the first target storage space;
under the condition that a second storage space corresponding to each delay task exists, each delay task in the first target storage space is distributed to the corresponding second storage space;
and executing the delay task under the condition that the execution time point of any delay task in the second storage space is reached.
2. The method for processing a delay task of claim 1, wherein the time units of the execution time points of each delay task further comprise a plurality of intermediate time units between the maximum time unit and the minimum time unit, the method further comprising:
before judging whether a second storage space corresponding to a second time value of the execution time of the delay task on a minimum time unit exists, selecting, from the plurality of intermediate time units arranged from largest to smallest, the intermediate time unit arranged first as a target time unit;
judging whether a third storage space corresponding to a third time value of the execution time of the delay task on the target time unit exists for each delay task of the first target storage space;
if a third storage space corresponding to each delay task exists, distributing each delay task to the corresponding third storage space;
determining a second earliest execution time point in a plurality of delay tasks of each third storage space, and determining a second time difference value between the current time point and each second earliest execution time point;
for each second time difference value, taking a third storage space corresponding to the second time difference value as a second target storage space when the second time difference value is a second preset difference value;
selecting an intermediate time unit arranged after the target time unit from the plurality of intermediate time units as a new target time unit based on the arrangement order from large to small;
returning to the step of judging whether a third storage space corresponding to a third time value of the execution time of the delay task on the target time unit exists for each delay task of the first target storage space until each delay task in the first target storage space is allocated to the storage space corresponding to the last intermediate time unit in the arrangement sequence, and taking the minimum time unit as a target time unit.
3. The method for processing a delay task as recited in claim 1, wherein the method further comprises:
in the absence of the first storage space or the second storage space corresponding to each of the delay tasks, a storage space corresponding to a time value of an execution time of each of the delay tasks in a corresponding time unit is created, and each of the delay tasks is allocated to the created corresponding storage space.
4. The method for processing a delay task as recited in claim 1, wherein the method further comprises:
deleting the first target storage space after each delay task in the first target storage space is allocated to a corresponding second storage space; or
for each second storage space, deleting the second storage space after all delay tasks in the second storage space have been executed.
5. The method for processing a delay task as recited in claim 1, wherein the method further comprises:
for each delay task in the second storage space, acquiring the execution state of the delay task in the process of executing the delay task;
and under the condition that the execution state of the delay task is abnormal, performing compensation processing on the delay task with the abnormality so as to re-execute the delay task with the abnormality.
6. The method for processing a delay task as recited in claim 1, wherein the method further comprises:
after each delay task in the first target storage space is allocated to a corresponding second storage space, judging whether each delay task carries an allocation success identifier;
and, for each delay task in the first target storage space, under the condition that the delay task does not carry the allocation success identifier, performing compensation processing on the delay task that does not carry the allocation success identifier so as to re-allocate the delay task.
7. The method for processing a delay task as recited in claim 1, wherein the method further comprises:
judging whether a second storage space corresponding to a second time value of the execution time of the delay task exists under the condition that the first time value is the same as the current time value;
under the condition that the second storage space corresponding to each delay task exists, the delay tasks with the same second time value are distributed to the corresponding second storage space;
and executing the delay task under the condition that the execution time point of any delay task in the second storage space is reached.
8. The method for processing a delay task as recited in claim 7, wherein the method further comprises:
in both the case where the first time value is different from the current time value and the case where the first time value is the same as the current time value, distributing each delay task in the first target storage space to a corresponding second storage space through a locking mechanism.
9. A machine-readable storage medium having instructions stored thereon which, when executed by a processor, cause the processor to perform the method for processing a delay task according to any one of claims 1 to 8.
10. A processor configured to perform the method for processing a delay task according to any one of claims 1 to 8.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202311276902.3A CN117492947A (en) | 2023-09-28 | 2023-09-28 | Method for processing delay task, storage medium and processor |
Publications (1)
Publication Number | Publication Date |
---|---|
CN117492947A (en) | 2024-02-02
Family
ID=89675246
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202311276902.3A Pending CN117492947A (en) | 2023-09-28 | 2023-09-28 | Method for processing delay task, storage medium and processor |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN117492947A (en) |
Legal Events
Date | Code | Title | Description
---|---|---|---|
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |