CN114265845A - Processing method and device of delay task and electronic equipment - Google Patents

Processing method and device of delay task and electronic equipment

Info

Publication number
CN114265845A
Authority
CN
China
Prior art keywords
task
delay
time
processing
scale
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111611399.3A
Other languages
Chinese (zh)
Inventor
窦健
凌鹏
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Qiyue Information Technology Co Ltd
Original Assignee
Shanghai Qiyue Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Qiyue Information Technology Co Ltd filed Critical Shanghai Qiyue Information Technology Co Ltd
Priority to CN202111611399.3A priority Critical patent/CN114265845A/en
Publication of CN114265845A publication Critical patent/CN114265845A/en
Pending legal-status Critical Current

Abstract

The invention relates to the technical field of computers, and in particular to a method and a device for processing delay tasks and an electronic device. The method comprises the following steps: acquiring a first delay task comprising at least a delay time, a task body and a task ID; screening the first delay task to obtain a second delay task; storing the task ID corresponding to the second delay task, according to its delay time, into a time wheel whose cycle period is a preset time, and encapsulating the task IDs of all second delay tasks under each scale of the time wheel to obtain a task group ID; each time the time wheel passes a scale, matching the task group ID corresponding to that scale; and reading the task bodies corresponding to the task group ID and distributing them evenly to the nodes of a task processing server for task processing. By controlling the time wheel, the invention realizes lock-free batch processing of delay tasks and improves the processing efficiency of delay tasks.

Description

Processing method and device of delay task and electronic equipment
Technical Field
The invention relates to the technical field of computers, in particular to a method and a device for processing a delay task and electronic equipment.
Background
When executing business logic, asynchronous scenarios are often encountered in which the result is not needed immediately but the task must be executed after a certain time, for example: a Taobao order is automatically cancelled if it has not been paid within 1 hour, or a group-buying order is automatically refunded when the group-buying activity expires without the group being completed.
For such delay scenarios, when the data volume is small, a timed task can poll the database and take out the expired tasks for execution; however, in a distributed scenario with a large data volume, such a strategy is likely to cause system exceptions such as OOM (out of memory), so a method for solving the above problems is needed.
Disclosure of Invention
The invention provides a processing method and device of a delay task and electronic equipment, which are used for realizing the large-batch processing of the delay task and improving the processing efficiency of the delay task.
An embodiment of the present specification provides a method for processing a delay task, including:
acquiring a first delay task at least comprising delay time, a task body and a task ID;
screening the first delay task to obtain a second delay task;
storing the task ID corresponding to the second delay task into a time wheel with a cycle period as preset time according to the corresponding delay time, and encapsulating the task IDs corresponding to all the second delay tasks under each scale of the time wheel to obtain a task group ID;
when the time wheel passes through a scale, matching the task group ID corresponding to the scale;
and reading the task body corresponding to the task group ID, and uniformly distributing the task body corresponding to the task group ID to each node of a task processing server for task processing.
Preferably, after the first delay task is acquired, the method further comprises:
performing database and table sharding on the first delay task by combining macroscopic time increments with local hashing.
Preferably, the screening the first delay task includes:
checking the first delay task;
and when the first delay task is successfully verified, screening the delay task of which the delay time is less than the preloading time by using the time wheel node to obtain a second delay task.
Preferably, before storing the task ID corresponding to the second delay task into the time wheel with the cycle period as the preset time according to the corresponding delay time, the method includes:
and caching the task body and the task ID corresponding to the second delay task into a delay task database.
Preferably, the matching of the task group ID corresponding to the scale includes:
putting the task group ID under the position index of the corresponding time wheel scale according to the delay time;
and matching the task group ID corresponding to the scale according to the position index of the time wheel scale.
Preferably, the uniformly distributing the task bodies to each node of the task processing server for task processing includes:
and uniformly distributing the task bodies to each node of a task processing server by using message middleware for task processing, wherein the message middleware comprises RocketMQ and Kafka.
Preferably, the preset time is 60s, the time wheel has 300 scales, and the time wheel advances one scale every 200 ms.
Preferably, the time wheel node is responsible for the screening of a plurality of tables, each table being operable by only one time wheel node.
An embodiment of this specification further provides a processing apparatus for a delayed task, including:
the task obtaining module is used for obtaining a first delay task at least comprising delay time, a task body and a task ID;
the task screening module is used for screening the first delay task to obtain a second delay task;
the encapsulation module is used for storing the task ID corresponding to the second delay task into a time wheel with a cycle period as preset time according to the corresponding delay time, and encapsulating all the task IDs corresponding to the second delay tasks under each scale of the time wheel to obtain a task group ID;
the matching module is used for matching the task group ID corresponding to a scale when the time wheel passes through each scale;
and the task processing module is used for reading the task bodies corresponding to the task group IDs and uniformly distributing the task bodies corresponding to the task group IDs to each node of the task processing server for task processing.
An electronic device, wherein the electronic device comprises:
a processor and a memory storing a computer executable program which, when executed, causes the processor to perform any of the methods described above.
A computer readable storage medium, wherein the computer readable storage medium stores one or more programs which, when executed by a processor, implement the method of any of the above.
By controlling the time wheel, the invention realizes lock-free batch processing of delay tasks and improves the processing efficiency of the delay tasks.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the application and together with the description serve to explain the application and not to limit the application. In the drawings:
fig. 1 is a block diagram of a hardware structure of a mobile terminal for processing a delay task according to an embodiment of the present invention;
FIG. 2 is a schematic diagram illustrating a method for processing a delayed task according to an embodiment of the present disclosure;
fig. 3 is a flowchart of a delayed task generation method provided by an embodiment of the present disclosure;
FIG. 4 is a flowchart illustrating a delayed task preloading procedure according to an embodiment of the present disclosure;
fig. 5 is a flow chart of time wheel rotation provided in the embodiments of the present disclosure;
fig. 6 is an overall framework diagram of a delayed task processing provided by an embodiment of the present specification;
fig. 7 is a schematic structural diagram of a processing apparatus for a delayed task according to an embodiment of the present disclosure.
Detailed Description
Exemplary embodiments of the present invention will now be described more fully with reference to the accompanying drawings. The exemplary embodiments, however, may be embodied in many different forms and should not be construed as limited to the embodiments set forth herein. Rather, these exemplary embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the concept of the invention to those skilled in the art. The same reference numerals denote the same or similar elements, components, or parts in the drawings, and thus their repetitive description will be omitted.
Features, structures, characteristics or other details described in a particular embodiment do not preclude the fact that the features, structures, characteristics or other details may be combined in a suitable manner in one or more other embodiments in accordance with the technical idea of the invention.
In describing particular embodiments, the present invention has been described with reference to features, structures, characteristics or other details that are within the purview of one skilled in the art to provide a thorough understanding of the embodiments. One skilled in the relevant art will recognize, however, that the invention may be practiced without one or more of the specific features, structures, characteristics, or other details.
The diagrams depicted in the figures are exemplary only, and do not necessarily include all of the contents and operations/steps, nor do they necessarily have to be performed in the order depicted. For example, some operations/steps may be decomposed, and some operations/steps may be combined or partially combined, so that the actual execution sequence may be changed according to the actual situation.
The block diagrams shown in the figures are functional entities only and do not necessarily correspond to physically separate entities. I.e. these functional entities may be implemented in the form of software, or in one or more hardware modules or integrated circuits, or in different networks and/or processor means and/or microcontroller means.
The term "and/or" and/or "includes all combinations of any one or more of the associated listed items.
The method provided by the embodiment of the application can be executed in a mobile terminal, a computer terminal, a server or a similar operation device. Taking the example of operating on a mobile terminal, fig. 1 is a block diagram of a hardware structure of a mobile terminal for processing a delay task. As shown in fig. 1, the mobile terminal may include one or more (only one shown in fig. 1) processors 102 (the processor 102 may include, but is not limited to, a processing device such as a microprocessor MCU or a programmable logic device FPGA) and a memory 104 for storing data, and optionally may also include a transmission device 106 for communication functions and an input-output device 108. It will be understood by those skilled in the art that the structure shown in fig. 1 is only an illustration, and does not limit the structure of the mobile terminal. For example, the mobile terminal may also include more or fewer components than shown in FIG. 1, or have a different configuration than shown in FIG. 1.
The memory 104 may be used for storing a computer program, for example, a software program and a module of application software, such as a computer program corresponding to the delay task processing method in the embodiment of the present invention, and the processor 102 executes various functional applications and data processing by running the computer program stored in the memory 104, so as to implement the method described above. The memory 104 may include high speed random access memory, and may also include non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid-state memory. In some examples, the memory 104 may further include memory located remotely from the processor 102, which may be connected to the mobile terminal over a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The transmission device 106 is used for receiving or transmitting data via a network. Specific examples of the network described above may include a wireless network provided by a communication provider of the mobile terminal. In one example, the transmission device 106 includes a Network adapter (NIC), which can be connected to other Network devices through a base station so as to communicate with the internet. In one example, the transmission device 106 may be a Radio Frequency (RF) module, which is used for communicating with the internet in a wireless manner.
Referring to fig. 2, a schematic diagram of a processing method of a delayed task according to an embodiment of the present disclosure includes:
S201: acquiring a first delay task at least comprising delay time, a task body and a task ID;
In a preferred embodiment of the invention, a first delay task is obtained; the delay task is sent by a peripheral order service and includes, for example, a task that automatically cancels a Taobao order if it has not been paid within 1 hour, or a task that automatically refunds a group-buying order when the group-buying activity expires without success.
S202: screening the first delay task to obtain a second delay task;
In a preferred embodiment of the present invention, as shown in fig. 3, after the delay task is pushed to the warehousing thread pool, the delay task is verified, and the delay task that passes verification is persisted in a relational database; it is then determined whether the delay time of the delay task is less than the preload time, and if so, the delay task is pushed into the time wheel. Here, the delay time can be understood as the delayed execution time of the delay task.
S203: storing the task ID corresponding to the second delay task into a time wheel with a cycle period as preset time according to the corresponding delay time, and encapsulating all the task IDs corresponding to the second delay tasks under each scale of the time wheel to obtain a task group ID;
In a preferred embodiment of the present invention, assuming the preset time of the time wheel is 60s, the task IDs corresponding to the second delay tasks are written into the time wheel with a cycle period of 60s in the order of their execution times within that 60s, and the task IDs of all the second delay tasks at each scale of the time wheel are encapsulated to obtain a task group ID. When the delay tasks are subsequently executed, the corresponding task bodies can be read directly through the task group ID to complete the processing of the delay tasks. In this way, delay tasks are processed quickly and the business processing pressure under a large data volume is reduced.
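By way of illustration, the following minimal Java sketch groups task IDs under the scales of a 60s time wheel with 300 scales of 200 ms each, as described above; the class and method names (SimpleTimeWheel, addTaskId, taskGroupAt) are illustrative assumptions rather than the embodiment's actual implementation.

import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.CopyOnWriteArrayList;

public class SimpleTimeWheel {
    static final int SCALE_COUNT = 300;   // 300 scales per revolution
    static final long SCALE_MS = 200L;    // 200 ms per scale, so one revolution takes 60 s

    // scale index -> task group: the task IDs encapsulated under that scale
    private final Map<Integer, List<String>> wheel = new ConcurrentHashMap<>();

    // Map a delayed execution timestamp (epoch ms) onto a scale index inside the 60 s cycle.
    public int scaleIndexOf(long executeAtMillis) {
        long offsetInCycle = executeAtMillis % (SCALE_COUNT * SCALE_MS);
        return (int) (offsetInCycle / SCALE_MS);
    }

    // Store a task ID under the scale matching its delay time.
    public void addTaskId(long executeAtMillis, String taskId) {
        wheel.computeIfAbsent(scaleIndexOf(executeAtMillis), k -> new CopyOnWriteArrayList<>())
             .add(taskId);
    }

    // All task IDs grouped under one scale form the task group for that scale.
    public List<String> taskGroupAt(int scaleIndex) {
        return wheel.getOrDefault(scaleIndex, List.of());
    }
}

Reading taskGroupAt(scaleIndex) when the wheel reaches a scale then corresponds to matching the task group for that scale.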
S204: when the time wheel passes through a scale, matching the task group ID corresponding to the scale;
S205: reading the task body corresponding to the task group ID, and uniformly distributing the task body corresponding to the task group ID to each node of a task processing server for task processing.
In the preferred embodiment of the present invention, each time the time wheel passes a scale, the task group ID associated with that scale is calculated, so that all the delayed tasks to be executed immediately are taken from the Redis database and delivered to the message middleware. Although a certain delay also arises between the middleware sending a message and the message being consumed, this period is usually negligible. The reason the timing is not made more precise is that support for large data volumes must be considered; if the data volume is not large, the time wheel can be replaced with a min-heap.
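As a hedged sketch of the min-heap alternative mentioned above, which is suitable only when the data volume is small, the Java fragment below keeps tasks ordered by execution time in a PriorityQueue; the DelayedTask record and the method names are assumptions and not part of the disclosed system.

import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;
import java.util.PriorityQueue;

public class MinHeapDelayQueue {
    public record DelayedTask(String taskId, long executeAtMillis) {}

    private final PriorityQueue<DelayedTask> heap =
            new PriorityQueue<>(Comparator.comparingLong(DelayedTask::executeAtMillis));

    public synchronized void add(DelayedTask task) {
        heap.offer(task);
    }

    // Pop every task whose execution time has already been reached.
    public synchronized List<DelayedTask> pollDue(long nowMillis) {
        List<DelayedTask> due = new ArrayList<>();
        while (!heap.isEmpty() && heap.peek().executeAtMillis() <= nowMillis) {
            due.add(heap.poll());
        }
        return due;
    }
}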
Further, after the first delay task is obtained, the method includes:
performing database and table sharding on the first delay task by combining macroscopic time increments with local hashing.
In the preferred embodiment of the present invention, after the first delay task is obtained, it needs to be persisted; here the strongly consistent relational database MySQL is selected for storage, so that the stored delay tasks can be automatically rescanned after a restart, and the strong transaction mechanism ensures that written task data is not lost.
Furthermore, with a large data volume, database and table sharding is often needed. In the table design, tables are split macroscopically by increasing time, and within each time range the data is spread evenly across tables by hashing, which greatly reduces the number of tables the preload thread has to scan and avoids lock contention. Before a task is inserted into the database, the actual target table is located by combining preset rules with a consistent hashing algorithm, which reduces thread pressure.
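A minimal Java sketch of the described table routing follows: the macroscopic part of the table name grows with time while a local hash spreads tasks evenly within each time bucket; the table-name pattern delay_task_yyyyMMdd_N and the shard count are assumptions rather than the actual preset rules.

import java.time.LocalDate;
import java.time.format.DateTimeFormatter;

public class DelayTaskTableRouter {
    private static final int HASH_SHARDS = 8; // tables per time bucket (assumed)
    private static final DateTimeFormatter DAY = DateTimeFormatter.ofPattern("yyyyMMdd");

    // e.g. delay_task_20211227_3: the macro part grows with time, the suffix spreads load by hash.
    public static String tableFor(String taskId, LocalDate createDate) {
        int shard = Math.floorMod(taskId.hashCode(), HASH_SHARDS); // local hash part
        return "delay_task_" + createDate.format(DAY) + "_" + shard;
    }

    public static void main(String[] args) {
        System.out.println(tableFor("order-10086", LocalDate.of(2021, 12, 27)));
    }
}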
Further, the screening the first delay task includes:
checking the first delay task;
and when the first delay task is successfully verified, screening the delay task of which the delay time is less than the preloading time by using the time wheel node to obtain a second delay task.
In the preferred embodiment of the invention, checking the delay tasks filters out unqualified or abnormal delay tasks, which protects the system from malicious attacks and improves system security and data security. Before the time wheel polls to the delay time point of a delay task, the delay task data whose delay time is less than the preload time is loaded into the Redis memory in advance, so that the task data can be distributed efficiently.
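The screening rule described above, namely that a task becomes a second delay task only when it passes validation and its delay time falls within the preload window, can be sketched in Java as follows; the DelayTask record, the validation checks and the method names are illustrative assumptions.

public class DelayTaskScreener {
    public record DelayTask(String taskId, String body, long executeAtMillis) {}

    private final long preloadWindowMillis;

    public DelayTaskScreener(long preloadWindowMillis) {
        this.preloadWindowMillis = preloadWindowMillis;
    }

    // Reject malformed or suspicious tasks to protect the system.
    private boolean isValid(DelayTask t) {
        return t != null && t.taskId() != null && t.body() != null && t.executeAtMillis() > 0;
    }

    // A task is preloaded when it is valid and due within the preload window.
    public boolean shouldPreload(DelayTask t, long nowMillis) {
        return isValid(t) && t.executeAtMillis() - nowMillis < preloadWindowMillis;
    }
}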
Further, before storing the task ID corresponding to the second delay task into the time wheel with the cycle period as the preset time according to the corresponding delay time, the method includes:
and caching the task body and the task ID corresponding to the second delay task into a delay task database.
In the preferred embodiment of the invention, a delay task is split into a task ID and a message body; the task ID is read periodically through the time wheel, while the message body is stored in and read from the Redis database, so that no obvious delay occurs even under high concurrency.
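As a hedged illustration of this split, the Java sketch below caches the task body in Redis under its task ID so that only the ID needs to travel through the time wheel; it assumes the Jedis client and a local Redis instance, and the key prefix is an invented example.

import redis.clients.jedis.Jedis;

public class TaskBodyCache {
    private static final String KEY_PREFIX = "delay:task:body:";

    private final Jedis jedis = new Jedis("localhost", 6379);

    // The body is looked up again by ID when the wheel reaches its scale.
    public void cacheBody(String taskId, String body, int ttlSeconds) {
        jedis.setex(KEY_PREFIX + taskId, ttlSeconds, body);
    }

    public String readBody(String taskId) {
        return jedis.get(KEY_PREFIX + taskId);
    }
}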
Further, the matching the task group ID corresponding to the scale includes:
putting the task group ID under the position index of the corresponding time wheel scale according to the delay time;
and matching the task group ID corresponding to the scale according to the position index of the time wheel scale.
In the preferred embodiment of the present invention, as shown in fig. 4, the preload thread loads once every 60s by default; for example, if the current time is 15:00:00.000, all data that expires between 15:00:01.000 and 15:02:00.000 is preloaded. Preloading a delay task means screening the data to be delivered from the database and placing it in the cache; one node is responsible for the data screening of several tables, and one table can only be operated by one node. After the delay tasks are preloaded, the current time position index of the time wheel is calculated, the task IDs corresponding to the delay tasks are grouped, and the task bodies corresponding to the delay tasks are cached in the Redis database. It is then judged whether the current time position index of the time wheel is smaller than the task group ID position index: when the current time position index of the time wheel is smaller than the task group ID position index, the task bodies corresponding to the task group IDs under the current time position index of the time wheel are pushed; when the current time position index of the time wheel is greater than or equal to the task group ID position index, the task group ID is pushed into the time wheel and delivery waits for the time wheel to rotate until the current time position index of the time wheel is again smaller than the task group ID position index.
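The position-index arithmetic used above can be sketched in Java for a 60s cycle of 300 scales at 200 ms per scale; the constant and method names are assumptions. The example also shows why a task due in the near future can carry a smaller index than the current one once its execution time wraps past the end of the cycle, which is the situation the waiting branch above deals with.

public class PositionIndex {
    static final int SCALE_COUNT = 300;
    static final long SCALE_MS = 200L;
    static final long CYCLE_MS = SCALE_COUNT * SCALE_MS; // 60 000 ms per revolution

    // Index of the scale that a given timestamp (epoch ms) falls on.
    static int indexOf(long epochMillis) {
        return (int) ((epochMillis % CYCLE_MS) / SCALE_MS);
    }

    public static void main(String[] args) {
        long now = 1_640_589_650_000L;   // an arbitrary instant 50 s into its 60 s cycle -> index 250
        long due = now + 20_000L;        // a task due 20 s later wraps into the next cycle -> index 50
        System.out.println("current index = " + indexOf(now) + ", group index = " + indexOf(due));
    }
}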
As shown in fig. 5, when the time wheel rotates, the task body corresponding to the task group ID at the current position of the time wheel is read, a push task is generated and pushed to the delivery thread pool, and the push task is then distributed to the corresponding service machine for processing through the message middleware; when pushing a push task to the delivery thread pool fails, the push task is pushed to a retry thread pool to be pushed again, which ensures that the push task is consumed.
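A minimal Java sketch of the delivery and retry thread pools described above follows; the MiddlewareClient interface stands in for the real RocketMQ or Kafka sender, and the pool sizes and names are assumptions.

import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class TaskDeliveryPools {
    public interface MiddlewareClient { void push(String taskBody) throws Exception; }

    private final ExecutorService deliveryPool = Executors.newFixedThreadPool(8);
    private final ExecutorService retryPool = Executors.newFixedThreadPool(2);
    private final MiddlewareClient client;

    public TaskDeliveryPools(MiddlewareClient client) { this.client = client; }

    public void deliver(List<String> taskBodies) {
        for (String body : taskBodies) {
            deliveryPool.submit(() -> pushOrRetry(body));
        }
    }

    private void pushOrRetry(String body) {
        try {
            client.push(body);               // normal delivery path
        } catch (Exception firstFailure) {
            retryPool.submit(() -> {         // failed pushes go to the retry pool
                try {
                    client.push(body);
                } catch (Exception e) {
                    // In a real system the task would be re-queued or alerted on here.
                }
            });
        }
    }
}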
Further, the uniformly distributing the task bodies to each node of the task processing server for task processing includes:
and uniformly distributing the task bodies to each node of a task processing server by using message middleware for task processing, wherein the message middleware comprises RocketMQ and Kafka.
In the preferred embodiment of the present invention, for the distribution of the delay task data, message middleware with a single-machine throughput on the order of 100,000 messages per second, such as RocketMQ or Kafka, is selected, and the delay task data is evenly distributed to each service machine for processing.
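Since Kafka is one of the middleware options named above, the following Java sketch shows how a task body could be handed over through the standard Kafka producer API; the topic name delay-task-topic and the broker address are assumptions. Keying each record by task ID lets the broker spread records across partitions, and therefore across the consuming service nodes.

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class TaskDistributor {
    private final KafkaProducer<String, String> producer;

    public TaskDistributor(String bootstrapServers) {
        Properties props = new Properties();
        props.put("bootstrap.servers", bootstrapServers);
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        producer = new KafkaProducer<>(props);
    }

    // Send one task body; the task ID is used as the record key.
    public void send(String taskId, String taskBody) {
        producer.send(new ProducerRecord<>("delay-task-topic", taskId, taskBody));
    }

    public void close() {
        producer.close();
    }
}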
Further, the preset time is 60s, the time wheel has 300 scales, and the time wheel advances one scale every 200 ms.
In a preferred embodiment of the invention, one revolution of the time wheel may be designed to take 60s in order to reduce the complexity of the time wheel design, removing the extra overhead of maintaining the number of turns and the position for each task. In this case the time wheel has 300 scales and advances one scale every 200 ms, so each revolution of the time wheel takes 60s, which is consistent with the time span covered by preloading.
Further, the time wheel node is responsible for the screening of multiple tables, each table being operable by only one time wheel node.
In the preferred embodiment of the present invention, in order to avoid repeated loading in the distributed case, a lock-free approach is generally adopted. The main idea is to let a table play the role of a Kafka partition: in the preloading stage, one partition (table) can only be processed by one time wheel node, while one time wheel node can process several partitions. Meanwhile, the tables are maintained periodically: when the data of a table exceeds a preset retention period, the table is cleaned up, which relieves the storage pressure on the database; for example, tables older than 30 days are cleaned.
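The one-table-one-node ownership rule can be sketched as a deterministic assignment, for example a simple modulo over the table list as below; the node count, table names and method names are assumptions, and a production system might instead rely on the consistent hashing mentioned earlier.

import java.util.ArrayList;
import java.util.List;

public class TableAssignment {
    // Deterministic assignment: every table is owned by exactly one node,
    // and a node may own several tables, so no lock is needed during preload.
    public static List<String> tablesOwnedBy(int nodeId, int nodeCount, List<String> allTables) {
        List<String> owned = new ArrayList<>();
        for (int i = 0; i < allTables.size(); i++) {
            if (i % nodeCount == nodeId) {
                owned.add(allTables.get(i));
            }
        }
        return owned;
    }

    public static void main(String[] args) {
        List<String> tables = List.of("delay_task_20211227_0", "delay_task_20211227_1",
                                      "delay_task_20211227_2", "delay_task_20211227_3");
        System.out.println(tablesOwnedBy(0, 2, tables)); // node 0 owns tables 0 and 2
    }
}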
Referring to fig. 6, an overall framework diagram of delay task processing provided in an embodiment of this specification: the order service acquires an order message through a node and requests the thread pool to create a delay task; the thread pool generates the delay task and synchronizes the delay task data to the relational database; the delay tasks to be preloaded are then determined by comparing the delay time with the preload time; the time wheel dial thread rotates the time wheel, reads the task bodies corresponding to the current scale among the preloaded delay tasks, and pushes them to the delivery thread pool; the delivery thread pool pushes the task bodies to the message middleware in the form of messages; and the message middleware distributes the messages, so that the message traffic is evenly spread over the nodes of the order service for processing. In this way, business logic that needs delayed processing can be handled under a large data volume, large batches of delay tasks are processed in a lock-free manner, the processing efficiency of delay tasks is improved, and repeated consumption of delay tasks is avoided. The scheme is also highly scalable: the number of machine nodes can be adjusted dynamically according to the service load without shutting down the machines.
Further, when storing the delay tasks, sequential writes to the local file system can be chosen instead of writes to the relational database; however, this scheme requires backup and replication to be considered, so that a replica node can take over when the master node goes down and the service keeps running.
Through the above description of the embodiments, those skilled in the art can clearly understand that the method according to the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but the former is a better implementation mode in many cases. Based on such understanding, the technical solutions of the present invention may be embodied in the form of a software product, which is stored in a storage medium (e.g., ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal device (e.g., a mobile phone, a computer, a server, or a network device) to execute the method according to the embodiments of the present invention.
Fig. 7 is a schematic structural diagram of a processing apparatus for a delayed task according to an embodiment of the present disclosure, including:
a task obtaining module 301, configured to obtain a first delay task at least including a delay time, a task body, and a task ID;
a task screening module 302, configured to screen the first delay task to obtain a second delay task;
the encapsulating module 303 is configured to store the task ID corresponding to the second delay task into a time wheel with a cycle period as a preset time according to the corresponding delay time, and encapsulate all task IDs corresponding to the second delay tasks at each scale of the time wheel to obtain a task group ID;
a matching module 304, configured to match a task group ID corresponding to a scale when the time wheel passes through the scale each time;
and the task processing module 305 is configured to read the task body corresponding to the task group ID, and uniformly distribute the task body corresponding to the task group ID to each node of the task processing server to perform task processing.
Further, the task filtering module 302 includes:
the checking unit is used for checking the first delay task;
and the task screening unit is used for screening the delay tasks with the delay time smaller than the preloading time by using the time wheel nodes to obtain second delay tasks when the first delay tasks are successfully verified.
Further, the matching module 304 includes:
the data storage unit is used for placing the task group ID under the position index of the corresponding time wheel scale according to the delay time;
and the matching unit is used for matching the task group ID corresponding to the scale according to the position index of the time wheel scale.
Further, the task processing module 305 includes:
and the task processing unit is used for uniformly distributing the task bodies to each node of the task processing server by using message middleware for task processing, and the message middleware comprises RocketMQ and Kafka.
The functions of the apparatus in the embodiment of the present invention have been described in the above method embodiments, so that reference may be made to the related descriptions in the foregoing embodiments for details that are not described in the present embodiment, and further details are not described herein.
Embodiments of the present invention also provide a storage medium having a computer program stored therein, wherein the computer program is arranged to perform the steps of any of the above method embodiments when executed.
Alternatively, in the present embodiment, the storage medium may be configured to store a computer program for executing the steps of:
S201: acquiring a first delay task at least comprising delay time, a task body and a task ID;
S202: screening the first delay task to obtain a second delay task;
S203: storing the task ID corresponding to the second delay task into a time wheel with a cycle period as preset time according to the corresponding delay time, and encapsulating all the task IDs corresponding to the second delay tasks under each scale of the time wheel to obtain a task group ID;
S204: when the time wheel passes through a scale, matching the task group ID corresponding to the scale;
S205: reading the task body corresponding to the task group ID, and uniformly distributing the task body corresponding to the task group ID to each node of a task processing server for task processing.
Optionally, in this embodiment, the storage medium may include, but is not limited to: various media capable of storing computer programs, such as a usb disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a removable hard disk, a magnetic disk, or an optical disk.
Embodiments of the present invention also provide an electronic device comprising a memory having a computer program stored therein and a processor arranged to run the computer program to perform the steps of any of the above method embodiments.
Further, the electronic apparatus may further include a transmission device and an input/output device, wherein the transmission device is connected to the processor, and the input/output device is connected to the processor.
Further, in this embodiment, the processor may be configured to execute the following steps by a computer program:
S201: acquiring a first delay task at least comprising delay time, a task body and a task ID;
S202: screening the first delay task to obtain a second delay task;
S203: storing the task ID corresponding to the second delay task into a time wheel with a cycle period as preset time according to the corresponding delay time, and encapsulating all the task IDs corresponding to the second delay tasks under each scale of the time wheel to obtain a task group ID;
S204: when the time wheel passes through a scale, matching the task group ID corresponding to the scale;
S205: reading the task body corresponding to the task group ID, and uniformly distributing the task body corresponding to the task group ID to each node of a task processing server for task processing.
Further, for specific examples in this embodiment, reference may be made to the examples described in the above embodiments and optional implementation manners, and details of this embodiment are not described herein again.
It will be apparent to those skilled in the art that the modules or steps of the present invention described above may be implemented by a general purpose computing device, they may be centralized on a single computing device or distributed across a network of multiple computing devices, and alternatively, they may be implemented by program code executable by a computing device, such that they may be stored in a storage device and executed by a computing device, and in some cases, the steps shown or described may be performed in an order different than that described herein, or they may be separately fabricated into individual integrated circuit modules, or multiple ones of them may be fabricated into a single integrated circuit module. Thus, the present invention is not limited to any specific combination of hardware and software.
The above description is only a preferred embodiment of the present invention and is not intended to limit the present invention, and various modifications and changes may be made by those skilled in the art. Any modification, equivalent replacement, or improvement made within the principle of the present invention should be included in the protection scope of the present invention.

Claims (11)

1. A processing method of a delay task is characterized by comprising the following steps:
acquiring a first delay task at least comprising delay time, a task body and a task ID;
screening the first delay task to obtain a second delay task;
storing the task ID corresponding to the second delay task into a time wheel with a cycle period as preset time according to the corresponding delay time, and encapsulating the task IDs corresponding to all the second delay tasks under each scale of the time wheel to obtain a task group ID;
when the time wheel passes through a scale, matching the task group ID corresponding to the scale;
and reading the task body corresponding to the task group ID, and uniformly distributing the task body corresponding to the task group ID to each node of a task processing server for task processing.
2. The method for processing the delayed task according to claim 1, wherein after the first delayed task is acquired, the method comprises:
performing database and table sharding on the first delayed task by combining macroscopic time increments with local hashing.
3. The method for processing the delayed task according to any one of claims 1-2, wherein the screening the first delayed task includes:
checking the first delay task;
and when the first delay task is successfully verified, screening the delay task of which the delay time is less than the preloading time by using the time wheel node to obtain a second delay task.
4. A method for processing a delayed task as claimed in any one of claims 1 to 3, wherein before storing the task ID corresponding to the second delayed task in the time wheel with a cycle period being a preset time according to the corresponding delay time, the method comprises:
and caching the task body and the task ID corresponding to the second delay task into a delay task database.
5. The method for processing the delayed task according to any one of claims 1 to 4, wherein the matching the task group ID corresponding to the scale comprises:
putting the task group ID under the position index of the corresponding time wheel scale according to the delay time;
and matching the task group ID corresponding to the scale according to the position index of the time wheel scale.
6. The method for processing the delayed task according to any one of claims 1 to 5, wherein the uniformly distributing the task bodies to the nodes of the task processing server for task processing comprises:
and uniformly distributing the task bodies to each node of a task processing server by using message middleware for task processing, wherein the message middleware comprises RocketMQ and Kafka.
7. The delayed task processing method according to any one of claims 1-6, wherein the preset time is 60s, the time wheel has 300 scales, and the time wheel advances one scale every 200 ms.
8. A method for processing delayed tasks as claimed in any of claims 1 to 7, wherein said time round node is responsible for the screening of multiple tables, each table being operable by only one time round node.
9. An apparatus for processing a delayed task, comprising:
the task obtaining module is used for obtaining a first delay task at least comprising delay time, a task body and a task ID;
the task screening module is used for screening the first delay task to obtain a second delay task;
the encapsulation module is used for storing the task ID corresponding to the second delay task into a time wheel with a cycle period as preset time according to the corresponding delay time, and encapsulating all the task IDs corresponding to the second delay tasks under each scale of the time wheel to obtain a task group ID;
the matching module is used for matching the task group ID corresponding to a scale when the time wheel passes through each scale;
and the task processing module is used for reading the task bodies corresponding to the task group IDs and uniformly distributing the task bodies corresponding to the task group IDs to each node of the task processing server for task processing.
10. An electronic device, wherein the electronic device comprises:
a processor and a memory storing a computer executable program, which when executed, causes the processor to perform the method of any one of claims 1-8.
11. A computer readable storage medium, wherein the computer readable storage medium stores one or more programs which, when executed by a processor, implement the method of any of claims 1-8.
CN202111611399.3A 2021-12-27 2021-12-27 Processing method and device of delay task and electronic equipment Pending CN114265845A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111611399.3A CN114265845A (en) 2021-12-27 2021-12-27 Processing method and device of delay task and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111611399.3A CN114265845A (en) 2021-12-27 2021-12-27 Processing method and device of delay task and electronic equipment

Publications (1)

Publication Number Publication Date
CN114265845A true CN114265845A (en) 2022-04-01

Family

ID=80830336

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111611399.3A Pending CN114265845A (en) 2021-12-27 2021-12-27 Processing method and device of delay task and electronic equipment

Country Status (1)

Country Link
CN (1) CN114265845A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114879942A (en) * 2022-05-20 2022-08-09 北京宇信科技集团股份有限公司 Distributed time wheel packet registration verification method, device, medium and equipment
CN114879942B (en) * 2022-05-20 2023-02-03 北京宇信科技集团股份有限公司 Distributed time wheel group registration verification method, device, medium and equipment

Similar Documents

Publication Publication Date Title
CN108055343B (en) Data synchronization method and device for computer room
CN111104421A (en) Data query method and device based on data interface standard configuration
CN112800095B (en) Data processing method, device, equipment and storage medium
CN110471746B (en) Distributed transaction callback method, device and system
CN109901918B (en) Method and device for processing overtime task
CN111092865B (en) Security event analysis method and system
CN112148504A (en) Target message processing method and device, storage medium and electronic device
CN114265845A (en) Processing method and device of delay task and electronic equipment
CN112598529B (en) Data processing method and device, computer readable storage medium and electronic equipment
CN103647811A (en) A method and an apparatus for application's accessing backstage service
CN112379906A (en) Service updating method, device, storage medium and electronic device
CN112003730A (en) Method, system, terminal and storage medium for rapid cluster deployment
CN111930783A (en) Monitoring method, monitoring system and computing device
CN111752961A (en) Data processing method and device
CN113407551A (en) Data consistency determining method, device, equipment and storage medium
CN111092774A (en) Configuration method and equipment of acquisition gateway
CN115496470A (en) Full-link configuration data processing method and device and electronic equipment
CN115328457A (en) Method and device for realizing form page based on parameter configuration
CN114116676A (en) Data migration method and device, electronic equipment and computer readable storage medium
CN113434525A (en) Cache data updating method and device, storage medium and electronic device
CN113034165A (en) Data processing method and device, storage medium and electronic device
CN111640027A (en) Service data processing method, service data processing device, service processing device and electronic equipment
CN110276212B (en) Data processing method and device, storage medium and electronic device
CN110716747B (en) Program operation efficiency optimization method based on function parameter statistics and terminal equipment
CN108664293B (en) Application control method and device in android system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination