CN116991599A - Method and device for realizing delay queue, computer readable medium and electronic equipment - Google Patents

Method and device for realizing delay queue, computer readable medium and electronic equipment

Info

Publication number
CN116991599A
Authority
CN
China
Prior art keywords
task
delay
identification information
queue
information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211163927.8A
Other languages
Chinese (zh)
Inventor
林炳东
康进
徐辉
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd filed Critical Tencent Technology Shenzhen Co Ltd
Priority to CN202211163927.8A priority Critical patent/CN116991599A/en
Publication of CN116991599A publication Critical patent/CN116991599A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00 Arrangements for program control, e.g. control units
    • G06F9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46 Multiprogramming arrangements
    • G06F9/54 Interprogram communication
    • G06F9/546 Message passing systems or structures, e.g. queues
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20 Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/25 Integrating or interfacing systems involving database management systems
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00 Arrangements for program control, e.g. control units
    • G06F9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46 Multiprogramming arrangements
    • G06F9/48 Program initiating; Program switching, e.g. by interrupt
    • G06F9/4806 Task transfer initiation or dispatching
    • G06F9/4843 Task transfer initiation or dispatching by program, e.g. task dispatcher, supervisor, operating system
    • G06F9/485 Task life-cycle, e.g. stopping, restarting, resuming execution
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00 Arrangements for program control, e.g. control units
    • G06F9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46 Multiprogramming arrangements
    • G06F9/48 Program initiating; Program switching, e.g. by interrupt
    • G06F9/4806 Task transfer initiation or dispatching
    • G06F9/4843 Task transfer initiation or dispatching by program, e.g. task dispatcher, supervisor, operating system
    • G06F9/4881 Scheduling strategies for dispatcher, e.g. round robin, multi-level priority queues

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Software Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Databases & Information Systems (AREA)
  • Data Mining & Analysis (AREA)
  • Computer And Data Communications (AREA)

Abstract

The embodiment of the application provides a method and a device for realizing a delay queue, a computer readable medium and electronic equipment, wherein the method comprises the following steps: after task information of a delay task submitted by a client is received, task identification information and expiration time are stored in a database; the database is scanned so that the task identification information of expired delay tasks in the database is put into a ready queue; the task identification information is taken out and put into a running queue, and the task information corresponding to the task identification information is delivered to the client so that the client can process the delay task corresponding to the task information; and if a confirmation message corresponding to the task information is not received from the client, the task identification information corresponding to the task information in the running queue is stored back into the database so as to retry processing the delay task. Being based on a message queue, the embodiment of the application can prevent delay messages from being lost and ensures the reliability of data.

Description

Method and device for realizing delay queue, computer readable medium and electronic equipment
Technical Field
The present application relates to the field of information processing technologies, and in particular, to a method and an apparatus for implementing a delay queue, a computer readable medium, and an electronic device.
Background
Delay queues, i.e., message queues with delay functions. With the rapid development of internet services, delay queues are widely used.
However, when the existing delay queues deliver delay messages, data loss may occur, resulting in poor data reliability.
Disclosure of Invention
The embodiment of the application provides a method and a device for realizing a delay queue, a computer readable medium and electronic equipment, which can prevent delay messages from being lost at least to a certain extent and ensure the reliability of data.
Other features and advantages of the application will be apparent from the following detailed description, or may be learned by the practice of the application.
According to an aspect of the embodiment of the present application, there is provided a method for implementing a delay queue, the method including: after task information of a delay task submitted by a client is received, task identification information and expiration time of the delay task are stored in a database; scanning the database to put task identification information of the expired delay task in the database into a ready queue according to the expiration time; taking out task identification information from the ready queue, putting the task identification information into an operation queue, and delivering task information corresponding to the task identification information to the client so that the client can process delay tasks corresponding to the task information; and if a confirmation message corresponding to the task information is not received from the client, re-storing the task identification information corresponding to the task information in the running queue into the database so as to re-try to process the delay task corresponding to the task information, wherein the confirmation message is used for indicating that the client processes the corresponding delay task.
According to an aspect of the embodiment of the present application, there is provided an implementation apparatus for a delay queue, the apparatus including: the storage unit is used for storing the task identification information and the expiration time of the delay task into the database after receiving the task information of the delay task submitted by the client; the scanning unit is used for scanning the database so as to put task identification information of the delayed task expired in the database into a ready queue according to the expiration time; the extracting and delivering unit is used for taking out the task identification information from the ready queue, putting the task identification information into an operation queue, and delivering the task information corresponding to the task identification information to the client so that the client can process the delay task corresponding to the task information; and the retry unit is used for re-storing the task identification information corresponding to the task information in the running queue into the database to retry processing the delay task corresponding to the task information if the confirmation message corresponding to the task information is not received from the client, wherein the confirmation message is used for indicating that the client processes the corresponding delay task.
In some embodiments of the application, based on the foregoing scheme, the retry unit is configured to: and determining new expiration time according to the current time, and re-storing the task identification information corresponding to the task information and the new expiration time in the running queue in the database, so that when the confirmation message corresponding to the task information is not received from the client after the new expiration time is reached, the task identification information is re-fetched from the database and put in a ready queue to re-try to process the delay task corresponding to the task information.
In some embodiments of the application, based on the foregoing scheme, the retry unit is configured to: if the confirmation message corresponding to the task information is not received from the client and the retry number of the task information has not reached a preset number threshold, store the task identification information corresponding to the task information in the running queue back into the database.
In some embodiments of the application, based on the foregoing, the retry unit is further configured to, after fetching the task identification information from the ready queue and placing it in a run queue: if the confirmation message corresponding to the task information is received from the client, take the task identification information corresponding to the task information out of the running queue and discard it; if the confirmation message corresponding to the task information is not received from the client and the retry number of the task information reaches a preset number threshold, put the task identification information into a dead letter queue, the dead letter queue being used for storing the task identification information of delay tasks for which the client has not fed back a confirmation message by the time the retry number reaches the preset number threshold.
In some embodiments of the application, based on the foregoing scheme, the saving unit is configured to: and storing the task identification information and the expiration time of the delay task as bucket elements of the buckets in a bucket group into a database, so as to store the bucket group through the database, wherein the bucket group comprises a plurality of buckets.
In some embodiments of the application, based on the foregoing, the scanning unit is configured to: and scanning the corresponding barrels in the barrel group through threads corresponding to each barrel, so as to put task identification information of the delayed task expired in each barrel into a ready queue according to the expiration time.
In some embodiments of the present application, based on the foregoing solution, before delivering the task information corresponding to the task identification information to the client, the extracting and delivering unit is further configured to: receiving a long polling request initiated by a client, and suspending the long polling request; if no task identification information exists in the ready queue within a preset time period after the long polling request is received, returning an empty result to the client so that the client can reinitiate the long polling request after receiving the empty result; the pick-up and delivery unit is configured to: and if the task identification information exists in the ready queue within a preset time period after the long polling request is received, the task information corresponding to the task identification information is transmitted to the client.
In some embodiments of the application, based on the foregoing, the scanning unit is configured to: according to the expiration time, task identification information of the expired delay task in the database is put into a ready queue corresponding to the type of the delay task; the pick-up and delivery unit is configured to: and taking out the task identification information from the ready queue corresponding to the type of the delay task and putting the task identification information into the running queue corresponding to the type of the delay task.
In some embodiments of the application, based on the foregoing scheme, the saving unit is configured to: taking the task identification information of the delay task as an element, taking the expiration time of the delay task as a score associated with the element, and correspondingly storing the element and the score associated with the element into an ordered set of a database; the scanning unit is configured to: and screening out elements with associated scores in a designated score interval in the ordered set of the database to obtain task identification information of the delayed task expired in the database, and placing the task identification information into a ready queue.
According to an aspect of the embodiments of the present application, there is provided a computer readable medium having stored thereon a computer program which, when executed by a processor, implements a method of implementing a delay queue as described in the above embodiments.
According to an aspect of an embodiment of the present application, there is provided an electronic apparatus including: one or more processors; and a storage device for storing one or more programs, which when executed by the one or more processors, cause the one or more processors to implement the method for implementing a delay queue as described in the above embodiments.
According to an aspect of an embodiment of the present application, there is provided a computer program product including computer instructions stored in a computer-readable storage medium, from which computer instructions a processor of a computer device reads, the processor executing the computer instructions, causing the computer device to perform a method of implementing a delay queue as described in the above embodiments.
In the technical solutions provided by some embodiments of the present application, after the task information corresponding to the task identification information is delivered to the client, if a confirmation message corresponding to the task information is not received from the client, that is, when the client has not normally finished processing the corresponding delay task, the task identification information corresponding to the task information in the running queue is stored back into the database, so that the delay task corresponding to the task information can be retried; this prevents message loss caused by client downtime or processing failure after message delivery, and ensures the reliability of the data.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the application as claimed.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the application and together with the description, serve to explain the principles of the application. It is evident that the drawings in the following description are only some embodiments of the present application and that other drawings may be obtained from these drawings without inventive effort for a person of ordinary skill in the art. In the drawings:
FIG. 1 is a schematic diagram of a delay queue in the related art;
FIG. 2 shows a schematic diagram of an exemplary system architecture in which the technical solution of an embodiment of the present application may be implemented;
FIG. 3 illustrates a flow chart of a method of implementing a delay queue according to one embodiment of the application;
FIG. 4 illustrates a general architecture diagram of a delay queue according to one embodiment of the application;
FIG. 5 shows a flowchart of the details of step 310 in FIG. 3, according to one embodiment of the application;
FIG. 6 shows a flowchart of the details of steps 310 and 320 of FIG. 3, according to one embodiment of the application;
FIG. 7 illustrates a schematic diagram of a bucket group according to one embodiment of the application;
FIG. 8 shows a flowchart of the details of step 320 of FIG. 5, in accordance with one embodiment of the present application;
FIG. 9 is a schematic diagram illustrating a client performing long polling of a delay queue according to one embodiment of the application;
FIG. 10 illustrates an overview of the techniques adopted for implementing message timeliness, in accordance with one embodiment of the application;
FIG. 11A shows a flowchart of details of step 340 of FIG. 3, according to one embodiment of the application;
FIG. 11B shows a flowchart of details of step 340 of FIG. 3, according to another embodiment of the present application;
FIG. 12 illustrates a schematic diagram of implementing data reliability according to one embodiment of the application;
FIG. 13 illustrates an effect diagram of a solution according to one embodiment of the application in terms of message timeliness;
FIG. 14 shows a block diagram of an implementation of a delay queue according to one embodiment of the application;
fig. 15 shows a schematic diagram of a computer system suitable for use in implementing an embodiment of the application.
Detailed Description
Example embodiments will now be described more fully with reference to the accompanying drawings. However, the exemplary embodiments may be embodied in many forms and should not be construed as limited to the examples set forth herein; rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the concept of the example embodiments to those skilled in the art.
Furthermore, the described features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. In the following description, numerous specific details are provided to give a thorough understanding of embodiments of the application. One skilled in the relevant art will recognize, however, that the application may be practiced without one or more of the specific details, or with other methods, components, devices, steps, etc. In other instances, well-known methods, devices, implementations, or operations are not shown or described in detail to avoid obscuring aspects of the application.
The block diagrams depicted in the figures are merely functional entities and do not necessarily correspond to physically separate entities. That is, the functional entities may be implemented in software, or in one or more hardware modules or integrated circuits, or in different networks and/or processor devices and/or microcontroller devices.
The flow diagrams depicted in the figures are exemplary only, and do not necessarily include all of the elements and operations/steps, nor must they be performed in the order described. For example, some operations/steps may be decomposed, and some operations/steps may be combined or partially combined, so that the order of actual execution may be changed according to actual situations.
There are currently two main delay queuing schemes in the related art:
Scheme one: RocketMQ is an open-source message queue that also provides delayed-message functionality. Its main principle is Timer + queue: each delay duration corresponds to one Timer and one queue, messages with the same delay duration are placed in the same queue, and the messages in each queue are ordered by expiration time from earliest to latest. Since different delay durations correspond to different Timers and queues, an arbitrary delay cannot be specified; only 18 fixed durations (1s, 5s, 10s, and so on) are supported, and the longest supported delay is two hours.
Scheme II: the TDMQ uses a time round queue adding mode, uses a physical theme of high level to store delay information, the index of the delay information and the original information are respectively stored in different themes, only the information id is moved in the time round, and the information content does not move along with the time round.
Fig. 1 shows a schematic diagram of a delay queue in the related art. Referring to fig. 1, the TDMQ may include the following procedures:
1. The producer sends a message to the broker node; the server determines that the message is a delayed message, sends the delayed message to day-delay-topic-x, and sends the message index (messageId) to the index topic for the corresponding time.
2. The delay message index is partitioned by day, hour and minute; the index for the current minute is loaded into memory and placed into an in-memory time wheel; the executor reads the message ids due in the current second, pulls the corresponding delay messages from day-delay-topic-x, and delivers them to the service's original topic.
3. Delay messages are split into different topics by day, which ensures that all data of the same day are in the same delay topic and makes the delay messages easy to clean up (the delay topic of a given day can be deleted once that day's delay messages have been processed).
However, both of the above delay queuing schemes have certain drawbacks:
1. The delayed messages of RocketMQ only support 18 fixed durations (1s, 5s, 10s, and so on), and the longest delay is limited to two hours; an arbitrary duration cannot be specified, so flexibility is poor.
2. TDMQ has a maximum delay time of 10 days and cannot support delay messages of more than 10 days.
In addition, other delay queue schemes suffer from poor message real-time performance and possible message loss, resulting in poor data reliability.
Therefore, the present application provides a method for realizing a delay queue. The implementation method of the delay queue provided by the embodiments of the application can overcome the above drawbacks: it supports delays of any duration, improving the flexibility of the delay queue, and it also improves the real-time performance of messages and further ensures the reliability of data.
Fig. 2 shows a schematic diagram of an exemplary system architecture in which the technical solution of an embodiment of the present application may be implemented. Referring to fig. 2, the system architecture 200 may include a delay queue server 210, an e-commerce platform server 220 and a plurality of user terminals, specifically a first user terminal 231, a second user terminal 232 and a third user terminal 233. A communication connection is established between each user terminal and the e-commerce platform server 220, and between the delay queue server 210 and the e-commerce platform server 220. An e-commerce platform client runs on each user terminal; an e-commerce platform server side is deployed on the e-commerce platform server 220, with a delay queue client SDK embedded in it; and a delay queue server side is deployed on the delay queue server 210. Taking the delay queue server 210 as an example of the execution terminal in the embodiment of the present application, when the implementation method of the delay queue provided by the present application is applied to the system architecture shown in fig. 2, one possible process is as follows. First, a user of a certain user terminal accesses the e-commerce platform server side in the e-commerce platform server 220 through the e-commerce platform client on the user terminal, and submits an order for a certain commodity to the e-commerce platform without paying for it. The e-commerce platform server side sends a message corresponding to the order, through the embedded delay queue client SDK, to the delay queue server side in the delay queue server 210, and specifies a delay time for the message. When the message expires, the delay queue server side returns the message to the delay queue client SDK, and the e-commerce platform server side can then process the message; at this moment, if the user still has not paid for the order, the e-commerce platform server side can cancel the order. In the whole process, the delay queue server side can deliver the message to the delay queue client SDK in a timely and reliable manner.
In some embodiments of the present application, the delay queue server comprises a task pool, a database, a ready queue, a run queue, a dead letter queue, and a plurality of scan threads.
In some embodiments of the application, the delay time specified for the message corresponding to the order is 30 minutes.
It should be understood that the numbers of delay queue servers, e-commerce platform servers, and user terminals in fig. 2 are merely illustrative. There may be any number of delay queue servers, e-commerce platform servers, and user terminals as required by the implementation. For example, the delay queue server and/or the e-commerce platform server may be a server cluster formed by a plurality of servers, and the number of user terminals may be fewer than three or more than three.
It should be noted that fig. 2 shows only one embodiment of the present application. Although in the embodiment of fig. 2 the solution is applied to an e-commerce platform, in other embodiments of the present application the solution may also be applied to various other network service platforms. Although in the embodiment of fig. 2 the solution is specifically applied to a scenario where an order that has not been paid within a time limit is automatically cancelled, in other embodiments of the present application the solution may also be applied to other scenarios of an e-commerce platform, for example, scenarios where goods are automatically put on or taken off the shelf, or where a five-star review is automatically given when no review has been made within seven days after the goods are received. Although in the embodiment of fig. 2 the delay queue server serves only one delay queue client, in other embodiments of the present application the delay queue server may serve multiple delay queue clients simultaneously as a centralized server, and each delay queue client may be applied to a different platform or scenario. Although in the embodiment of fig. 2 the delay queue client and the delay queue server are both disposed on servers, in other embodiments of the present application the delay queue client and the delay queue server may each be disposed on other types of terminal devices, such as a vehicle-mounted terminal, a smart phone or a desktop computer. None of this should limit the embodiments of the present application in any way, nor should it limit the scope of the application in any way.
It is easy to understand that the implementation method of the delay queue provided by the embodiment of the application is generally executed by a server, and correspondingly, the implementation device of the delay queue is generally arranged in the server. However, in other embodiments of the present application, the terminal device may also have a similar function as the server, so as to execute the implementation scheme of the delay queue provided by the embodiments of the present application.
Therefore, the embodiment of the application can be applied to a terminal or a server. The server may be an independent physical server, a server cluster or a distributed system formed by a plurality of physical servers, or a cloud server providing basic cloud computing services such as cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communication, middleware services, domain name services, security services, CDN, big data and artificial intelligence platforms. The terminal may be, but is not limited to, a smart phone, a tablet computer, a notebook computer, a desktop computer, a smart speaker, a smart watch, etc. The terminal and the server may be directly or indirectly connected through wired or wireless communication, and the present application is not limited herein.
The scheme of the embodiment of the application can be applied to the field of cloud computing. Cloud computing is a computing model that distributes computing tasks across a resource pool formed by a large number of computers, enabling various application systems to acquire computing power, storage space, and information services as needed. The network that provides the resources is referred to as the "cloud". From the user's point of view, the resources in the cloud can be expanded infinitely, and can be acquired at any time, used on demand, expanded at any time and paid for according to use.
As a basic capability provider of cloud computing, a cloud computing resource pool (cloud platform for short, generally referred to as IaaS (Infrastructure as a Service, infrastructure as a service) platform) is established, in which multiple types of virtual resources are deployed for external clients to select for use.
According to the logic function division, a PaaS (Platform as a Service ) layer can be deployed on an IaaS (Infrastructure as a Service ) layer, and a SaaS (Software as a Service, software as a service) layer can be deployed above the PaaS layer, or the SaaS can be directly deployed on the IaaS. PaaS is a platform on which software runs, such as a database, web container, etc. SaaS is a wide variety of business software such as web portals, sms mass senders, etc. Generally, saaS and PaaS are upper layers relative to IaaS.
The implementation details of the technical scheme of the embodiment of the application are described in detail below:
fig. 3 shows a flowchart of a method for implementing a delay queue according to an embodiment of the present application, where the method for implementing a delay queue may be performed by various devices capable of calculating and processing, such as a user terminal or a cloud server, and the user terminal includes, but is not limited to, a mobile phone, a computer, a smart voice interaction device, a smart home appliance, a vehicle-mounted terminal, a wearable device, and the like. The embodiment of the application can be applied to various scenes, including but not limited to cloud technology, artificial intelligence, intelligent transportation, auxiliary driving and the like. Referring to fig. 3, the implementation method of the delay queue at least includes the following steps:
in step 310, after task information of the delay task submitted by the client is received, task identification information and expiration time of the delay task are stored in a database.
The task information (job message) of a delay task is the delay message itself, and the client is the submitter of the delay message. In an e-commerce scenario, the task information of the delay task may include information related to an order, such as the order number.
The task identification information of the delay task can be a character string, wherein the character string can contain contents such as letters, numbers and the like; the task identification information of the delay task can be generated by the client and submitted to the delay queue server, or can be generated by the delay queue server according to the task information after the delay queue server receives the task information of the delay task submitted by the client. It is easy to understand that the task identification information and the task information both correspond to the delay task.
The client can specify the delay time of the delay task while submitting the task information of the delay task, and the delay queue server can calculate the expiration time of the delay task according to the formula of expiration time = current time + delay time. Of course, in other embodiments of the present application, the client may also directly specify the expiration time of the delay task to the delay queue server.
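By way of illustration only, the calculation above can be sketched with a hypothetical helper; this is not part of the claimed embodiments.

```python
# Illustrative sketch of "expiration time = current time + delay time".
import time

def expire_at(delay_seconds: int) -> float:
    return time.time() + delay_seconds  # e.g. a 30-minute delay gives now + 1800 seconds
```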
The database may be a non-relational database, such as a Redis (Remote Dictionary Server) database.
The scheme of the embodiment of the present application is further described below with reference to fig. 4. The delay queue server side is the delay queue itself. Fig. 4 shows a schematic diagram of the overall architecture of a delay queue according to one embodiment of the application. Referring to fig. 4, the delay queue specifically includes several sub-modules, namely a task pool (Job Pool), a bucket group (buckets), a ready queue (Ready Queue), a running queue (Running Queue) and a dead letter queue (Dead Letter Queue), as well as several scanning threads: Timer, Poller and Cleaner.
After the task information of a delay task submitted by the client is received, the task information is stored in the task pool. Specifically, the task identification information (jobId) can be used as the key, the serialized task information as the value, and the Redis string data structure can be used to store this key-value pair.
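By way of illustration only, the following minimal sketch shows how such a key-value pair might be written with the redis-py client; the "job:<jobId>" key naming and the payload fields are assumptions made for the example and are not part of the claimed embodiments.

```python
# Illustrative sketch; the "job:<jobId>" key naming and payload fields are assumptions.
import json
import redis

r = redis.Redis()

def save_job(job_id: str, topic: str, expire_at: float, body: dict) -> None:
    # jobId is the key, the serialized task information is the value (a Redis string).
    job = {"id": job_id, "topic": topic, "expire_at": expire_at, "body": body}
    r.set(f"job:{job_id}", json.dumps(job))
```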
Fig. 5 shows a flowchart of the details of step 310 in fig. 3, according to one embodiment of the application. Referring to fig. 5, storing task identification information and expiration time of a delay task in a database may specifically include the following steps:
in step 310', task identification information and expiration time of the delayed task are stored as bucket elements of the buckets in the bucket group in a database to store the bucket group through the database.
The tub group includes a plurality of tub.
With continued reference to fig. 4, the bucket group may be stored using a Redis database. The bucket group includes a plurality of buckets, the elements in each bucket are bucket elements (bucket items), each bucket element may include the task identification information and expiration time of one delay task, and each bucket may contain a plurality of bucket elements. After the bucket elements are stored in the Redis database, the bucket elements in each bucket may be sorted from smallest to largest according to the expiration time in each bucket element.
In one embodiment of the present application, storing the task identification information and expiration time of the delayed task as a bucket element of a bucket in the bucket group into a database, so as to store the bucket group through the database, includes: converting the task identification information of the delay task based on a CRC32 algorithm to obtain a converted numerical value; performing a modulo operation on the converted numerical value with the number of buckets in the bucket group, and taking the bucket whose number equals the remainder as the target bucket; and storing the task identification information and expiration time of the delay task as a bucket element into the target bucket.
Specifically, the task identification information, which takes the form of a character string, can be converted into a numerical value, i.e., the converted numerical value, by using the CRC32 algorithm; on this basis, the target bucket is determined by taking the remainder of the converted numerical value modulo the number of buckets in the bucket group, so that multiple pieces of task identification information can be distributed evenly across the buckets.
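A minimal sketch of this bucket selection is given below; the bucket count and the "bucket:<n>" key naming are assumptions made for illustration.

```python
# Illustrative sketch; BUCKET_COUNT and the "bucket:<n>" key naming are assumptions.
import zlib

BUCKET_COUNT = 8

def pick_bucket(job_id: str) -> str:
    converted = zlib.crc32(job_id.encode("utf-8"))  # convert the string id to a numerical value
    return f"bucket:{converted % BUCKET_COUNT}"     # the remainder selects the target bucket
```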
In step 320, the database is scanned to place task identification information of the expired delay tasks in the database into a ready queue according to the expiration time.
According to the expiration time, it can be determined which delay tasks have expired, and the corresponding task identification information can be fetched from the Redis database. Task identification information of delay tasks that has been placed into the ready queue is deleted from the database.
Fig. 6 shows a flowchart of the details of steps 310 and 320 of fig. 3, according to one embodiment of the application. Referring to fig. 6, storing task identification information and expiration time of a delay task in a database may specifically include the following steps:
in step 310", the elements and the scores associated with the elements are stored in an ordered set of the database with the task identification information of the delayed task as the elements and the expiration time of the delayed task as the score associated with the elements.
The ordered set of the database, i.e., the sorted set of Redis, may also be referred to as a zset. The task identification information may be taken as an element, or member, of the zset, and the expiration time as its score.
Step 320 may specifically include the following steps:
in step 320", the task identification information of the delayed task expired in the database is obtained by filtering out the elements of the score associated in the ordered set of the database within the specified score interval, and the task identification information is put into the ready queue.
The task identification information of expired delay tasks can be rapidly screened out through the zrangeByScore command provided by Redis; the zrangeByScore command filters the task identification information after it has been sorted by the corresponding scores, which improves screening efficiency.
Screening the elements whose associated scores fall within the designated score interval may specifically mean screening out the task identification information whose corresponding expiration time is earlier than the current time.
In the embodiment of the application, the delay queue can support delay of any time length by storing the elements and the scores associated with the elements in the ordered set of the database.
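Continuing the illustrative sketches above (same assumed client r and pick_bucket helper), storing a jobId with its expiration time as member and score, and then screening out the expired jobIds, might look roughly as follows.

```python
# Illustrative sketch; builds on r and pick_bucket() from the sketches above.
import time

def push_to_bucket(job_id: str, expire_at: float) -> None:
    # member = task identification information, score = expiration time
    r.zadd(pick_bucket(job_id), {job_id: expire_at})

def pop_expired(bucket_key: str, limit: int = 100) -> list:
    # Elements whose score (expiration time) is not later than "now" have expired.
    return r.zrangebyscore(bucket_key, 0, time.time(), start=0, num=limit)
```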
Fig. 7 shows a schematic diagram of a bucket group according to an embodiment of the application. Referring to fig. 7, after a client submits a delay message, the bucket group (buckets) is responsible for storing the task identification information and expiration time of the delay task. Specifically, a hash operation determines into which bucket the task identification information and expiration time of the delay task are stored, and each bucket correspondingly stores a Zset score and a Zset member, where the Zset score is the expiration time and the Zset member is the task identification information.
Fig. 8 shows a flowchart of the details of step 320 in fig. 5, according to one embodiment of the application. Referring to fig. 8, step 320 may further include the following steps:
in step 320', the corresponding buckets in the bucket group are scanned by the thread corresponding to each bucket to place the task identification information of the expired delay task in each bucket into the ready queue according to the expiration time.
Referring to fig. 4 and 7, the Timer threads are responsible for scanning the bucket group; one Timer thread may be set for each bucket and made responsible for scanning that bucket, and the Timer thread may put the task identification information of expired delay tasks into the ready queue.
In the embodiment of the application, the bucket group is divided into a plurality of buckets, and the task identification information is stored across the buckets in a scattered manner, so the amount of task identification information stored in each bucket can be reduced. On this basis, by arranging one Timer thread for each bucket, the Timer threads can scan the buckets in parallel, which greatly improves scanning efficiency; and when the task identification information of expired delay tasks is screened out by using the zrangeByScore command, the time complexity can reach O(log(N)+M), where N is the number of bucket elements in the bucket and M is the number of pieces of task identification information returned.
In one embodiment of the present application, storing the task identification information and expiration time of the delayed task in a database includes: storing the task identification information and expiration time of the delay task into the database, with the task identification information in the database sorted in ascending order of expiration time;
and placing the task identification information of expired delay tasks in the database into the ready queue according to the expiration time includes: at intervals of a preset duration, extracting task identification information from the database, and placing the task identification information of the delay tasks that have expired, among the extracted task identification information, into the ready queue.
Specifically, the task identification information in each bucket can be sequenced from small to large according to the expiration time corresponding to each task identification information, each Timer thread takes out the first n task identification information from the corresponding bucket every 1 second, filters out the task identification information of the expired delay task from the taken out task identification information, and then puts the task identification information of the expired delay task into the ready queue.
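A Timer thread per bucket could then be sketched roughly as below; the one-second interval matches the description above, while the "ready:<topic>" key naming is an assumption.

```python
# Illustrative sketch; builds on r, pop_expired() and the "job:<jobId>" keys assumed above.
import json
import threading
import time

def timer_loop(bucket_key: str, stop: threading.Event) -> None:
    while not stop.is_set():
        for raw_id in pop_expired(bucket_key):
            job_id = raw_id.decode()
            raw_job = r.get(f"job:{job_id}")
            if raw_job is not None:
                topic = json.loads(raw_job)["topic"]
                r.lpush(f"ready:{topic}", job_id)  # hand the expired jobId over to the ready queue
            r.zrem(bucket_key, raw_id)             # remove the bucket element either way
        time.sleep(1)                              # scan the bucket roughly once per second
```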
In step 330, the task identification information is fetched from the ready queue and put into the run queue, and the task information corresponding to the task identification information is delivered to the client, so that the client processes the delay task corresponding to the task information.
The ready queue stores the task identification information of expired delay tasks. After a delay task expires, the delay task needs to be processed, and the task identification information of the delay task being processed is stored in the running queue. The ready queue and the run queue may be stored using the Redis list data structure.
With continued reference to FIG. 4, the task IDs stored in the ready queue are placed into the run queue by the Poller thread. The Poller thread can, according to the poll request of the client, take task identification information out of the ready queue in a blocking manner, put the fetched task identification information into the running queue as an atomic operation, and then take the task information corresponding to the task identification information out of the task pool and return it to the client. The poll request is a polling request initiated by the client to the delay queue server for acquiring task information.
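This Poller step can be sketched as follows; the brpoplpush command makes the move from the ready queue to the running queue atomic, and the key naming is as assumed in the earlier sketches.

```python
# Illustrative sketch; builds on r and the key naming assumed above.
def poll_once(topic: str, timeout: int = 30):
    # Atomically move one expired jobId from the ready queue to the running queue,
    # then fetch the corresponding task information from the task pool for delivery.
    raw_id = r.brpoplpush(f"ready:{topic}", f"running:{topic}", timeout=timeout)
    if raw_id is None:
        return None                          # nothing expired within the timeout window
    return r.get(f"job:{raw_id.decode()}")
```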
In the e-commerce scene, after receiving the task information, the client can perform corresponding operation according to the order number in the task information.
In one embodiment of the present application, placing task identification information of a delayed task expired in a database in a ready queue according to an expiration time includes: according to the expiration time, task identification information of the expired delay task in the database is put into a ready queue corresponding to the type of the delay task;
And taking out the task identification information from the ready queue and putting the task identification information into a running queue, wherein the method comprises the following steps of: and taking out the task identification information from the ready queue corresponding to the type of the delay task and putting the task identification information into the running queue corresponding to the type of the delay task.
With continued reference to fig. 4, a plurality of topics are provided, each topic corresponding to a ready queue and also to a run queue, and each ready queue and run queue may store one or more task IDs (task identification information). One Poller thread can be correspondingly arranged for each topic, and each Poller thread can take task identification information out of the ready queue corresponding to it and put that task identification information into the running queue corresponding to it.
A topic is a theme or type: different delay tasks may correspond to the same or different topics, the same client may also correspond to the same or different topics, and one topic may indicate the delayed tasks of one scenario. Therefore, the delay queue in the embodiment of the application can process delay tasks of different scenarios at the same time, and can be opened up, as a centralized delay service, to users of different scenarios or even different fields. It will be readily appreciated that although only task IDs are stored in the ready queue and run queue of fig. 4, other information may also be stored in other embodiments of the present application.
In one embodiment of the present application, before delivering the task information corresponding to the task identification information to the client, the method for implementing the delay queue further includes: receiving a long polling request initiated by a client, and suspending the long polling request; if no task identification information exists in the ready queue within a preset time period after the long polling request is received, returning an empty result to the client so that the client can reinitiate the long polling request after receiving the empty result;
delivering the task information corresponding to the task identification information to the client, wherein the delivering comprises the following steps:
and if the task identification information exists in the ready queue within a preset time period after the long polling request is received, delivering the task information corresponding to the task identification information to the client.
The long polling request initiated by the client is a poll request that can be processed by the Poller thread. Fig. 9 is a schematic diagram of a client performing long polling on the delay queue according to one embodiment of the application. Referring to fig. 9, after receiving a long polling request, the delay queue server suspends the request, and then obtains the expired task identification information corresponding to the topic from the ready queue; if no expired task identification information corresponding to the topic appears in the ready queue within 30 seconds, an empty result is returned to the client as the response; if expired task identification information corresponding to the topic appears in the ready queue within 30 seconds, the task identification information is returned to the client immediately.
In the embodiment of the application, the client SDK can perform long polling on the delay queue server, so the client SDK does not need to poll the delay queue server frequently, which greatly reduces the pressure on the server; meanwhile, when expired task identification information exists, the obtained task identification information can be consumed by the client immediately, which improves the real-time performance of messages.
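From the client SDK's point of view, the long-polling loop might look like the following sketch; handle and ack are hypothetical callbacks standing in for the business processing and the acknowledgement described later.

```python
# Illustrative sketch; poll_once() is the helper sketched above, handle/ack are placeholders.
import json

def consume_forever(topic: str, handle, ack) -> None:
    while True:
        raw_job = poll_once(topic, timeout=30)  # the request is parked for up to 30 seconds
        if raw_job is None:
            continue                            # empty result: immediately re-initiate the long poll
        job = json.loads(raw_job)
        handle(job)                             # process the delayed task
        ack(job["id"], topic)                   # acknowledge after successful processing
```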
FIG. 10 illustrates an overview of the techniques adopted for implementing message timeliness according to one embodiment of the application. Referring to fig. 10, to implement message timeliness, the following three techniques are mainly adopted:
1. Expired messages are quickly screened out using the Redis Zset, and arbitrary delays are supported.
2. Buckets are scanned in parallel, which shortens the length of each queue and improves efficiency.
3. Long polling is implemented, so that a message is consumed immediately after it expires, while the pressure of the client on the server is reduced.
Based on the above technology, the effect that the average delay of message delivery is less than 1s can be achieved.
In step 340, if the acknowledgement message corresponding to the task information is not received from the client, the task identification information corresponding to the task information in the running queue is stored back into the database, so as to retry processing the delay task corresponding to the task information.
The confirmation message is used to indicate that the client has finished processing the corresponding delay task.
After the client finishes processing the corresponding delay task, it returns a confirmation message (ACK) to the delay queue server; the task information is deleted from the task pool at that moment, and the task identification information is subsequently deleted from the database on the basis that no task information corresponding to that task identification information exists in the task pool any more. If the delay queue server does not receive the confirmation message corresponding to the task information from the client, it means that the client has not correctly finished processing the corresponding delay task; in this case, the task identification information corresponding to the task information in the running queue is stored back into the database and the subsequent flow is executed, so that the corresponding delay task is re-processed.
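The ACK handling just described might be sketched as below, with the same assumed key names: the task information is dropped from the task pool and the jobId is removed from the running queue, so the message will not be delivered again.

```python
# Illustrative sketch; key names as assumed in the earlier sketches.
def ack(job_id: str, topic: str) -> None:
    r.delete(f"job:{job_id}")              # drop the task information from the task pool
    r.lrem(f"running:{topic}", 0, job_id)  # remove the jobId from the running queue
```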
With continued reference to fig. 4, the Cleaner thread is a thread for cleaning the running queue; a corresponding Cleaner thread may be set for the running queue under each topic, and the Cleaner thread is responsible for storing the task identification information corresponding to the task information in the running queue back into the database when no acknowledgement message corresponding to the task information is received from the client.
FIG. 11A shows a flowchart of the details of step 340 of FIG. 3, according to one embodiment of the application. Referring to fig. 11A, the re-storing task identification information corresponding to task information in the running queue in the database to re-attempt to process the delay task corresponding to the task information may specifically include the following steps:
In step 340', a new expiration time is determined according to the current time, and the task identification information corresponding to the task information in the running queue, together with the new expiration time, is stored back into the database, so that when the confirmation message corresponding to the task information has still not been received from the client once the new expiration time is reached, the task identification information is taken out of the database again and put into the ready queue, so as to retry processing the delay task corresponding to the task information.
Specifically, the new expiration time may be calculated as new expiration time = current time + ttr, where ttr is a preset duration within which the client is expected to feed back the acknowledgement; that is, if the client has not fed back an ACK when the ttr time has elapsed, the task identification information is put into the ready queue, and the task information corresponding to the task identification information is then delivered to the client again.
In the embodiment of the application, by setting a new expiration time, a margin is reserved for the case where the client's processing takes too long, which avoids the resource waste that immediate redelivery would cause.
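A sketch of this re-queueing with a new expiration time follows; the ttr value is an assumed constant, and the helpers come from the earlier sketches.

```python
# Illustrative sketch; TTR_SECONDS is an assumed value.
import time

TTR_SECONDS = 60

def requeue_for_retry(job_id: str) -> None:
    new_expire_at = time.time() + TTR_SECONDS              # new expiration time = current time + ttr
    r.zadd(pick_bucket(job_id), {job_id: new_expire_at})   # back into a bucket, swept out later by the Timer
```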
Fig. 11B shows a flowchart of the details of step 340 of fig. 3 according to another embodiment of the present application. Referring to fig. 11B, storing the task identification information corresponding to the task information in the running queue back into the database if a confirmation message corresponding to the task information is not received from the client may specifically include the following steps:
In step 340", if the acknowledgement message corresponding to the task information is not received from the client and the number of retries of the task information has not reached the preset number threshold, the task identification information corresponding to the task information in the running queue is stored back into the database.
Specifically, referring to fig. 4, the Cleaner thread polls the running queue under each topic, extracts n pieces of task identification information from the running queue each time, and determines whether the retry number of the task information corresponding to each piece of task identification information reaches a preset number threshold; if the preset number of times threshold is not reached, the corresponding task identification information is put into the bucket again, and if the client does not feed back ACK after the ttr time, the task identification information is put into the ready queue.
In the embodiment of the application, the task information can be delivered multiple times, which avoids the data loss that would occur if the task information were not consumed normally in a single delivery.
In one embodiment of the present application, after the task identification information is fetched from the ready queue and placed in the run queue, the method for implementing the delay queue further includes:
if the confirmation message corresponding to the task information is received from the client, the task identification information corresponding to the task information is taken out of the running queue and discarded; if the confirmation message corresponding to the task information is not received from the client and the retry number of the task information reaches the preset number threshold, the task identification information is put into a dead letter queue, and the dead letter queue is used for storing the task identification information of delay tasks for which the client has still not fed back a confirmation message by the time the retry number reaches the preset number threshold.
With continued reference to fig. 4, when the acknowledgement message corresponding to the task information is not received from the client and the retry number of the task information reaches the preset number threshold, the Cleaner thread will also put the task identification information into the dead letter queue, which indicates that the client currently cannot consume the message normally. The task identification information in the dead letter queue may be retried after a period of time. After the task identification information is placed into the dead letter queue, the corresponding task information is deleted from the task pool.
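Putting these pieces together, a single Cleaner pass over one topic's running queue might be sketched as below. This is a simplification: it treats a missing task body as an implicit ACK and uses an assumed per-job retry counter; the embodiments above may track retries and ttr differently.

```python
# Illustrative, simplified sketch; MAX_RETRIES and the "retries:<jobId>" counter are assumptions.
MAX_RETRIES = 3

def cleaner_pass(topic: str, batch: int = 100) -> None:
    for raw_id in r.lrange(f"running:{topic}", 0, batch - 1):
        job_id = raw_id.decode()
        r.lrem(f"running:{topic}", 0, raw_id)     # take this entry out of the running queue
        if not r.exists(f"job:{job_id}"):         # task body already gone: the client ACKed, discard
            continue
        retries = r.incr(f"retries:{job_id}")     # count one more delivery attempt
        if retries <= MAX_RETRIES:
            requeue_for_retry(job_id)             # back into a bucket with a new expiration time
        else:
            r.lpush(f"dead:{topic}", job_id)      # move to the dead letter queue
            r.delete(f"job:{job_id}")             # and drop the task information from the task pool
```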
Specifically, a dead letter queue corresponding to each topic may be set. An important feature of a delay queue is data reliability: it must be ensured that delay messages submitted by clients are not lost, and that the client consumes them after the specified delay (unless the client's own service is unavailable).
Fig. 12 shows a schematic diagram of implementing data reliability according to one embodiment of the application. Referring to fig. 12, in order to prevent a message from being lost when the network is suddenly interrupted while the task information is being fetched and delivered to the client, or when the client crashes after receiving the message and before finishing processing it, the Poller thread uses the brpoplpush command of Redis to take a task ID out of the ready queue and put it into the running queue, and delivers the task information corresponding to the task ID to the client; if the client consumes the task information normally, it sends an ACK. The Cleaner thread takes a portion of the task identification information out of the corresponding running queue each time; if the client has fed back an ACK, the task identification information corresponding to the task information is directly discarded; if the client has not fed back an ACK and the retry number has been exceeded, it means that the client's service is abnormal and the message has still not been consumed normally after multiple deliveries, and the corresponding task identification information is put into the dead letter queue; if the client has not fed back an ACK and the retry number has not been exceeded, the corresponding task identification information is put back into the bucket group, and on its next pass the Timer thread sweeps the task identification information out of the bucket group and puts it into the ready queue.
Therefore, based on the mechanism shown in the embodiment of fig. 12, it can be ensured that even if a network abnormality occurs or the client is temporarily unavailable, the client can still eventually consume the delayed message normally.
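A minimal sketch of the delivery side of this mechanism, assuming redis-py, per-topic key names and a placeholder deliver_to_client transport (none of which are specified by the embodiment), might look like the following.

```python
import redis

r = redis.Redis(decode_responses=True)


def deliver_to_client(task_info: str) -> None:
    """Placeholder for the actual delivery channel to the client (assumed)."""
    ...


def delivery_loop(topic: str) -> None:
    """Move ready task IDs into the run queue and deliver them (illustrative sketch)."""
    ready_key = f"ready:{topic}"
    running_key = f"running:{topic}"
    task_pool_key = f"task_pool:{topic}"    # hash: task ID -> task information

    while True:
        # BRPOPLPUSH atomically pops a task ID from the ready queue and pushes it
        # into the run queue, so the ID survives a crash between the two steps.
        task_id = r.brpoplpush(ready_key, running_key, timeout=5)
        if task_id is None:
            continue  # nothing became ready within the timeout
        task_info = r.hget(task_pool_key, task_id)
        if task_info is not None:
            deliver_to_client(task_info)
        # The client is expected to ACK after processing; the Cleaner thread
        # later inspects the run queue and handles any missing ACK.
```

The atomicity of BRPOPLPUSH keeps the task identification information in at least one of the two queues at all times, which is the property the mechanism of fig. 12 relies on.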
Fig. 13 shows a schematic diagram of the effect of the scheme according to one embodiment of the application on message timeliness. Referring to fig. 13, the abscissa is time and the ordinate is the average delay of message delivery; it can be seen that the average delivery delay of messages under every topic is less than 1 second at every moment, which shows that the embodiment of the application has strong real-time performance.
In summary, in the implementation method of the delay queue provided by the embodiment of the application, the delay queue is implemented based on the Redis Zset for the first time, and an arbitrary delay time can be specified; by using the Redis Zset, grouping buckets and implementing client-side long polling, the method ensures that the consumer can consume a message immediately after it expires (that is, after the specified delay time), with high real-time performance and an average delivery delay of less than 1 second; by introducing the running queue, the message ACK mechanism, the retry strategy and the dead letter queue, it prevents messages from being lost when a client goes down or fails to process them after delivery, thereby ensuring message reliability.
The following describes an apparatus embodiment of the application, which may be used to implement the method for implementing the delay queue in the foregoing embodiments of the application. For details not disclosed in the apparatus embodiment of the application, please refer to the embodiments of the method for implementing a delay queue described above.
Fig. 14 shows a block diagram of an apparatus for implementing a delay queue according to one embodiment of the application.
Referring to fig. 14, an implementation apparatus 1400 of a delay queue according to an embodiment of the present application includes: a saving unit 1410, a scanning unit 1420, an extracting and delivering unit 1430, and a retry unit 1440. The saving unit 1410 is configured to store the task identification information and the expiration time of a delay task submitted by a client into a database after receiving the task information of the delay task; the scanning unit 1420 is configured to scan the database so as to place the task identification information of delay tasks expired in the database into a ready queue according to the expiration time; the extracting and delivering unit 1430 is configured to take task identification information out of the ready queue, put it into a running queue, and deliver the task information corresponding to the task identification information to the client, so that the client processes the delay task corresponding to the task information; the retry unit 1440 is configured to, if a confirmation message corresponding to the task information is not received from the client, re-store the task identification information corresponding to the task information in the running queue into the database, so as to retry processing the delay task corresponding to the task information, where the confirmation message is used to indicate that the client has processed the delay task.
In some embodiments of the present application, based on the foregoing scheme, the retry unit 1440 is configured to: and determining new expiration time according to the current time, and re-storing the task identification information corresponding to the task information and the new expiration time in the running queue in the database, so that when the confirmation message corresponding to the task information is not received from the client after the new expiration time is reached, the task identification information is re-fetched from the database and put in a ready queue to re-try to process the delay task corresponding to the task information.
In some embodiments of the present application, based on the foregoing scheme, the retry unit 1440 is configured to: if the confirmation message corresponding to the task information is not received from the client and the retry number of the task information does not reach a preset number threshold, re-store the task identification information corresponding to the task information in the running queue into the database.
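As an illustration of this retry path, the following Python sketch re-stores an unacknowledged task ID with a new expiration time; the 30-second backoff, the CRC32 bucket selection and the key names are assumptions made only for the sketch.

```python
import time
import zlib

import redis

r = redis.Redis(decode_responses=True)

RETRY_DELAY_SECONDS = 30   # assumed delay before the task becomes due again
BUCKET_COUNT = 8           # assumed number of buckets in the bucket group


def requeue_for_retry(topic: str, task_id: str) -> None:
    """Re-store an unacknowledged task ID with a new expiration time (sketch)."""
    running_key = f"running:{topic}"
    retries_key = f"retries:{topic}"
    bucket_key = f"bucket:{zlib.crc32(task_id.encode()) % BUCKET_COUNT}"

    new_expire_at = time.time() + RETRY_DELAY_SECONDS  # new expiration from current time

    pipe = r.pipeline()
    pipe.lrem(running_key, 1, task_id)               # take it out of the run queue
    pipe.zadd(bucket_key, {task_id: new_expire_at})  # back into the bucket group
    pipe.hincrby(retries_key, task_id, 1)            # count this delivery attempt
    pipe.execute()
```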
In some embodiments of the present application, based on the foregoing, the retry unit 1440 is further configured to, after the task identification information is taken out of the ready queue and put into the running queue: if the confirmation message corresponding to the task information is received from the client, take the task identification information corresponding to the task information out of the running queue and discard it; if the confirmation message corresponding to the task information is not received from the client and the retry number of the task information reaches a preset number threshold, put the task identification information into a dead letter queue, where the dead letter queue is used for storing the task identification information of delay tasks for which the client has not fed back the confirmation message when the retry number reaches the preset number threshold.
In some embodiments of the present application, based on the foregoing scheme, the saving unit 1410 is configured to: store the task identification information and the expiration time of the delay task as bucket elements of buckets in a bucket group into the database, so as to store the bucket group through the database, wherein the bucket group comprises a plurality of buckets.
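As a sketch of how such a saving step might look with redis-py (the task-pool hash, the bucket naming and the CRC32 bucket selection are assumptions, not part of the embodiment):

```python
import json
import time
import zlib

import redis

r = redis.Redis(decode_responses=True)

BUCKET_COUNT = 8  # assumed size of the bucket group


def save_delay_task(topic: str, task_id: str, delay_seconds: float, payload: dict) -> None:
    """Store a delay task: task information in a task pool, ID plus expiration in a bucket."""
    expire_at = time.time() + delay_seconds
    bucket_key = f"bucket:{zlib.crc32(task_id.encode()) % BUCKET_COUNT}"

    pipe = r.pipeline()
    # The full task information lives in a hash keyed by task ID (assumed "task pool").
    pipe.hset(f"task_pool:{topic}", task_id, json.dumps(payload))
    # The bucket element is the task ID; its associated score is the expiration time.
    pipe.zadd(bucket_key, {task_id: expire_at})
    pipe.execute()
```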
In some embodiments of the present application, based on the foregoing scheme, the scanning unit 1420 is configured to: scan the corresponding buckets in the bucket group through a thread corresponding to each bucket, so as to put the task identification information of the delay tasks expired in each bucket into the ready queue according to the expiration time.
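A corresponding sketch of the per-bucket scan, with one thread per bucket screening out members whose score (expiration time) is not later than the current time; the thread structure, the one-second interval and the key names are illustrative assumptions.

```python
import threading
import time

import redis

r = redis.Redis(decode_responses=True)


def scan_bucket(bucket_key: str, ready_key: str, interval: float = 1.0) -> None:
    """Timer-thread body: move expired task IDs from one bucket into the ready queue."""
    while True:
        now = time.time()
        for task_id in r.zrangebyscore(bucket_key, 0, now):  # score interval [0, now]
            pipe = r.pipeline()
            pipe.rpush(ready_key, task_id)   # into the ready queue
            pipe.zrem(bucket_key, task_id)   # out of the bucket
            pipe.execute()
        time.sleep(interval)


# One scanning thread per bucket of the (assumed) eight-bucket group, single topic.
for i in range(8):
    threading.Thread(
        target=scan_bucket,
        args=(f"bucket:{i}", "ready:example_topic"),  # "example_topic" is hypothetical
        daemon=True,
    ).start()
```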
In some embodiments of the present application, based on the foregoing solution, before delivering the task information corresponding to the task identification information to the client, the extracting and delivering unit 1430 is further configured to: receiving a long polling request initiated by a client, and suspending the long polling request; if no task identification information exists in the ready queue within a preset time period after the long polling request is received, returning an empty result to the client so that the client can reinitiate the long polling request after receiving the empty result; the extraction and delivery unit 1430 is configured to: and if the task identification information exists in the ready queue within a preset time period after the long polling request is received, the task information corresponding to the task identification information is transmitted to the client.
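The long-polling behaviour can be sketched on the server side as a blocking pop with a timeout: if a task ID appears within the waiting window it is delivered, otherwise an empty result is returned and the client re-initiates the request. The helper name, the key names and the 30-second window are assumptions.

```python
from typing import Optional

import redis

r = redis.Redis(decode_responses=True)

LONG_POLL_SECONDS = 30  # assumed preset time period while the request is suspended


def handle_long_poll(topic: str) -> Optional[str]:
    """Suspend a client's long-polling request until a task ID is ready or the window ends."""
    ready_key = f"ready:{topic}"
    running_key = f"running:{topic}"

    # Block for up to LONG_POLL_SECONDS; this both waits for a ready task ID and
    # records it in the run queue in a single step.
    task_id = r.brpoplpush(ready_key, running_key, timeout=LONG_POLL_SECONDS)
    if task_id is None:
        return None  # empty result: the client re-initiates the long-polling request
    return r.hget(f"task_pool:{topic}", task_id)  # task information delivered to the client
```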
In some embodiments of the present application, based on the foregoing scheme, the scanning unit 1420 is configured to: according to the expiration time, task identification information of the expired delay task in the database is put into a ready queue corresponding to the type of the delay task; the extraction and delivery unit 1430 is configured to: and taking out the task identification information from the ready queue corresponding to the type of the delay task and putting the task identification information into the running queue corresponding to the type of the delay task.
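Keeping one ready queue and one running queue per type of delay task can be reduced to deriving the key names from the topic; the naming convention below is only an assumed example.

```python
from typing import Tuple


def queue_keys(topic: str) -> Tuple[str, str]:
    """Assumed key-naming convention: one ready queue and one run queue per task type."""
    return f"ready:{topic}", f"running:{topic}"


ready_key, running_key = queue_keys("order_timeout")  # "order_timeout" is a hypothetical topic
```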
In some embodiments of the present application, based on the foregoing scheme, the saving unit 1410 is configured to: taking the task identification information of the delay task as an element, taking the expiration time of the delay task as a score associated with the element, and correspondingly storing the element and the score associated with the element into an ordered set of a database; the scanning unit 1420 is configured to: and screening out elements with associated scores in a designated score interval in the ordered set of the database to obtain task identification information of the delayed task expired in the database, and placing the task identification information into a ready queue.
Fig. 15 shows a schematic diagram of a computer system suitable for use in implementing an embodiment of the application.
It should be noted that, the computer system 1500 of the electronic device shown in fig. 15 is only an example, and should not impose any limitation on the functions and the application scope of the embodiments of the present application.
As shown in fig. 15, the computer system 1500 includes a central processing unit (Central Processing Unit, CPU) 1501, which can perform various appropriate actions and processes, such as performing the methods described in the above embodiments, according to a program stored in a Read-Only Memory (ROM) 1502 or a program loaded from a storage section 1508 into a random access Memory (Random Access Memory, RAM) 1503. In the RAM 1503, various programs and data required for the operation of the system are also stored. The CPU 1501, ROM 1502, and RAM 1503 are connected to each other through a bus 1504. An Input/Output (I/O) interface 1505 is also connected to bus 1504.
The following components are connected to I/O interface 1505: an input section 1506 including a keyboard, mouse, and the like; an output portion 1507 including a Cathode Ray Tube (CRT), a liquid crystal display (Liquid Crystal Display, LCD), and a speaker; a storage section 1508 including a hard disk and the like; and a communication section 1509 including a network interface card such as a LAN (Local Area Network ) card, a modem, or the like. The communication section 1509 performs communication processing via a network such as the internet. A drive 1510 is also connected to the I/O interface 1505 as needed. Removable media 1511, such as a magnetic disk, an optical disk, a magneto-optical disk, a semiconductor memory, or the like, is mounted on the drive 1510 as needed so that a computer program read therefrom is mounted into the storage section 1508 as needed.
In particular, according to embodiments of the present application, the processes described above with reference to flowcharts may be implemented as computer software programs. For example, embodiments of the present application include a computer program product comprising a computer program embodied on a computer readable medium, the computer program comprising program code for performing the method shown in the flowcharts. In such an embodiment, the computer program can be downloaded and installed from a network via the communication portion 1509, and/or installed from the removable medium 1511. When executed by a Central Processing Unit (CPU) 1501, performs the various functions defined in the system of the present application.
It should be noted that, the computer readable medium shown in the embodiments of the present application may be a computer readable signal medium or a computer readable storage medium, or any combination of the two. The computer readable storage medium can be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or a combination of any of the foregoing. More specific examples of the computer-readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-Only Memory (ROM), an erasable programmable read-Only Memory (Erasable Programmable Read Only Memory, EPROM), flash Memory, an optical fiber, a portable compact disc read-Only Memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this document, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In the present application, however, the computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, with the computer-readable program code embodied therein. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination of the foregoing. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: wireless, wired, etc., or any suitable combination of the foregoing.
The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present application. Where each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams or flowchart illustration, and combinations of blocks in the block diagrams or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units involved in the embodiments of the present application may be implemented by software or by hardware, and the described units may also be provided in a processor. In some cases, the names of the units do not constitute a limitation on the units themselves.
As an aspect, the present application also provides a computer-readable medium that may be contained in the electronic device described in the above embodiment; or may exist alone without being incorporated into the electronic device. The computer-readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to implement the methods described in the above embodiments.
It should be noted that although in the above detailed description several modules or units of a device for action execution are mentioned, such a division is not mandatory. Indeed, the features and functions of two or more modules or units described above may be embodied in one module or unit in accordance with embodiments of the application. Conversely, the features and functions of one module or unit described above may be further divided into a plurality of modules or units to be embodied.
From the above description of embodiments, those skilled in the art will readily appreciate that the example embodiments described herein may be implemented in software, or in software combined with the necessary hardware. Thus, the technical solution according to the embodiments of the present application may be embodied in the form of a software product, which may be stored in a non-volatile storage medium (such as a CD-ROM, a USB flash drive, or a removable hard disk) or on a network, and includes several instructions to cause a computing device (such as a personal computer, a server, a touch terminal, or a network device) to perform the method according to the embodiments of the present application.
It will be appreciated that in the specific embodiments of the present application, where data relating to delay tasks is involved, user approval or consent is required when the above embodiments of the application are applied to specific products or technologies, and the collection, use and processing of the relevant data must comply with the relevant laws, regulations and standards of the relevant countries and regions.
Other embodiments of the application will be apparent to those skilled in the art from consideration of the specification and practice of the embodiments disclosed herein. This application is intended to cover any variations, uses, or adaptations of the application following, in general, the principles of the application and including such departures from the present disclosure as come within known or customary practice within the art to which the application pertains.
It is to be understood that the application is not limited to the precise arrangements and instrumentalities shown in the drawings, which have been described above, and that various modifications and changes may be effected without departing from the scope thereof. The scope of the application is limited only by the appended claims.

Claims (13)

1. A method for implementing a delay queue, the method comprising:
after task information of a delay task submitted by a client is received, task identification information and expiration time of the delay task are stored in a database;
Scanning the database to put task identification information of the expired delay task in the database into a ready queue according to the expiration time;
taking out task identification information from the ready queue, putting the task identification information into an operation queue, and delivering task information corresponding to the task identification information to the client so that the client can process delay tasks corresponding to the task information;
and if a confirmation message corresponding to the task information is not received from the client, re-storing the task identification information corresponding to the task information in the running queue into the database so as to re-try to process the delay task corresponding to the task information, wherein the confirmation message is used for indicating that the client processes the corresponding delay task.
2. The method for implementing the delay queue according to claim 1, wherein said re-storing the task identification information corresponding to the task information in the running queue in the database to re-attempt to process the delay task corresponding to the task information includes:
and determining new expiration time according to the current time, and re-storing the task identification information corresponding to the task information and the new expiration time in the running queue in the database, so that when the confirmation message corresponding to the task information is not received from the client after the new expiration time is reached, the task identification information is re-fetched from the database and put in a ready queue to re-try to process the delay task corresponding to the task information.
3. The method for implementing the delay queue according to claim 1, wherein if the confirmation message corresponding to the task information is not received from the client, re-storing the task identification information corresponding to the task information in the running queue into the database comprises:
if the confirmation message corresponding to the task information is not received from the client and the retry number of the task information does not reach a preset number threshold, re-storing the task identification information corresponding to the task information in the running queue into the database.
4. The method for implementing the delay queue according to claim 3, wherein after the task identification information is taken out of the ready queue and put into the running queue, the method further comprises:
if the confirmation message corresponding to the task information is received from the client, the task identification information corresponding to the task information is taken out from the running queue and discarded;
if the confirmation message corresponding to the task information is not received from the client and the retry number of the task information reaches a preset number threshold, putting the task identification information into a dead letter queue, wherein the dead letter queue is used for storing the task identification information of delay tasks for which the client does not feed back the confirmation message when the retry number reaches the preset number threshold.
5. The method for implementing the delay queue according to claim 1, wherein storing the task identification information and the expiration time of the delay task in a database comprises:
and storing the task identification information and the expiration time of the delay task as bucket elements of the buckets in a bucket group into a database, so as to store the bucket group through the database, wherein the bucket group comprises a plurality of buckets.
6. The method of claim 5, wherein scanning the database to place task identification information of the expired delay task in the database into a ready queue according to the expiration time comprises:
scanning the corresponding buckets in the bucket group through a thread corresponding to each bucket, so as to put the task identification information of the delay tasks expired in each bucket into a ready queue according to the expiration time.
7. The method for implementing a delay queue according to claim 1, wherein before delivering the task information corresponding to the task identification information to the client, the method further comprises:
receiving a long polling request initiated by a client, and suspending the long polling request;
If no task identification information exists in the ready queue within a preset time period after the long polling request is received, returning an empty result to the client so that the client can reinitiate the long polling request after receiving the empty result;
the step of delivering the task information corresponding to the task identification information to the client includes:
and if the task identification information exists in the ready queue within a preset time period after the long polling request is received, the task information corresponding to the task identification information is transmitted to the client.
8. The method for implementing a delay queue according to any one of claims 1-7, wherein placing the task identification information of the expired delay task in the database into a ready queue according to the expiration time comprises:
according to the expiration time, task identification information of the expired delay task in the database is put into a ready queue corresponding to the type of the delay task;
the step of taking out the task identification information from the ready queue and putting the task identification information into an operation queue comprises the following steps:
and taking out the task identification information from the ready queue corresponding to the type of the delay task and putting the task identification information into the running queue corresponding to the type of the delay task.
9. The method for implementing a delay queue according to any one of claims 1-7, wherein storing task identification information and expiration time of the delay task in a database comprises:
taking the task identification information of the delay task as an element, taking the expiration time of the delay task as a score associated with the element, and correspondingly storing the element and the score associated with the element into an ordered set of a database;
the step of placing the task identification information of the expired delay task in the database into a ready queue according to the expiration time comprises the following steps:
and screening out elements with associated scores in a designated score interval in the ordered set of the database to obtain task identification information of the delayed task expired in the database, and placing the task identification information into a ready queue.
10. An implementation apparatus for a delay queue, wherein the apparatus includes:
the storage unit is used for storing the task identification information and the expiration time of the delay task into the database after receiving the task information of the delay task submitted by the client;
the scanning unit is used for scanning the database so as to put task identification information of the delayed task expired in the database into a ready queue according to the expiration time;
The extracting and delivering unit is used for taking out the task identification information from the ready queue, putting the task identification information into an operation queue, and delivering the task information corresponding to the task identification information to the client so that the client can process the delay task corresponding to the task information;
and the retry unit is used for re-storing the task identification information corresponding to the task information in the running queue into the database to retry processing the delay task corresponding to the task information if the confirmation message corresponding to the task information is not received from the client, wherein the confirmation message is used for indicating that the client processes the corresponding delay task.
11. A computer readable medium on which a computer program is stored, characterized in that the computer program, when executed by a processor, implements a method of implementing a delay queue according to any one of claims 1 to 9.
12. An electronic device, comprising:
one or more processors;
storage means for storing one or more programs which when executed by the one or more processors cause the one or more processors to implement a method of implementing a delay queue as claimed in any one of claims 1 to 9.
13. A computer program product, characterized in that it comprises computer instructions stored in a computer readable storage medium, from which computer instructions a processor of a computer device reads, the processor executing the computer instructions, causing the computer device to perform the method of implementing a delay queue according to any one of claims 1 to 9.
CN202211163927.8A 2022-09-23 2022-09-23 Method and device for realizing delay queue, computer readable medium and electronic equipment Pending CN116991599A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211163927.8A CN116991599A (en) 2022-09-23 2022-09-23 Method and device for realizing delay queue, computer readable medium and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211163927.8A CN116991599A (en) 2022-09-23 2022-09-23 Method and device for realizing delay queue, computer readable medium and electronic equipment

Publications (1)

Publication Number Publication Date
CN116991599A true CN116991599A (en) 2023-11-03

Family

ID=88532737

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211163927.8A Pending CN116991599A (en) 2022-09-23 2022-09-23 Method and device for realizing delay queue, computer readable medium and electronic equipment

Country Status (1)

Country Link
CN (1) CN116991599A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination