CN116701020A - Message delay processing method, device, equipment, medium and program product - Google Patents

Message delay processing method, device, equipment, medium and program product

Info

Publication number
CN116701020A
CN116701020A (application CN202310675350.7A)
Authority
CN
China
Prior art keywords
queue
message
detail data
delay
message identification
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310675350.7A
Other languages
Chinese (zh)
Inventor
李菁
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Industrial and Commercial Bank of China Ltd ICBC
Original Assignee
Industrial and Commercial Bank of China Ltd ICBC
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Industrial and Commercial Bank of China Ltd ICBC filed Critical Industrial and Commercial Bank of China Ltd ICBC
Priority to CN202310675350.7A priority Critical patent/CN116701020A/en
Publication of CN116701020A publication Critical patent/CN116701020A/en
Pending legal-status Critical Current

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00: Arrangements for program control, e.g. control units
    • G06F9/06: Arrangements for program control using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46: Multiprogramming arrangements
    • G06F9/54: Interprogram communication
    • G06F9/546: Message passing systems or structures, e.g. queues
    • G06F2209/00: Indexing scheme relating to G06F9/00
    • G06F2209/54: Indexing scheme relating to G06F9/54
    • G06F2209/548: Queue

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Information Transfer Between Computers (AREA)

Abstract

The disclosure provides a message delay processing method, which can be applied to the technical field of big data. The message delay processing method includes the following steps: receiving a delay message from an application program, wherein the delay message includes a message identifier and detail data, and the message identifier corresponds to a delay timestamp; storing the message identifier of the delay message into a first queue; storing the detail data of the delay message into a second queue; upon an expiration event of the delay timestamp being triggered, pulling the message identifier from the first queue and storing it in a third queue; deleting the message identifier from the first queue; acquiring the detail data in the second queue based on the message identifier in the third queue, wherein the detail data is consumed by a consumer; and deleting the detail data in the second queue and the message identifier in the third queue in a case that the consumer successfully consumes the detail data. The present disclosure also provides a message delay processing apparatus, device, storage medium and program product.

Description

Message delay processing method, device, equipment, medium and program product
Technical Field
The present disclosure relates to the field of big data technologies, and in particular, to a method, an apparatus, a device, a medium, and a program product for message delay processing.
Background
Currently, the delayed message queues on the market are implemented through Redisson distributed locks or message middleware.
However, in actual product development, implementing a delayed message queue with the rabbitmq-delayed-message-exchange plug-in of the message middleware RabbitMQ introduces a heavyweight third-party framework and causes broadcast consumption; likewise, Redisson distributed locks are typically not added directly to a project.
Therefore, how to implement a lightweight delayed message queue is an important problem.
Disclosure of Invention
In view of the foregoing, the present disclosure provides a message delay processing method, apparatus, device, medium, and program product that implement a lightweight delayed message queue.
According to a first aspect of the present disclosure, there is provided a message delay processing method, including: receiving a delay message from an application program, wherein the delay message includes a message identifier and detail data, and the message identifier corresponds to a delay timestamp; storing the message identifier of the delay message into a first queue; storing the detail data of the delay message into a second queue; upon an expiration event of the delay timestamp being triggered, pulling the message identifier from the first queue and storing it in a third queue; deleting the message identifier from the first queue; acquiring the detail data in the second queue based on the message identifier in the third queue, wherein the detail data is consumed by a consumer; and deleting the detail data in the second queue and the message identifier in the third queue in a case that the consumer successfully consumes the detail data.
According to an embodiment of the present disclosure, after the detail data in the second queue is acquired based on the message identifier in the third queue and consumed by a consumer, the method further includes: in a case that the consumer fails to consume the detail data, pulling the message identifier from the third queue and storing it in the first queue; and deleting the message identifier from the third queue.
According to an embodiment of the present disclosure, the method further includes: checking, according to a preset time period, whether the delay timestamp corresponding to the message identifier in the third queue exceeds an expiration threshold; in a case where the delay timestamp exceeds the expiration threshold, acquiring the detail data in the second queue based on the message identifier in the third queue, wherein the detail data is consumed by a consumer; and deleting the detail data in the second queue and the message identifier in the third queue in a case that the consumer successfully consumes the detail data.
According to an embodiment of the present disclosure, the second queue is established based on a Hash structure, the Hash structure is formed of Key-Value pairs, each Key-Value pair includes a Hash Key and a Value, the Hash Key stores a message identifier, and the Value of the second queue stores detail data; acquiring the detail data in the second queue based on the message identifier in the third queue, wherein the detail data is consumed by a consumer, includes: querying the Hash Key of the second queue based on the message identifier; and obtaining the detail data stored in the corresponding Value based on the Hash Key.
According to an embodiment of the present disclosure, pulling the message identifier from the first queue into the third queue and deleting the message identifier from the first queue constitute an atomic operation; and pulling the message identifier from the third queue into the first queue and deleting the message identifier from the third queue likewise constitute an atomic operation.
According to the embodiment of the disclosure, the first queue, the second queue and the third queue are uniformly maintained by a preset number of threads; or the first queue, the second queue and the third queue are respectively maintained by the threads corresponding to the first queue, the second queue and the third queue one by one.
According to an embodiment of the present disclosure, the first queue is established based on an ordered-set structure, and the third queue is established based on a normal (unordered) set structure.
In a second aspect of the present disclosure, there is provided a message delay processing apparatus, including: the receiving module is used for receiving a delay message from an application program, wherein the delay message comprises a message identifier and detail data, and the message identifier corresponds to a delay time stamp; the first storing module is used for storing the message identification of the delay message into a first queue; the second storing module is used for storing the detail data of the delay message into a second queue; the pulling module is used for triggering an expiration event of the delay time stamp, and pulling the message identifier in the first queue to be stored in a third queue; the deleting module is used for deleting the message identification in the first queue; a detail data acquisition module, configured to acquire the detail data in the second queue based on a message identifier in the third queue, where the detail data is consumed by a consumer; and the ending module is used for deleting the detail data in the second queue and the message identification in the third queue under the condition that the consumer successfully consumes the detail data.
According to an embodiment of the disclosure, the apparatus further comprises: the rollback module is used for pulling the message identification in the third queue and storing the message identification in the first queue under the condition that the consumer fails to consume the detail data; and deleting the message identification in the third queue.
According to an embodiment of the disclosure, the apparatus further comprises a periodic detection module, configured to check, according to a preset time period, whether the delay timestamp corresponding to the message identifier in the third queue exceeds an expiration threshold; acquire, in a case where the delay timestamp exceeds the expiration threshold, the detail data in the second queue based on the message identifier in the third queue, wherein the detail data is consumed by a consumer; and delete the detail data in the second queue and the message identifier in the third queue in a case that the consumer successfully consumes the detail data.
According to an embodiment of the disclosure, the second queue is established based on a Hash structure, the Hash structure is formed of Key-Value pairs, each Key-Value pair includes a Hash Key and a Value, the Hash Key stores a message identifier, the Value of the second queue stores detail data, and the detail data acquisition module is configured to query the Hash Key of the second queue based on the message identifier; and obtain the detail data stored in the corresponding Value based on the Hash Key.
According to an embodiment of the present disclosure, pulling the message identifier from the first queue into the third queue and deleting the message identifier from the first queue constitute an atomic operation; and pulling the message identifier from the third queue into the first queue and deleting the message identifier from the third queue likewise constitute an atomic operation.
According to the embodiment of the disclosure, the first queue, the second queue and the third queue are uniformly maintained by a preset number of threads; or the first queue, the second queue and the third queue are respectively maintained by the threads corresponding to the first queue, the second queue and the third queue one by one.
According to an embodiment of the present disclosure, the first queue is established based on an ordered-set structure, and the third queue is established based on a normal (unordered) set structure.
In a third aspect of the present disclosure, there is provided an electronic device, comprising: one or more processors; and a memory for storing one or more programs, wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to perform the message delay processing method described above.
In a fourth aspect of the present disclosure, there is also provided a computer-readable storage medium having stored thereon executable instructions that, when executed by a processor, cause the processor to perform the above-described message delay processing method.
In a fifth aspect of the present disclosure, there is also provided a computer program product comprising a computer program which, when executed by a processor, implements the above-described message delay processing method.
In the embodiments of the present disclosure, storing the identifiers of the delay messages in the first queue and the third queue according to the different processing stages ensures that the first queue, which directly interfaces with the service, is less affected, which benefits the existing service; separating the identifiers from the detail data allows the delay processing method to handle a larger throughput; further, setting three queues increases the fault tolerance of the scheme. Compared with the third-party frameworks of the prior art, the message delay processing method of the embodiments of the present disclosure is lighter, simpler to use, more reliable, and does not lose data.
Drawings
The foregoing and other objects, features and advantages of the disclosure will be more apparent from the following description of embodiments of the disclosure with reference to the accompanying drawings, in which:
FIG. 1 schematically illustrates an application scenario diagram of a message delay processing method according to an embodiment of the present disclosure;
fig. 2 schematically illustrates a flow chart of a message delay processing method according to an embodiment of the disclosure;
FIG. 3 schematically illustrates a flow chart of another message delay processing method in accordance with an embodiment of the present disclosure;
FIG. 4 schematically illustrates a flow chart of another message delay processing method in accordance with an embodiment of the present disclosure;
FIG. 5 schematically illustrates a full flow diagram of a message delay processing method according to an embodiment of the disclosure;
fig. 6 schematically illustrates a block diagram of a message delay processing apparatus according to an embodiment of the present disclosure; and
fig. 7 schematically illustrates a block diagram of an electronic device adapted to implement a message delay processing method according to an embodiment of the disclosure.
Detailed Description
Hereinafter, embodiments of the present disclosure will be described with reference to the accompanying drawings. It should be understood that the description is only exemplary and is not intended to limit the scope of the present disclosure. In the following detailed description, for purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding of the embodiments of the present disclosure. It may be evident, however, that one or more embodiments may be practiced without these specific details. In addition, in the following description, descriptions of well-known structures and techniques are omitted so as not to unnecessarily obscure the concepts of the present disclosure.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the disclosure. The terms "comprises," "comprising," and/or the like, as used herein, specify the presence of stated features, steps, operations, and/or components, but do not preclude the presence or addition of one or more other features, steps, operations, or components.
All terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art unless otherwise defined. It should be noted that the terms used herein should be construed to have meanings consistent with the context of the present specification and should not be construed in an idealized or overly formal manner.
Where expressions like "at least one of A, B and C" are used, they should generally be interpreted according to the meaning commonly understood by those skilled in the art (e.g., "a system having at least one of A, B and C" shall include, but not be limited to, a system having A alone, B alone, C alone, A and B together, A and C together, B and C together, and/or A, B and C together, etc.).
To solve the technical problems in the prior art, an embodiment of the present disclosure provides a message delay processing method, including: receiving a delay message from an application program, wherein the delay message includes a message identifier and detail data, and the message identifier corresponds to a delay timestamp; storing the message identifier of the delay message into a first queue; storing the detail data of the delay message into a second queue; upon an expiration event of the delay timestamp being triggered, pulling the message identifier from the first queue and storing it in a third queue; deleting the message identifier from the first queue; acquiring the detail data in the second queue based on the message identifier in the third queue, wherein the detail data is consumed by a consumer; and deleting the detail data in the second queue and the message identifier in the third queue in a case that the consumer successfully consumes the detail data.
In the embodiments of the present disclosure, storing the identifiers of the delay messages in the first queue and the third queue according to the different processing stages ensures that the first queue, which directly interfaces with the service, is less affected, which benefits the existing service; separating the identifiers from the detail data allows the delay processing method to handle a larger throughput; further, setting three queues increases the fault tolerance of the scheme. Compared with the third-party frameworks of the prior art, the message delay processing method of the embodiments of the present disclosure is lighter, simpler to use, more reliable, and does not lose data.
Fig. 1 schematically illustrates an application scenario diagram of a message delay processing method according to an embodiment of the present disclosure.
As shown in fig. 1, an application scenario 100 according to this embodiment may include terminal devices 101, 102, 103, a network 104, and a server 105. The network 104 is used as a medium to provide communication links between the terminal devices 101, 102, 103 and the server 105. The network 104 may include various connection types, such as wired, wireless communication links, or fiber optic cables, among others.
The user may interact with the server 105 via the network 104 using the terminal devices 101, 102, 103 to receive or send messages or the like. Various communication client applications, such as shopping class applications, web browser applications, search class applications, instant messaging tools, mailbox clients, social platform software, etc. (by way of example only) may be installed on the terminal devices 101, 102, 103.
The terminal devices 101, 102, 103 may be a variety of electronic devices having a display screen and supporting web browsing, including but not limited to smartphones, tablets, laptop and desktop computers, and the like.
The server 105 may be a server providing various services, such as a background management server (by way of example only) providing support for websites browsed by users using the terminal devices 101, 102, 103. The background management server may analyze and process the received data such as the user request, and feed back the processing result (e.g., the web page, information, or data obtained or generated according to the user request) to the terminal device.
It should be noted that the message delay processing method provided in the embodiments of the present disclosure may be generally executed by the server 105. Accordingly, the message delay processing apparatus provided in the embodiments of the present disclosure may be generally disposed in the server 105. The message delay processing method provided by the embodiments of the present disclosure may also be performed by a server or a server cluster that is different from the server 105 and is capable of communicating with the terminal devices 101, 102, 103 and/or the server 105. Accordingly, the message delay processing apparatus provided by the embodiments of the present disclosure may also be provided in a server or a server cluster that is different from the server 105 and is capable of communicating with the terminal devices 101, 102, 103 and/or the server 105.
It should be understood that the number of terminal devices, networks and servers in fig. 1 is merely illustrative. There may be any number of terminal devices, networks, and servers, as desired for implementation.
The message delay processing method of the disclosed embodiment will be described in detail below with reference to fig. 2 to 5 based on the scenario described in fig. 1.
Fig. 2 schematically illustrates a flow chart of a message delay processing method according to an embodiment of the disclosure.
As shown in fig. 2, the message delay processing method of this embodiment includes operations S210 to S270, and the message delay processing method may be performed by the server 105.
According to an embodiment of the present disclosure, the first queue is established based on an ordered-set structure, and the third queue is established based on a normal (unordered) set structure.
According to an embodiment of the present disclosure, the second queue is established based on a hash structure.
Specifically, the three queues used in the embodiments of the present disclosure are implemented based on three Redis data structures: the first queue uses an ordered-set structure (hereafter zset), the second queue uses a hash structure (hereafter hash), and the third queue uses a normal set structure. The zset and set data structures are similar, except that each member of a zset can carry a delay timestamp (stored in its score), while a set has no score and therefore stores no delay timestamp. It will be appreciated that, according to their functions, the first, second and third queues may be referred to as: the data queue zset, the data detail queue hash, and the data backup queue set.
Further, a newly pushed delay message is stored in the first queue zset, where the Key is stone:[Module]:{[Topic]}job_index, the Value is the message identifier of each delay message (the identifier is non-repeatable), and the Score is the timestamp at which the delay message should be executed. The Key of the second queue hash is stone:[Module]:{[Topic]}job_delay, the Hash Key is the message identifier of each delay message (also a non-repeatable ID), and the Value is the detail data of the delay message. The Key of the data backup queue set is stone:[Module]:{[Topic]}Back_up, and its Value is the message identifier of each delay message, which is likewise non-repeatable.
It should be noted that, compared with the conventional List structure, the ordered-set and normal-set structures need not follow the first-in-first-out principle and allow arbitrary members to be extracted from the queue, so the first and third queues have a certain degree of freedom and are better suited to the delay-queue scenario; this queue structure is also suitable for the rollback operation in the case of consumption failure.
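The three-queue layout above can be sketched with plain Python containers standing in for the Redis structures: a dict for the zset (member to score), a dict for the hash, and a set for the backup set. The class and attribute names (DelayQueues, job_index, job_delay, back_up) are illustrative, borrowed from the translated key suffixes; they are not the patent's exact Redis keys.

```python
class DelayQueues:
    """In-memory stand-in for the three Redis queues described above."""

    def __init__(self):
        self.job_index = {}   # first queue (zset): message id -> delay timestamp (score)
        self.job_delay = {}   # second queue (hash): message id -> detail data
        self.back_up = set()  # third queue (set): backup of expired message ids

    def push(self, msg_id: str, detail: str, fire_at: float) -> None:
        """Operations S210-S230: store the id (with its score) in the
        first queue and the detail data in the second queue."""
        self.job_index[msg_id] = fire_at
        self.job_delay[msg_id] = detail
```

A real deployment would replace the two dicts and the set with Redis ZADD, HSET and SADD calls against the keys named in the paragraph above.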
In operation S210, a delay message from an application is received, the delay message including a message identification and detail data, the message identification corresponding to a delay time stamp.
In operation S220, a message identification of the delayed message is stored in a first queue.
In operation S230, the detail data of the delay message is stored in the second queue.
Specifically, the application program sends delay messages, each of which includes the timestamp at which it should be executed; for example, if the delay timestamp of a delay message indicates that the message is to be consumed after 10 minutes, the delay timestamp corresponds to a point 10 minutes in the future. The message identifier is stored in the first queue, and the detail data corresponding to the message identifier is stored in the second queue. It should be noted that only the Score of the first queue is used to store the delay timestamp; the second queue stores no delay timestamp and has no Score field for one.
In operation S240, an expiration event of the delay time stamp is triggered, and the message identifier in the first queue is pulled and stored in a third queue.
In operation S250, the message identification in the first queue is deleted.
Specifically, a thread is preset in the application program (or component); the logic of the thread is to continuously pop expired message identifiers from the first queue zset, back them up into the third queue set, and simultaneously delete the original message identifiers from the first queue zset to avoid repeated consumption.
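The backup thread's core step (operations S240 and S250) can be sketched as follows, using the same in-memory stand-ins as above: every id whose score has expired moves from the first queue into the third queue and is deleted from the first queue so no other instance can consume it again. In Redis the patent performs this atomically with a Lua script; in this single-threaded sketch the function body plays that role.

```python
def pop_expired(zset: dict, backup: set, now: float) -> list:
    """Move expired message ids from the first queue (zset stand-in)
    into the third queue (backup set) and delete them from the first
    queue, returning the ids that were transferred."""
    expired = [mid for mid, score in zset.items() if score <= now]
    for mid in expired:
        backup.add(mid)   # back the id up in the third queue
        del zset[mid]     # then remove it from the first queue
    return expired
```

The returned list corresponds to the set of expired identifiers that the thread hands back to the program for detail lookup.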
According to the embodiment of the disclosure, the first queue, the second queue and the third queue are uniformly maintained by a preset number of threads; or the first queue, the second queue and the third queue are respectively maintained by the threads corresponding to the first queue, the second queue and the third queue one by one.
Specifically, the first queue, the second queue and the third queue are categories of queue function, so a message delay processing architecture may use multiple first queues, second queues and third queues. Further, the threads maintaining these queues may be allocated according to the number of queues or of messages (tasks). For example: (1) one or a few threads maintain all queues, which avoids contention for resources when the number of queues is relatively small; (2) one thread maintains each queue, which distributes the messages when the number of messages is large.
In operation S260, the detail data in the second queue is obtained based on the message identification in the third queue, wherein the detail data is consumed by a consumer.
According to an embodiment of the present disclosure, the second queue is established based on a Hash structure, the Hash structure is formed by a Key-Value pair, the Key-Value pair includes a Hash Key and a Value, the Hash Key stores a message identifier, the Value of the second queue stores detail data, and the detail data in the second queue is acquired based on the message identifier in the third queue, where the detail data is consumed by a consumer, and includes: querying the Hash Key of the second queue based on the message identification query; and determining storage detail data in the corresponding Value based on the Hash Key.
Specifically, after the thread transfers the data to the third queue set, it returns the set of delay message identifiers; after the program receives the delay message identifiers, it obtains the corresponding delay message details from the data detail queue hash through the IDs, and then passes them to the corresponding message processor/consumer for consumption.
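Operation S260 can be sketched as a lookup of each backed-up id against the second (hash) queue followed by a hand-off to a consumer callback. The callback signature here is an assumption made for illustration; ids with no matching detail entry are skipped.

```python
def dispatch(ids, hash_queue: dict, consumer) -> dict:
    """Resolve each expired message id against the second (hash) queue
    and pass the detail payloads to the consumer (operation S260)."""
    details = {mid: hash_queue[mid] for mid in ids if mid in hash_queue}
    for mid, detail in details.items():
        consumer(mid, detail)  # the consumer consumes the detail data
    return details
```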
In operation S270, in case that the consumer succeeds in consuming the detail data, the detail data in the second queue and the message identification in the third queue are deleted.
Specifically, after the message is processed successfully, the component commits, that is, deletes the corresponding records in the second queue and the third queue.
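The commit step (operation S270) reduces to two deletions against the in-memory stand-ins: the detail data leaves the second queue and the backed-up id leaves the third queue.

```python
def commit(msg_id: str, hash_queue: dict, backup: set) -> None:
    """On successful consumption, delete the detail data from the
    second queue and the id from the third queue (operation S270)."""
    hash_queue.pop(msg_id, None)  # remove detail data from the second queue
    backup.discard(msg_id)        # remove the backed-up id from the third queue
```

In Redis these would be HDEL and SREM on the hash and backup-set keys respectively.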
In the embodiments of the present disclosure, storing the identifiers of the delay messages in the first queue and the third queue according to the different processing stages ensures that the first queue, which directly interfaces with the service, is less affected, which benefits the existing service; separating the identifiers from the detail data allows the delay processing method to handle a larger throughput; further, setting three queues increases the fault tolerance of the scheme. Compared with the third-party frameworks of the prior art, the message delay processing method of the embodiments of the present disclosure is lighter, simpler to use, more reliable, and does not lose data.
Fig. 3 schematically illustrates a flow chart of another message delay processing method according to an embodiment of the disclosure.
As shown in fig. 3, the message delay processing method of this embodiment includes operations S310 to S320 in addition to operations S210 to S270, and operations S310 to S320 are performed after operation S260.
In operation S310, in a case that the consumer fails to consume the detail data, the message identifier in the third queue is pulled and stored in the first queue.
In operation S320, the message identification in the third queue is deleted.
Specifically, when message processing fails, the rollback transfers the message ID from the third queue set back into the first queue zset, and deletes it from the third (backup) queue. That is, when an unknown error occurs, the message identifier in the third queue can be stored in the first queue again, realizing an effective rollback with only a small impact on the existing service.
Operations S240 to S250 form an atomic operation, and operations S310 to S320 likewise form an atomic operation. When each pair of steps is atomic, a failure of either step returns the state to what it was before the operation began; this avoids the situation where the new queue has been written but the old queue has not been deleted, or the old queue has been deleted but the new queue has not been written, and ensures that the service proceeds normally.
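The rollback pair (S310 and S320) can be sketched as an all-or-nothing move of the id from the third queue back into the first queue with a fresh delay timestamp. The retry_at parameter is an assumed detail for the new score; in the patent's Redis implementation the atomicity comes from a Lua script, whereas in this single-threaded sketch the two steps simply cannot be observed half-done.

```python
def rollback(msg_id: str, zset: dict, backup: set, retry_at: float) -> bool:
    """Move a failed message id from the third queue back into the
    first queue (operations S310-S320); returns False if there is
    nothing to roll back."""
    if msg_id not in backup:
        return False            # nothing to roll back
    zset[msg_id] = retry_at     # re-enqueue into the first queue with a new score
    backup.discard(msg_id)      # delete the id from the third queue
    return True
```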
Fig. 4 schematically illustrates a flow chart of another message delay processing method according to an embodiment of the disclosure.
As shown in fig. 4, the message delay processing method of this embodiment further includes operations S410 to S430.
In operation S410, according to a preset time period, it is checked whether the delay time stamp corresponding to the message identifier in the third queue exceeds an expiration threshold.
In operation S420, in a case where the delay time stamp exceeds the expiration threshold, the detail data in the second queue is acquired based on the message identification in the third queue, wherein the detail data is consumed by a consumer.
In operation S430, in case that the consumer succeeds in consuming the detail data, the detail data in the second queue and the message identification in the third queue are deleted.
Specifically, after operations S240 to S250 are performed, the delay timestamp of the record is stored elsewhere: since the third queue set holds no timestamp, the delay timestamp and the message identifier are kept in a separate location, and during the periodic check these delay timestamps are used to determine whether unprocessed message identifiers remain in the third queue. If any are detected, the message is re-consumed and committed based on the data backed up in the third queue, following operations S410 to S430 above.
It should be noted that in some extreme cases, a message may be pushed out of the first queue and transferred to the third queue, its consumption submitted but never completed, for example when the server goes down; the message is then never consumed and remains in the third queue indefinitely. To prevent such unprocessed messages from occupying third-queue space for a long time, logic is provided that periodically scans the third queue for unprocessed messages, ensuring that each message is processed in time and that the space of the third queue is not occupied pointlessly.
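The periodic scan of operations S410 to S430 can be sketched as follows. This is an in-memory stand-in, assuming the separately recorded timestamps live in a plain dict (`backup_timestamps`) and the detail data in another (`detail_hash`); all names are illustrative.

```python
def sweep_stale(backup_timestamps, detail_hash, now, expiry_threshold, consume):
    """Re-drive messages that have sat in the third queue past the threshold.

    backup_timestamps: msg_id -> delay timestamp recorded when the id was
    transferred (the set itself cannot hold scores, so timestamps are kept
    elsewhere, as the description above notes).
    consume: callback returning True on successful consumption.
    """
    for msg_id in list(backup_timestamps):           # S410: periodic check
        if now - backup_timestamps[msg_id] > expiry_threshold:
            detail = detail_hash.get(msg_id)
            if detail is not None and consume(detail):  # S420: re-consume
                detail_hash.pop(msg_id, None)           # S430: delete detail data
                backup_timestamps.pop(msg_id, None)     # S430: delete the id
```

A scheduler (e.g. a background thread) would call `sweep_stale` at the preset time period with the current timestamp.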
Fig. 5 schematically illustrates a full flow chart of a message delay processing method according to an embodiment of the disclosure.
As shown in fig. 5, a data queue zset, a data detail queue hash, and a data backup queue set are provided. First the message is pushed into the zset; then, when the delay time arrives, a background thread transfers the message from the zset into the hash structure, where the unique id of the message is the hash key and the message is the value, and deletes the message from the zset to prevent multiple consuming instances from consuming the same message and handing it back to the program more than once. This step guarantees atomicity using a Lua script (Redis executes such scripts natively). The program then consumes the delayed message: on success, the commit deletes the message from the hash; on failure, the rollback retrieves the message and transfers it from the hash back to the zset.
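The full flow above can be sketched end to end with an in-memory stand-in for the three Redis structures. The class below is a simplified model, assuming a dict for the zset (id to score), a dict for the hash, and a Python set for the backup queue; the class and method names are hypothetical, not from the source.

```python
class DelayQueueSketch:
    """In-memory stand-in for the three queues of the scheme:
    a zset (first queue), a hash (second queue), and a backup set
    (third queue). Not a Redis client; illustration only."""

    def __init__(self):
        self.zset = {}       # first queue: message id -> delay timestamp (score)
        self.hash = {}       # second queue: message id -> detail data
        self.backup = set()  # third queue: ids whose consumption is in flight

    def push(self, msg_id, detail, delay_ts):
        """S220/S230: store the identifier and the detail data separately."""
        self.zset[msg_id] = delay_ts
        self.hash[msg_id] = detail

    def transfer_due(self, now):
        """S240/S250: move expired ids from the zset to the backup set
        (atomic in Redis via a Lua script)."""
        due = [m for m, ts in self.zset.items() if ts <= now]
        for m in due:
            self.backup.add(m)
            del self.zset[m]
        return due

    def consume(self, msg_id):
        """S260: look up the detail data by message id."""
        return self.hash[msg_id]

    def commit(self, msg_id):
        """S270: consumption succeeded, so drop detail data and backup id."""
        self.hash.pop(msg_id, None)
        self.backup.discard(msg_id)

    def rollback(self, msg_id, delay_ts):
        """S310/S320: consumption failed, so return the id to the first queue."""
        self.zset[msg_id] = delay_ts
        self.backup.discard(msg_id)
```

A consumer loop would call `transfer_due`, then `consume` each returned id, finishing with `commit` on success or `rollback` on failure.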
It should be explained that the developer's reasoning behind the queue arrangement in the embodiment of the present disclosure is as follows:
1. Assume that only one data queue is used and the delayed messages are stored directly in a single zset queue or list queue. If the data queue has a zset structure, multiple instances may race for the same delayed message that has already expired; if it has a list structure, the first instance must continuously pop messages and then check whether each has expired, which makes the system overhead too large. Even if multiple instances can be prevented from racing for the same expired delayed message, data may still be lost: for example, an instance may acquire an expired delayed message and be consuming it when the consumption fails. Although the message could be pushed back into the list queue, the entire message is lost if the instance happens to be shutting down at that moment; and if it is a batch consumer, even more messages are lost.
2. Assume that two queues are used. The first queue is given a zset structure because pending messages must be retrieved by message expiration time, which the sorted storage of a zset supports. Here the zset score is the delay timestamp, the value is the delayed message, and the message needs a unique id. The second queue may be a list, set, or hash, because when consumption fails the delayed message must be conveniently found by its message id and pushed back into the first queue. For the second queue structure, the developer therefore chooses a hash.
In summary, the combined structure selected by the developer is zset + hash.
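Why a score-ordered structure suits the first queue can be sketched with Python's `heapq` as a rough stand-in for the zset: the earliest-due message is always on top, so expired messages are found without scanning the entire queue. This is an illustrative analogy, not the Redis implementation.

```python
import heapq

def pop_due(heap, now):
    """Pop every (delay_timestamp, msg_id) entry whose timestamp has passed.
    The heap keeps the earliest-due entry at index 0, so the scan stops as
    soon as the top entry is not yet due -- no full traversal needed."""
    due = []
    while heap and heap[0][0] <= now:
        ts, msg_id = heapq.heappop(heap)
        due.append(msg_id)
    return due
```

By contrast, a plain list would force the consumer to pop and inspect every element to find the expired ones, which is the overhead the developer's first analysis points out.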
Based on the message delay processing method, the disclosure also provides a message delay processing device. The device will be described in detail below in connection with fig. 6.
Fig. 6 schematically shows a block diagram of a message delay processing apparatus according to an embodiment of the present disclosure.
As shown in fig. 6, the message delay processing apparatus 600 of this embodiment includes a receiving module 610, a first storing module 620, a second storing module 630, a pulling module 640, a deleting module 650, a detailed data acquiring module 660, and an ending module 670.
The receiving module 610 is configured to receive a delayed message from an application program, where the delayed message includes a message identifier and detailed data, and the message identifier corresponds to a delay timestamp. In an embodiment, the receiving module 610 may be configured to perform the operation S210 described above, which is not described herein.
The first storing module 620 is configured to store the message identifier of the delayed message in the first queue. In an embodiment, the first storing module 620 may be used to perform the operation S220 described above, which is not described herein.
The second storing module 630 is configured to store the detail data of the delay message in a second queue. In an embodiment, the second storing module 630 may be used to perform the operation S230 described above, which is not described herein.
The pulling module 640 is configured to trigger an expiration event of the delay time stamp, and pull the message identifier in the first queue to store in a third queue. In an embodiment, the pulling module 640 may be used to perform the operation S240 described above, which is not described herein.
The deleting module 650 is configured to delete the message identifier in the first queue. In an embodiment, the deleting module 650 may be configured to perform the operation S250 described above, which is not described herein.
A detail data acquisition module 660 is configured to acquire the detail data in the second queue based on the message identifier in the third queue, where the detail data is consumed by a consumer. In an embodiment, the detail data obtaining module 660 may be configured to perform the operation S260 described above, which is not described herein.
The ending module 670 is configured to delete the detail data in the second queue and the message identifier in the third queue if the consumer succeeds in consuming the detail data. In an embodiment, the ending module 670 may be configured to perform the operation S270 described above, which is not described herein.
In the embodiment of the disclosure, storing the message identifiers of the delayed data in the first queue and the third queue according to the processing stage ensures that the first queue, which directly interfaces with the service, is less affected, benefiting the existing service; separating the identifier from the data allows the delay processing method to cope with a larger processing volume; further, setting up three queues increases the fault tolerance of the scheme. Compared with the third-party frameworks of the prior art, the message delay processing method of the embodiment of the disclosure is lighter, simpler to use, and more reliable, without losing data.
According to an embodiment of the disclosure, the apparatus further comprises: the rollback module is used for pulling the message identification in the third queue and storing the message identification in the first queue under the condition that the consumer fails to consume the detail data; and deleting the message identification in the third queue.
According to an embodiment of the disclosure, the apparatus further comprises: the periodic detection module is used for checking, according to a preset time period, whether the delay time stamp corresponding to the message identifier in the third queue exceeds an expiration threshold; acquiring the detail data in the second queue based on the message identification in the third queue under the condition that the delay time stamp exceeds the expiration threshold, wherein the detail data is consumed by a consumer; and deleting the detail data in the second queue and the message identification in the third queue under the condition that the consumer successfully consumes the detail data.
According to an embodiment of the disclosure, the second queue is established based on a Hash structure, the Hash structure is formed of Key-Value pairs, each Key-Value pair includes a Hash Key and a Value, the message identifier is stored in the Hash Key, and the detail data is stored in the Value of the second queue; the detail data acquisition module is used for querying the Hash Key of the second queue based on the message identifier, and determining the detail data stored in the corresponding Value based on the Hash Key.
According to an embodiment of the present disclosure, pulling the message identification in the first queue to store it in the third queue and deleting the message identification in the first queue together form an atomic operation; and pulling the message identification in the third queue to store it in the first queue and deleting the message identification in the third queue together form an atomic operation.
According to the embodiment of the disclosure, the first queue, the second queue, and the third queue are uniformly maintained by a preset number of threads; or the first queue, the second queue, and the third queue are maintained by threads in one-to-one correspondence with them.
According to an embodiment of the present disclosure, the first queue is established based on an ordered set of structures, and the third queue is established based on a normal set of structures.
Any of the receiving module 610, the first storing module 620, the second storing module 630, the pulling module 640, the deleting module 650, the detail data obtaining module 660, and the ending module 670 may be combined into one module to be implemented, or any one of the modules may be split into a plurality of modules, according to embodiments of the present disclosure. Alternatively, at least some of the functionality of one or more of the modules may be combined with at least some of the functionality of other modules and implemented in one module. According to embodiments of the present disclosure, at least one of the receiving module 610, the first storing module 620, the second storing module 630, the pulling module 640, the deleting module 650, the detail data obtaining module 660, and the ending module 670 may be implemented at least in part as hardware circuitry, such as a Field Programmable Gate Array (FPGA), a Programmable Logic Array (PLA), a system-on-chip, a system-on-substrate, a system-in-package, an Application Specific Integrated Circuit (ASIC), or in hardware or firmware in any other reasonable manner of integrating or packaging the circuitry, or in any one of or any suitable combination of software, hardware, and firmware. Alternatively, at least one of the receiving module 610, the first storing module 620, the second storing module 630, the pulling module 640, the deleting module 650, the detail data obtaining module 660, and the ending module 670 may be at least partially implemented as a computer program module, which may perform the corresponding functions when executed.
Fig. 7 schematically illustrates a block diagram of an electronic device adapted to implement a message delay processing method according to an embodiment of the disclosure.
As shown in fig. 7, an electronic device 700 according to an embodiment of the present disclosure includes a processor 701 that can perform various appropriate actions and processes according to a program stored in a Read Only Memory (ROM) 702 or a program loaded from a storage section 708 into a Random Access Memory (RAM) 703. The processor 701 may include, for example, a general purpose microprocessor (e.g., a CPU), an instruction set processor and/or an associated chipset and/or a special purpose microprocessor (e.g., an Application Specific Integrated Circuit (ASIC)), or the like. The processor 701 may also include on-board memory for caching purposes. The processor 701 may comprise a single processing unit or a plurality of processing units for performing different actions of the method flows according to embodiments of the disclosure.
In the RAM 703, various programs and data necessary for the operation of the electronic apparatus 700 are stored. The processor 701, the ROM 702, and the RAM 703 are connected to each other through a bus 704. The processor 701 performs various operations of the method flow according to the embodiments of the present disclosure by executing programs in the ROM 702 and/or the RAM 703. Note that the program may be stored in one or more memories other than the ROM 702 and the RAM 703. The processor 701 may also perform various operations of the method flow according to embodiments of the present disclosure by executing programs stored in the one or more memories.
According to an embodiment of the present disclosure, the electronic device 700 may further include an input/output (I/O) interface 705, the input/output (I/O) interface 705 also being connected to the bus 704. The electronic device 700 may also include one or more of the following components connected to the I/O interface 705: an input section 706 including a keyboard, a mouse, and the like; an output section 707 including a cathode ray tube (CRT), a liquid crystal display (LCD), a speaker, and the like; a storage section 708 including a hard disk or the like; and a communication section 709 including a network interface card such as a LAN card, a modem, or the like. The communication section 709 performs communication processing via a network such as the Internet. The drive 710 is also connected to the I/O interface 705 as needed. A removable medium 711, such as a magnetic disk, an optical disk, a magneto-optical disk, or a semiconductor memory, is mounted on the drive 710 as necessary, so that a computer program read therefrom is installed into the storage section 708 as necessary.
The present disclosure also provides a computer-readable storage medium that may be embodied in the apparatus/device/system described in the above embodiments; or may exist alone without being assembled into the apparatus/device/system. The computer-readable storage medium carries one or more programs which, when executed, implement methods in accordance with embodiments of the present disclosure.
According to embodiments of the present disclosure, the computer-readable storage medium may be a non-volatile computer-readable storage medium, which may include, for example, but is not limited to: a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this disclosure, a computer-readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. For example, according to embodiments of the present disclosure, the computer-readable storage medium may include ROM 702 and/or RAM 703 and/or one or more memories other than ROM 702 and RAM 703 described above.
Embodiments of the present disclosure also include a computer program product comprising a computer program containing program code for performing the methods shown in the flowcharts. The program code, when executed in a computer system, causes the computer system to perform the methods provided by embodiments of the present disclosure.
The above-described functions defined in the system/apparatus of the embodiments of the present disclosure are performed when the computer program is executed by the processor 701. The systems, apparatus, modules, units, etc. described above may be implemented by computer program modules according to embodiments of the disclosure.
In one embodiment, the computer program may be carried on a tangible storage medium such as an optical storage device or a magnetic storage device. In another embodiment, the computer program may also be transmitted and distributed over a network medium in the form of a signal, downloaded and installed via the communication section 709, and/or installed from the removable medium 711. The program code contained in the computer program may be transmitted using any appropriate medium, including but not limited to wireless, wired, or any suitable combination of the foregoing.
According to embodiments of the present disclosure, program code for carrying out computer programs provided by embodiments of the present disclosure may be written in any combination of one or more programming languages; in particular, such computer programs may be implemented in high-level procedural and/or object-oriented programming languages, and/or assembly/machine languages. Programming languages include, but are not limited to, Java, C++, Python, C, or similar programming languages. The program code may execute entirely on the user's computing device, partly on the user's device, partly on a remote computing device, or entirely on the remote computing device or server. In the case of a remote computing device, the remote computing device may be connected to the user's computing device through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computing device (e.g., connected via the Internet using an Internet service provider).
The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams or flowchart illustration, and combinations of blocks in the block diagrams or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
Those skilled in the art will appreciate that the features recited in the various embodiments of the disclosure and/or in the claims may be provided in a variety of combinations and/or combinations, even if such combinations or combinations are not explicitly recited in the disclosure. In particular, the features recited in the various embodiments of the present disclosure and/or the claims may be variously combined and/or combined without departing from the spirit and teachings of the present disclosure. All such combinations and/or combinations fall within the scope of the present disclosure.
The embodiments of the present disclosure are described above. However, these examples are for illustrative purposes only and are not intended to limit the scope of the present disclosure. Although the embodiments are described above separately, this does not mean that the measures in the embodiments cannot be used advantageously in combination. The scope of the disclosure is defined by the appended claims and equivalents thereof. Various alternatives and modifications can be made by those skilled in the art without departing from the scope of the disclosure, and such alternatives and modifications are intended to fall within the scope of the disclosure.

Claims (11)

1. A message delay processing method, comprising:
receiving a delay message from an application program, wherein the delay message comprises a message identifier and detail data, and the message identifier corresponds to a delay time stamp;
Storing the message identification of the delay message into a first queue;
storing the detail data of the delay message into a second queue;
triggering an expiration event of the delay time stamp, pulling a message identifier in the first queue, and storing the message identifier in a third queue;
deleting the message identification in the first queue;
acquiring the detail data in the second queue based on the message identification in the third queue, wherein the detail data is consumed by a consumer; and
and deleting the detail data in the second queue and the message identification in the third queue under the condition that the consumer successfully consumes the detail data.
2. The method of claim 1, wherein after the obtaining the detail data in the second queue based on the message identification in the third queue, wherein the detail data is consumed by a consumer, the method further comprises:
pulling the message identification in the third queue to store in the first queue under the condition that the consumer fails to consume the detail data; and
and deleting the message identification in the third queue.
3. The method according to claim 1 or 2, wherein the method further comprises:
checking, according to a preset time period, whether a delay time stamp corresponding to the message identification in the third queue exceeds an expiration threshold;
acquiring the detail data in the second queue based on the message identification in the third queue under the condition that the delay time stamp exceeds the expiration threshold, wherein the detail data is consumed by a consumer; and
and deleting the detail data in the second queue and the message identification in the third queue under the condition that the consumer successfully consumes the detail data.
4. The method of claim 1, wherein the second queue is established based on a Hash structure formed from a Key-Value pair comprising a Hash Key in which a message identification is stored and a Value in which detail data is stored,
the obtaining the detail data in the second queue based on the message identification in the third queue, wherein the detail data is consumed by a consumer, comprises:
querying the Hash Key of the second queue based on the message identification; and
determining the detail data stored in the corresponding Value based on the Hash Key.
5. The method of claim 2, wherein,
the pulling of the message identification in the first queue to be stored in the third queue, and the deleting of the message identification in the first queue, are one atomic operation; and
the pulling of the message identification in the third queue to be stored in the first queue, and the deleting of the message identification in the third queue, are one atomic operation.
6. The method of any one of claims 1, 2, 4 and 5, wherein,
the first queue, the second queue and the third queue are uniformly maintained by a preset number of threads; or
the first queue, the second queue and the third queue are maintained by threads in one-to-one correspondence with the first queue, the second queue and the third queue.
7. The method of any one of claims 1, 2, 4 and 5, wherein,
the first queue is established based on the structure of the ordered set, and the third queue is established based on the structure of the normal set.
8. A message delay processing apparatus comprising:
the receiving module is used for receiving a delay message from an application program, wherein the delay message comprises a message identifier and detail data, and the message identifier corresponds to a delay time stamp;
the first storing module is used for storing the message identification of the delay message into a first queue;
The second storing module is used for storing the detail data of the delay message into a second queue;
the pulling module is used for triggering an expiration event of the delay time stamp, and pulling the message identifier in the first queue to be stored in a third queue;
the deleting module is used for deleting the message identification in the first queue;
a detail data acquisition module, configured to acquire the detail data in the second queue based on a message identifier in the third queue, where the detail data is consumed by a consumer; and
and the ending module is used for deleting the detail data in the second queue and the message identification in the third queue under the condition that the consumer successfully consumes the detail data.
9. An electronic device, comprising:
one or more processors;
storage means for storing one or more programs,
wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to perform the method of any of claims 1-7.
10. A computer readable storage medium having stored thereon executable instructions which, when executed by a processor, cause the processor to perform the method according to any of claims 1-7.
11. A computer program product comprising a computer program which, when executed by a processor, implements the method according to any one of claims 1 to 7.
CN202310675350.7A 2023-06-08 2023-06-08 Message delay processing method, device, equipment, medium and program product Pending CN116701020A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310675350.7A CN116701020A (en) 2023-06-08 2023-06-08 Message delay processing method, device, equipment, medium and program product


Publications (1)

Publication Number Publication Date
CN116701020A true CN116701020A (en) 2023-09-05

Family

ID=87828691

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310675350.7A Pending CN116701020A (en) 2023-06-08 2023-06-08 Message delay processing method, device, equipment, medium and program product

Country Status (1)

Country Link
CN (1) CN116701020A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117194549A (en) * 2023-11-07 2023-12-08 上海柯林布瑞信息技术有限公司 Data transmission method and device based on task data configuration
CN117194549B (en) * 2023-11-07 2024-01-26 上海柯林布瑞信息技术有限公司 Data transmission method and device based on task data configuration

Similar Documents

Publication Publication Date Title
US10891175B2 (en) System having in-memory buffer service, temporary events file storage system and events file uploader service
WO2021180025A1 (en) Message processing method and apparatus, electronic device and medium
CN107729139B (en) Method and device for concurrently acquiring resources
US10831776B2 (en) On-demand file synchronization
CN113254466B (en) Data processing method and device, electronic equipment and storage medium
US9940598B2 (en) Apparatus and method for controlling execution workflows
US20200201837A1 (en) Live record invalidation
CN109901918B (en) Method and device for processing overtime task
CN116701020A (en) Message delay processing method, device, equipment, medium and program product
CN111478781A (en) Message broadcasting method and device
CN112181948A (en) Database operation statement processing method and device, electronic equipment and medium
CN116431290A (en) Job scheduling method, apparatus, device, medium and program product
CN113824675B (en) Method and device for managing login state
CN114168607A (en) Global serial number generation method, device, equipment, medium and product
CN112235332B (en) Method and device for switching reading and writing of clusters
CN113742355A (en) Method, device, equipment and computer readable medium for updating inventory
CN112699116A (en) Data processing method and system
CN111176959A (en) Early warning method, system and storage medium for cross-domain application server
CN114915659B (en) Network request processing method and device, electronic equipment and storage medium
CN115550226A (en) Flow monitoring method and device of distributed system
CN117827874A (en) Data query method and electronic equipment
CN116582594A (en) Method, apparatus, electronic device and computer readable medium for accessing service
CN117082149A (en) Transaction processing method, device, electronic equipment and computer readable medium
CN116167835A (en) Service processing method, device, electronic equipment and computer readable medium
CN116450622A (en) Method, apparatus, device and computer readable medium for data warehouse entry

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination