CN114885018B - Message pushing method, device, equipment and storage medium based on double queues


Info

Publication number
CN114885018B
CN114885018B (application CN202210708982.4A)
Authority
CN
China
Prior art keywords
message
queue
messages
information
message queue
Prior art date
Legal status
Active
Application number
CN202210708982.4A
Other languages
Chinese (zh)
Other versions
CN114885018A (en)
Inventor
庄志辉
Current Assignee
Ping An Property and Casualty Insurance Company of China Ltd
Original Assignee
Ping An Property and Casualty Insurance Company of China Ltd
Priority date
Filing date
Publication date
Application filed by Ping An Property and Casualty Insurance Company of China Ltd filed Critical Ping An Property and Casualty Insurance Company of China Ltd
Priority to CN202210708982.4A priority Critical patent/CN114885018B/en
Publication of CN114885018A publication Critical patent/CN114885018A/en
Application granted granted Critical
Publication of CN114885018B publication Critical patent/CN114885018B/en

Classifications

    • H04L47/56: Queue scheduling implementing delay-aware scheduling
    • H04L47/6235: Queue scheduling with variable service order
    • H04L47/6275: Queue scheduling based on priority for service slots or service orders
    • Y02D30/50: Reducing energy consumption in wire-line communication networks

Landscapes

  • Engineering & Computer Science
  • Computer Networks & Wireless Communication
  • Signal Processing
  • Information Transfer Between Computers
  • Data Exchanges In Wide-Area Networks

Abstract

The invention belongs to the field of computer technology and discloses a message pushing method, device, and equipment based on double queues, together with a storage medium. The method comprises the following steps: acquiring queue messages in real time and placing them into a first message queue in sequence; accumulating the first message quantity in the first message queue and judging, according to that quantity, whether a flow control limit condition is reached; if so, transferring the queue messages exceeding the flow control limit condition from the first message queue to a second message queue; at preset intervals, calculating the second message quantity of each message type in the second message queue, comparing it with a first blocking threshold, and judging whether the second message quantity is greater than or equal to the first blocking threshold; if so, triggering the first message queue to execute a message rearrangement mechanism that classifies queue messages by priority, transferring the queue messages to the second message queue according to the classification result, and consuming the messages there. In this way, message blocking can be effectively reduced.

Description

Message pushing method, device, equipment and storage medium based on double queues
Technical Field
The present invention relates to the field of computer technologies, and in particular, to a method, an apparatus, a device, and a storage medium for pushing a message based on dual queues.
Background
Message pushing is generally performed through an asynchronous pushing scheme: without affecting the service logic, asynchronous pushing allows push messages to be sent in parallel and does not block the service flow. An asynchronous pushing scheme usually relies on a message queue: the service system triggers the message logic, and when the message is written to the database, a producer sends a queue message to the message queue; a consumer in the message consumption center consumes messages in queue order and calls a third-party system to perform the actual push. However, if the concurrent push volume is large, consumption is slow, or messages are blocked for a period of time, important messages are not consumed in time, messages are delayed, and important messages arrive slowly, affecting the user's service.
Disclosure of Invention
The invention provides a message pushing method, device, equipment, and storage medium based on double queues, which can effectively reduce message blocking, dynamically adjust the message consumption order, and ensure that important messages reach users in time.
In order to solve the above technical problems, one technical scheme adopted by the invention is to provide a message pushing method based on double queues, comprising the following steps:
acquiring queue messages in real time and storing them in a first message queue in sequence;
accumulating the first message quantity in the first message queue, and judging whether the first message queue reaches a flow control limit condition according to the first message quantity;
if yes, transferring the queue message exceeding the flow control limit condition from the first message queue to a second message queue;
calculating, at preset intervals, a second message number of each message type in the second message queue, comparing the second message number with a first blocking threshold, and judging whether the second message number is greater than or equal to the first blocking threshold;
if yes, triggering the first message queue to execute a message rearrangement mechanism to classify the priority of the queue message, transferring the queue message to the second message queue according to the classification result, and consuming the message in the second message queue.
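As an informal illustration only, not code from the patent, the flow control and blocking-check steps above might be sketched as follows; the names `FLOW_LIMIT`, `BLOCK_THRESHOLD_1`, and the `deque`-based queues are all assumptions for this sketch:

```python
from collections import deque

FLOW_LIMIT = 100         # assumed flow control limit for the first queue
BLOCK_THRESHOLD_1 = 100  # assumed first blocking threshold per message type

first_queue = deque()    # stand-in for the first message queue
second_queue = deque()   # stand-in for the second message queue

def enqueue(message):
    """Steps 1-3: store an incoming queue message; once the flow control
    limit is exceeded, the excess tail message moves to the second queue."""
    first_queue.append(message)
    if len(first_queue) > FLOW_LIMIT:
        second_queue.append(first_queue.pop())  # transfer the tail message

def blocked():
    """Step 4: count second-queue messages per type and compare each count
    with the first blocking threshold."""
    counts = {}
    for msg in second_queue:
        counts[msg["type"]] = counts.get(msg["type"], 0) + 1
    return any(n >= BLOCK_THRESHOLD_1 for n in counts.values())
```

If `blocked()` returns true, the fifth step, the rearrangement mechanism, would be triggered.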
According to one embodiment of the present invention, when the message reordering mechanism transfers the queue messages from the first message queue to the second message queue, the urgency of the queue messages is identified, the queue messages with the urgency greater than a first preset value are preferentially transferred to the second message queue, and the queue messages with the urgency less than or equal to the first preset value are marked and returned to the first message queue.
According to an embodiment of the present invention, after determining whether the first message queue reaches the flow control limit condition according to the first message number, the method further includes:
and if the first message queue does not reach the flow control limiting condition, message consumption is carried out in the first message queue according to a first-in first-out mechanism.
According to one embodiment of the present invention, after comparing the second message number with the first blocking threshold, determining whether the second message number is greater than or equal to the first blocking threshold further includes:
if not, message consumption is carried out in the first message queue and the second message queue according to a first-in first-out mechanism respectively.
According to an embodiment of the present invention, after the calculating the second message number of each message type in the second message queue, the interval preset duration further includes:
comparing the second message number with a second blocking threshold, and judging whether the second message number is greater than or equal to the second blocking threshold;
if yes, triggering the first message queue to execute a message rearrangement mechanism to classify the priority of the queue message, transferring the queue message to the second message queue according to the classification result, and triggering the first message queue and/or the second message queue to execute a message discarding strategy.
According to an embodiment of the present invention, after the obtaining the queue message in real time, before the sequentially depositing the queue message in the first message queue, the method further includes:
and acquiring the message type of the queue message, and executing a message discarding strategy on the queue message with the message type of non-urgent type.
According to one embodiment of the present invention, before the obtaining the queue message in real time and depositing the queue message in the first message queue in sequence, the method further includes:
acquiring a message sent by a message producer and identifying a template ID of the message;
and acquiring a corresponding message template according to the template ID, and reassembling the message according to the message template to obtain the queue message, wherein the queue message comprises the urgency, the message type, and the message stacking capacity.
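A hypothetical sketch of this reassembly step; the template table, ID `T001`, and field names are invented for illustration, and only the license-plate template text is the example given in the description:

```python
# Hypothetical template table; the IDs and field names are assumptions.
TEMPLATES = {
    "T001": {
        "text": "license plate number is {plate}, dangerous, please deal with in time",
        "urgency": 3,          # assumed: a larger number means more urgent
        "type": "urgent",
        "stack_capacity": 100, # max stacked messages of this type in queue 2
    },
}

def assemble(raw):
    """Rebuild a producer message into a queue message via its template ID."""
    tpl = TEMPLATES[raw["template_id"]]
    return {
        "content": tpl["text"].format(**raw["fields"]),
        "urgency": tpl["urgency"],
        "type": tpl["type"],
        "stack_capacity": tpl["stack_capacity"],
    }
```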
In order to solve the technical problems, the invention adopts another technical scheme that: provided is a message pushing device based on double queues, comprising:
the acquisition module is used for acquiring queue messages in real time and storing them into a first message queue in sequence;
the first judging module is used for accumulating the first message quantity in the first message queue and judging whether the first message queue reaches the flow control limiting condition according to the first message quantity;
the first execution module is used for transferring the queue message exceeding the flow control limit condition from the first message queue to a second message queue if the flow control limit condition is met;
the second judging module is used for calculating the second message quantity of each message type in the second message queue at intervals of preset duration, comparing the second message quantity with a first blocking threshold value and judging whether the second message quantity is larger than or equal to the first blocking threshold value;
and the second execution module is used for triggering the first message queue to execute a message rearrangement mechanism to classify the priority of the queue message, transferring the queue message to the second message queue according to the classification result, and consuming the message in the second message queue.
In order to solve the technical problems, the invention adopts a further technical scheme that: there is provided a computer device comprising: the message pushing system comprises a memory, a processor and a computer program which is stored in the memory and can run on the processor, wherein the processor realizes the message pushing method based on the double queues when executing the computer program.
In order to solve the technical problems, the invention adopts a further technical scheme that: there is provided a computer storage medium having stored thereon a computer program which, when executed by a processor, implements the above-described dual queue based message pushing method.
The beneficial effects of the invention are as follows: through the double message queue arrangement, when the second message quantity in the second message queue is greater than or equal to the first blocking threshold, the first message queue can be triggered to execute a message rearrangement mechanism, dynamically adjusting the message consumption order so that queue messages of higher urgency are preferentially transferred to the second message queue and consumed, ensuring that important messages reach users in time.
Drawings
FIG. 1 is a flow chart of a message pushing method based on double queues according to an embodiment of the present invention;
FIG. 2 is a flow chart of a message pushing method based on double queues according to another embodiment of the present invention;
FIG. 3 is a flow chart of a message pushing method based on double queues according to another embodiment of the present invention;
FIG. 4 is a schematic diagram of a message pushing device based on dual queues according to an embodiment of the present invention;
FIG. 5 is a schematic diagram of a computer device according to an embodiment of the present invention;
fig. 6 is a schematic structural view of a computer storage medium according to an embodiment of the present invention.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and fully with reference to the accompanying drawings, in which it is evident that the embodiments described are only some, but not all embodiments of the invention. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
The terms "first," "second," "third," and the like in this disclosure are used for descriptive purposes only and are not to be construed as indicating or implying a relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defining "a first", "a second", and "a third" may explicitly or implicitly include at least one such feature. In the description of the present invention, the meaning of "plurality" means at least two, for example, two, three, etc., unless specifically defined otherwise. All directional indications (such as up, down, left, right, front, back … …) in embodiments of the present invention are merely used to explain the relative positional relationship, movement, etc. between the components in a particular gesture (as shown in the drawings), and if the particular gesture changes, the directional indication changes accordingly. Furthermore, the terms "comprise" and "have," as well as any variations thereof, are intended to cover a non-exclusive inclusion. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to only those listed steps or elements but may include other steps or elements not listed or inherent to such process, method, article, or apparatus.
Reference herein to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment may be included in at least one embodiment of the invention. The appearances of such phrases in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. Those of skill in the art will explicitly and implicitly appreciate that the embodiments described herein may be combined with other embodiments.
Fig. 1 is a flow chart of a message pushing method based on double queues according to an embodiment of the invention. It should be noted that, provided substantially the same result is obtained, the method of the present invention is not limited to the flow sequence shown in fig. 1. As shown in fig. 1, the method comprises the following steps:
step S101: and acquiring the queue information in real time and storing the queue information in the first information queue in sequence.
In step S101, after the message producer generates a message, the sending of the message is typically triggered by the business system or a JOB task. When the message is written to the database, the initiator of the message sending sends the queue message; the system then acquires the queue message in real time and stores it in the first message queue in sequence.
In one possible embodiment, the message generated by the message producer may carry a template ID, from which the message can be assembled before being written to the database. Specifically, before acquiring the queue messages in real time and sequentially storing them in the first message queue, the method further comprises: acquiring a message sent by a message producer and identifying the template ID of the message; and acquiring the corresponding message template according to the template ID and reassembling the message according to the template to obtain a queue message. A message template is, for example, "license plate number is XXXXXX, dangerous, please deal with in time", where the license plate number is replaceable message content. The queue message carries fields such as the urgency, the message type, and the message stacking capacity. The urgency may be a number, e.g. 1, 2, 3; the larger the number, the higher the urgency. Message types may include urgent and non-urgent: urgent messages are business messages, while non-urgent messages, such as system messages and notification messages, can be deferred when messages are blocked, whereas urgent messages must be handled first. The message stacking capacity is the maximum number of stacked messages allowed for a message type in the second message queue.
Step S102: and accumulating the first message quantity in the first message queue, and judging whether the first message queue reaches the flow control limit condition according to the first message quantity.
In step S102, the flow control limit condition may be the maximum number of messages that the first message queue can accommodate. In one embodiment, referring to fig. 2, if the first message queue does not reach the flow control limit condition, step S106 is executed and messages are consumed from the first message queue according to a first-in first-out mechanism. For example, with the flow control limit set to 100, if the first message quantity exceeds 100, step S103 is performed; otherwise the queue messages are consumed from the first message queue first-in first-out.
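A minimal sketch of this branch (all names are assumptions, not from the patent):

```python
from collections import deque

FLOW_LIMIT = 100  # the example flow control limit used in the text

def step(first_queue, second_queue, consume):
    """Choose between FIFO consumption (step S106) and transfer (step S103)."""
    if len(first_queue) <= FLOW_LIMIT:
        if first_queue:
            consume(first_queue.popleft())      # first-in first-out
    else:
        second_queue.append(first_queue.pop())  # excess tail moves to queue 2
```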
Step S103: if yes, transferring the queue messages exceeding the flow control limit condition from the first message queue to the second message queue.
In step S103, assuming that the flow control constraint is 100, after a new queue message enters the first message queue from the head of the queue, the first message number of the first message queue reaches 101, and a queue message at the end of the queue is transferred from the first message queue to the second message queue.
Step S104: and calculating the second message quantity of each message type in the second message queue at intervals of preset duration, comparing the second message quantity with the first blocking threshold value, and judging whether the second message quantity is larger than or equal to the first blocking threshold value.
In step S104, since each queue message carries its message type, urgency, and message stacking capacity when it enters the first message queue, the message type of each queue message can be read directly in the second message queue. The first blocking threshold is the message stacking capacity, one basis for judging the degree of message blocking; for example, the stacking capacity for urgent messages is 100, and comparing the second message quantity against this threshold determines whether the second message queue is slightly blocked. This embodiment may use a JOB timed task to compute the blocking degree of the current message queue from the per-type message counts of the second message queue over a period of time. For example, the number of messages of each type flowing into the second message queue within one minute is counted; if 100 urgent messages flow into the second message queue within one minute, the second message queue is judged to be slightly blocked, and a "slightly blocked" flag may be set in Redis. If no more than 100 urgent messages enter the second message queue within one minute, the second message queue is judged not to be blocked.
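A sketch of such a timed check, with a plain dictionary standing in for the Redis flag mentioned in the text; the threshold value and flag names are assumptions:

```python
from collections import Counter

BLOCK_THRESHOLD_1 = 100  # assumed per-type stacking capacity ("slight blocking")
flags = {}               # stand-in for the Redis blocking flag

def timed_check(inflow_last_minute):
    """JOB-style timed task: count messages per type that entered the second
    queue during the interval and record which types are slightly blocked."""
    counts = Counter(msg["type"] for msg in inflow_last_minute)
    for mtype, n in counts.items():
        if n >= BLOCK_THRESHOLD_1:
            flags[mtype] = "slightly blocked"
        else:
            flags.pop(mtype, None)  # no blocking for this type
    return dict(flags)
```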
In this embodiment, if the second message quantity of some message type is greater than or equal to the first blocking threshold, indicating that the second message queue is slightly blocked, step S105 is performed. In one embodiment, referring to fig. 2, if the second message quantity of every message type is smaller than the first blocking threshold, indicating that no blocking has occurred in the second message queue, step S107 is performed and messages are consumed from the first and second message queues, each according to a first-in first-out mechanism. In this way, when no blocking occurs, both queues consume in parallel, increasing the message consumption rate.
Step S105: if yes, triggering the first message queue to execute a message rearrangement mechanism to classify the priority of the queue messages, transferring the queue messages to the second message queue according to the classification result, and consuming the messages in the second message queue.
In step S105, the message rearrangement mechanism works as follows: when the first message queue transfers queue messages to the second message queue, the urgency of each queue message is identified; queue messages with urgency greater than the first preset value are preferentially transferred to the second message queue, while queue messages with urgency less than or equal to the first preset value are marked and returned to the first message queue. For example, with a first preset value of 2, when the second message quantity is greater than or equal to the first blocking threshold, the first message queue sends queue messages with urgency greater than 2 to the second message queue and returns queue messages with urgency less than or equal to 2 to the first message queue, marking them as delayed. While the first message queue executes the rearrangement mechanism it does not consume messages itself; it only transfers the more urgent queue messages to the second message queue, which continues consuming. After several rounds of rearrangement, the more urgent queue messages are consumed preferentially under the second queue's first-in first-out mechanism, which reduces the risk that important push messages reach clients late because of consumption blocking and improves user experience.
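One pass of the rearrangement might look like this (a sketch only; the cutoff value 2 follows the example above, the other names are assumptions):

```python
from collections import deque

URGENCY_CUTOFF = 2  # the "first preset value" from the example

def rearrange(first_queue, second_queue):
    """Queue 1 sorts without consuming: urgent messages move to queue 2,
    the rest are marked as delayed and returned to queue 1."""
    remaining = deque()
    while first_queue:
        msg = first_queue.popleft()
        if msg["urgency"] > URGENCY_CUTOFF:
            second_queue.append(msg)    # will be consumed FIFO in queue 2
        else:
            msg["delayed"] = True       # marked as delayed sending
            remaining.append(msg)
    first_queue.extend(remaining)
```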
According to the dual-queue message pushing method of this embodiment, with two message queues in place, when the second message quantity in the second message queue is greater than or equal to the first blocking threshold, the first message queue can be triggered to execute a message rearrangement mechanism, dynamically adjusting the message consumption order so that the more urgent queue messages are transferred to the second message queue and consumed first, ensuring that important messages reach users in time and improving user experience.
Fig. 3 is a flow chart of a message pushing method based on double queues according to another embodiment of the present invention. It should be noted that, provided substantially the same result is obtained, the method of the present invention is not limited to the flow sequence shown in fig. 3. As shown in fig. 3, the method comprises the following steps:
step S301: and acquiring the queue information in real time and storing the queue information in the first information queue in sequence.
In this embodiment, step S301 in fig. 3 is similar to step S101 in fig. 1, and is not described herein for brevity.
Step S302: and accumulating the first message quantity in the first message queue, and judging whether the first message queue reaches the flow control limit condition according to the first message quantity.
In this embodiment, step S302 in fig. 3 is similar to step S102 in fig. 1, and is not described herein for brevity.
Step S303: if yes, transferring the queue messages exceeding the flow control limit condition from the first message queue to the second message queue.
In this embodiment, step S303 in fig. 3 is similar to step S103 in fig. 1, and is not described herein for brevity.
Step S304: calculating, at preset intervals, the second message quantity of each message type in the second message queue, comparing it with a second blocking threshold, and judging whether the second message quantity is greater than or equal to the second blocking threshold.
In step S304, the second blocking threshold is another basis for judging the degree of message blocking and is greater than the first blocking threshold; comparing the second message quantity with it determines whether the second message queue is severely blocked. This embodiment may use a JOB timed task to compute the blocking degree of the current message queue from the per-type message counts of the second message queue over a period of time. For example, the number of messages of each type flowing into the second message queue within one minute is counted; if 1000 urgent messages flow into the second message queue within one minute, the second message queue is judged to be severely blocked, and a "severely blocked" flag may be set in Redis.
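The two thresholds together give a three-level classification; a sketch, with the threshold values taken from the examples in the text and everything else assumed:

```python
from collections import Counter

BLOCK_THRESHOLD_1 = 100   # slight blocking (example value)
BLOCK_THRESHOLD_2 = 1000  # severe blocking; larger than the first threshold

def blocking_level(inflow_last_minute, mtype):
    """Classify the blocking degree of one message type over the interval.
    The severe threshold is checked first since it is the larger one."""
    n = Counter(m["type"] for m in inflow_last_minute)[mtype]
    if n >= BLOCK_THRESHOLD_2:
        return "severely blocked"
    if n >= BLOCK_THRESHOLD_1:
        return "slightly blocked"
    return "not blocked"
```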
Step S305: if yes, triggering the first message queue to execute a message rearrangement mechanism to classify the priority of the queue messages, transferring the queue messages to the second message queue according to the classification result, and triggering the first message queue and/or the second message queue to execute a message discarding strategy.
In step S305, if the second message queue is severely blocked, then when the first message queue transfers queue messages to the second message queue, the urgency of each queue message is identified to classify the queue messages by priority: the more urgent queue messages are preferentially transferred to the second message queue, while the less urgent ones are marked and returned to the first message queue. At the same time, the message type of the queue messages in the first message queue and/or the second message queue is identified, and non-urgent messages, such as system messages and notification messages, are discarded. This effectively reduces message blocking, improves the consumption efficiency of the more urgent queue messages, reduces the phenomenon of important push messages reaching clients late because of consumption blocking, and avoids harming user experience.
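The discarding strategy for non-urgent messages might be sketched as follows (an illustration only, not the patent's implementation):

```python
from collections import deque

def discard_non_urgent(queue):
    """Message discarding strategy under severe blocking: drop system and
    notification (non-urgent) messages so urgent ones consume faster."""
    kept = deque(msg for msg in queue if msg["type"] == "urgent")
    dropped = len(queue) - len(kept)
    queue.clear()
    queue.extend(kept)
    return dropped
```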
In one embodiment, queue messages are classified and the discarding strategy is executed when messages are written to the database, so that the types of queue messages entering the first message queue are controlled at the source, further improving the consumption efficiency of the more urgent queue messages. Specifically, after acquiring the queue messages in real time and before sequentially storing them in the first message queue, the method further comprises: acquiring the message type of each queue message and executing a message discarding strategy on queue messages whose type is non-urgent.
According to the dual-queue message pushing method of this embodiment, by identifying severe blocking, the first message queue is triggered to execute the message rearrangement mechanism to classify the queue messages by priority and transfer them to the second message queue according to the classification result, while the first message queue and/or the second message queue is triggered to execute the message discarding strategy. This effectively reduces message blocking, improves the consumption efficiency of the more urgent queue messages, reduces the phenomenon of important push messages reaching clients late because of consumption blocking, and avoids harming user experience.
Fig. 4 is a schematic structural diagram of a message pushing device based on dual queues according to an embodiment of the present invention. As shown in fig. 4, the apparatus 40 includes an acquisition module 41, a first determination module 42, a first execution module 43, a second determination module 44, and a second execution module 45.
The acquiring module 41 is configured to acquire queue messages in real time and store them in the first message queue in sequence;
the first judging module 42 is configured to accumulate the first message number in the first message queue, and judge whether the first message queue reaches the flow control limit condition according to the first message number;
the first execution module 43 is configured to transfer the queue message exceeding the flow control limit condition from the first message queue to the second message queue if yes;
the second judging module 44 is configured to calculate a second message number of each message type in the second message queue at intervals of a preset duration, compare the second message number with the first blocking threshold, and judge whether the second message number is greater than or equal to the first blocking threshold;
the second execution module 45 is configured to trigger the first message queue to execute the message reordering mechanism to classify the priority of the queue message if yes, transfer the queue message to the second message queue according to the classification result, and consume the message in the second message queue.
Referring to fig. 5, fig. 5 is a schematic structural diagram of a computer device according to an embodiment of the invention. As shown in fig. 5, the computer device 50 includes a processor 51 and a memory 52 coupled to the processor 51.
The memory 52 stores program instructions for implementing the dual queue based message pushing method described in any of the embodiments above.
The message pushing method based on the double queues comprises the following steps:
acquiring queue messages in real time and storing the queue messages in a first message queue in sequence;
accumulating the first message quantity in the first message queue, and judging whether the first message queue reaches the flow control limit condition according to the first message quantity;
if yes, transferring the queue messages exceeding the flow control limit condition from the first message queue to the second message queue;
calculating a second message number of each message type in a second message queue at intervals of a preset duration, comparing the second message number with a first blocking threshold value, and judging whether the second message number is larger than or equal to the first blocking threshold value;
if yes, triggering the first message queue to execute a message rearrangement mechanism to classify the priority of the queue message, transferring the queue message to the second message queue according to the classification result, and consuming the message in the second message queue.
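The five steps above can be sketched as one pass of a push cycle. Field names (`type`, `priority`) and the parameter values are assumptions for illustration, not part of the patent:

```python
from collections import Counter, deque

def count_by_type(second_queue):
    """Step 4: compute the per-type message count in the second queue."""
    return Counter(m["type"] for m in second_queue)

def push_cycle(first_queue, second_queue, flow_limit, block_threshold):
    """One pass over the five listed steps (assumed data shapes)."""
    # Steps 2-3: flow control -- overflow spills into the second queue.
    while len(first_queue) > flow_limit:
        second_queue.append(first_queue.popleft())
    # Step 4: per-type counts vs. the first blocking threshold.
    counts = count_by_type(second_queue)
    if any(n >= block_threshold for n in counts.values()):
        # Step 5: classify the first queue by priority and transfer.
        ordered = sorted(first_queue, key=lambda m: -m["priority"])
        first_queue.clear()
        second_queue.extend(ordered)
    # Consume from the head of the second queue (FIFO).
    return second_queue.popleft() if second_queue else None
```

In a real deployment the cycle would be driven by the "interval of a preset duration" rather than called once; the return value stands in for handing a message to the push channel.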
The processor 51 is configured to execute program instructions stored in the memory 52 to push messages.
The processor 51 may also be referred to as a CPU (Central Processing Unit). The processor 51 may be an integrated circuit chip with signal-processing capabilities. The processor 51 may also be a general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, discrete gate or transistor logic, or discrete hardware components. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor or the like.
Referring to fig. 6, fig. 6 is a schematic structural diagram of a computer storage medium according to an embodiment of the present invention. The computer storage medium according to the embodiment of the present invention stores a program file 61 capable of implementing all the methods described above, where the program file 61 may be stored in the form of a software product in the computer storage medium, and includes several instructions to cause a computer device (which may be a personal computer, a server, or a network device, etc.) or a processor (processor) to execute all or part of the steps of the methods described in the embodiments of the present invention.
The message pushing method based on the double queues comprises the following steps:
acquiring queue messages in real time and storing the queue messages in a first message queue in sequence;
accumulating the first message quantity in the first message queue, and judging whether the first message queue reaches the flow control limit condition according to the first message quantity;
if yes, transferring the queue messages exceeding the flow control limit condition from the first message queue to the second message queue;
calculating a second message number of each message type in a second message queue at intervals of a preset duration, comparing the second message number with a first blocking threshold value, and judging whether the second message number is larger than or equal to the first blocking threshold value;
if yes, triggering the first message queue to execute a message rearrangement mechanism to classify the priority of the queue message, transferring the queue message to the second message queue according to the classification result, and consuming the message in the second message queue.
The aforementioned computer storage medium includes: a USB flash drive, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, an optical disk, or various other media capable of storing program code, or a terminal device such as a computer, a server, a mobile phone, or a tablet.
In the several embodiments provided in the present invention, it should be understood that the disclosed systems, devices, and methods may be implemented in other manners. For example, the apparatus embodiments described above are merely illustrative: the division of units is merely a logical functional division, and there may be other divisions in actual implementation; e.g., multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. Furthermore, the mutual coupling, direct coupling, or communication connection shown or discussed may be an indirect coupling or communication connection via interfaces, devices, or units, and may be in electrical, mechanical, or other form.
In addition, each functional unit in the embodiments of the present invention may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
The foregoing describes only embodiments of the present invention and does not thereby limit the patent scope of the invention; all equivalent structures or equivalent process transformations made using the description and accompanying drawings of the present invention, or direct or indirect applications in other related technical fields, are likewise included within the patent protection scope of the invention.

Claims (8)

1. The message pushing method based on the double queues is characterized by comprising the following steps of:
acquiring queue messages in real time and storing the queue messages in a first message queue in sequence;
accumulating the first message quantity in the first message queue, and judging whether the first message queue reaches a flow control limit condition according to the first message quantity;
if yes, transferring the queue message exceeding the flow control limit condition from the first message queue to a second message queue;
calculating a second message number of each message type in the second message queue at intervals of preset duration, comparing the second message number with a first blocking threshold value, and judging whether the second message number is larger than or equal to the first blocking threshold value;
if yes, triggering the first message queue to execute a message rearrangement mechanism to classify the priority of the queue message, transferring the queue message to the second message queue according to the classification result and consuming the message in the second message queue;
after calculating the second message quantity of each message type in the second message queue, the interval preset duration further includes:
comparing the second message number with a second blocking threshold value, and judging whether the second message number is greater than or equal to the second blocking threshold value;
if yes, triggering the first message queue to execute a message rearrangement mechanism to classify the priority of the queue message, transferring the queue message to the second message queue according to the classification result, and triggering the first message queue and/or the second message queue to execute a message discarding strategy;
when the message rearrangement mechanism transfers the queue messages from the first message queue to the second message queue, the urgency of the queue messages is identified, the queue messages with urgency greater than a first preset value are preferentially transferred to the second message queue, and the queue messages with urgency less than or equal to the first preset value are marked and returned to the first message queue.
2. The message pushing method according to claim 1, wherein after determining whether the first message queue reaches the flow control limit condition according to the first message number, further comprising:
and if the first message queue does not reach the flow control limiting condition, message consumption is carried out in the first message queue according to a first-in first-out mechanism.
3. The message pushing method according to claim 1, wherein after comparing the second message number with a first blocking threshold, determining whether the second message number is greater than or equal to the first blocking threshold, further comprises:
if not, message consumption is carried out in the first message queue and the second message queue according to a first-in first-out mechanism respectively.
4. The message pushing method according to claim 1, wherein after the obtaining the queue message in real time, before the sequentially depositing the queue message in the first message queue, further comprises:
and acquiring the message type of the queue message, and executing a message discarding strategy on the queue message with the message type of non-urgent type.
5. The message pushing method according to claim 1, wherein before the obtaining the queue message in real time and depositing the queue message in the first message queue in sequence, the method further comprises:
acquiring a message sent by a message producer and identifying a template ID of the message;
and acquiring a corresponding message template according to the template ID, and reassembling the message according to the message template to obtain the queue message, wherein the queue message includes the urgency, the message type, and the message accumulation quantity.
6. A message pushing device based on double queues, comprising:
the acquisition module is used for acquiring queue messages in real time and storing the queue messages in a first message queue in sequence;
the first judging module is used for accumulating the first message quantity in the first message queue and judging whether the first message queue reaches the flow control limiting condition according to the first message quantity;
the first execution module is used for transferring the queue message exceeding the flow control limit condition from the first message queue to a second message queue if the flow control limit condition is met;
the second judging module is used for calculating the second message quantity of each message type in the second message queue at intervals of preset duration, comparing the second message quantity with a first blocking threshold value and judging whether the second message quantity is larger than or equal to the first blocking threshold value;
the second execution module is used for, if so, triggering the first message queue to execute a message rearrangement mechanism to classify the priority of the queue message, transferring the queue message to the second message queue according to the classification result, and consuming the message in the second message queue;
after calculating the second message quantity of each message type in the second message queue, the interval preset duration further includes:
comparing the second message number with a second blocking threshold value, and judging whether the second message number is greater than or equal to the second blocking threshold value;
if yes, triggering the first message queue to execute a message rearrangement mechanism to classify the priority of the queue message, transferring the queue message to the second message queue according to the classification result, and triggering the first message queue and/or the second message queue to execute a message discarding strategy;
when the message rearrangement mechanism transfers the queue messages from the first message queue to the second message queue, the urgency of the queue messages is identified, the queue messages with urgency greater than a first preset value are preferentially transferred to the second message queue, and the queue messages with urgency less than or equal to the first preset value are marked and returned to the first message queue.
7. A computer device, comprising: memory, a processor and a computer program stored on the memory and executable on the processor, wherein the processor implements the dual queue based message pushing method according to any one of claims 1-5 when executing the computer program.
8. A computer storage medium having stored thereon a computer program which, when executed by a processor, implements a dual queue based message pushing method according to any one of claims 1-5.
CN202210708982.4A 2022-06-22 2022-06-22 Message pushing method, device, equipment and storage medium based on double queues Active CN114885018B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210708982.4A CN114885018B (en) 2022-06-22 2022-06-22 Message pushing method, device, equipment and storage medium based on double queues

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210708982.4A CN114885018B (en) 2022-06-22 2022-06-22 Message pushing method, device, equipment and storage medium based on double queues

Publications (2)

Publication Number Publication Date
CN114885018A CN114885018A (en) 2022-08-09
CN114885018B true CN114885018B (en) 2023-08-29

Family

ID=82681722

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210708982.4A Active CN114885018B (en) 2022-06-22 2022-06-22 Message pushing method, device, equipment and storage medium based on double queues

Country Status (1)

Country Link
CN (1) CN114885018B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115865990B (en) * 2023-02-22 2023-04-18 广州机智云物联网科技有限公司 High-performance Internet of things platform message engine

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7895273B1 (en) * 2003-01-23 2011-02-22 Sprint Spectrum L.P. System and method for sorting instant messages
CN106708607A (en) * 2015-11-12 2017-05-24 阿里巴巴集团控股有限公司 Congestion control method and apparatus for message queue
CN110808922A (en) * 2019-10-29 2020-02-18 北京大米科技有限公司 Message processing method and device, storage medium and electronic equipment
CN111131082A (en) * 2019-12-25 2020-05-08 广东电科院能源技术有限责任公司 Charging facility data transmission dynamic control method and system
CN113467969A (en) * 2021-06-22 2021-10-01 上海星融汽车科技有限公司 Method for processing message accumulation
CN114428693A (en) * 2022-03-31 2022-05-03 季华实验室 Method and device for adjusting message priority, electronic equipment and storage medium


Also Published As

Publication number Publication date
CN114885018A (en) 2022-08-09

Similar Documents

Publication Publication Date Title
TWI510030B (en) System and method for performing packet queuing on a client device using packet service classifications
CA2557461C (en) Dual use counters for routing loops and spam detection
EP4175232A1 (en) Congestion control method and device
WO2019174536A1 (en) Congestion control method and network device
US8004976B2 (en) Monitoring, controlling, and preventing traffic congestion between processors
US20020101837A1 (en) Method and apparatus for efficient use of communication resources in a data communication system under overload conditions
CN108400927B (en) Message pushing method and device for high-concurrency messages
CN110784415B (en) ECN quick response method and device
WO2012149499A2 (en) Effective circuits in packet-switched networks
CN114885018B (en) Message pushing method, device, equipment and storage medium based on double queues
CN111324886A (en) Service request processing method and device and server
CN109660468A (en) A kind of port congestion management method, device and equipment
CN108512727A (en) A kind of determination method and device of central processing unit utilization rate
CN111949497A (en) Message queue system and message processing method based on message queue system
RU2641250C2 (en) Device and method of queue management
CN106792905B (en) Message processing method and base station
WO2021259321A1 (en) Storage scheduling method, device, and storage medium
CN117097679A (en) Aggregation method and device for network interruption and network communication equipment
TW201014295A (en) Controlling data flow through a data communications link
CN112351049B (en) Data transmission method, device, equipment and storage medium
US20210135999A1 (en) Packet Control Method, Flow Table Update Method, and Node Device
Rizzo et al. A study of speed mismatches between communicating virtual machines
WO2020248857A1 (en) Data congestion control and bandwidth prediction method
CN109347760B (en) Data sending method and device
CN112988417A (en) Message processing method and device, electronic equipment and computer readable medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant