CN114885018A - Message pushing method, device, equipment and storage medium based on double queues - Google Patents

Message pushing method, device, equipment and storage medium based on double queues

Info

Publication number
CN114885018A
Authority
CN
China
Prior art keywords
message
queue
messages
message queue
flow control
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202210708982.4A
Other languages
Chinese (zh)
Other versions
CN114885018B (en)
Inventor
庄志辉
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ping An Property and Casualty Insurance Company of China Ltd
Original Assignee
Ping An Property and Casualty Insurance Company of China Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ping An Property and Casualty Insurance Company of China Ltd filed Critical Ping An Property and Casualty Insurance Company of China Ltd
Priority to CN202210708982.4A priority Critical patent/CN114885018B/en
Publication of CN114885018A publication Critical patent/CN114885018A/en
Application granted granted Critical
Publication of CN114885018B publication Critical patent/CN114885018B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 47/00: Traffic control in data switching networks
    • H04L 47/50: Queue scheduling
    • H04L 47/56: Queue scheduling implementing delay-aware scheduling
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 47/00: Traffic control in data switching networks
    • H04L 47/50: Queue scheduling
    • H04L 47/62: Queue scheduling characterised by scheduling criteria
    • H04L 47/622: Queue service order
    • H04L 47/6235: Variable service order
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 47/00: Traffic control in data switching networks
    • H04L 47/50: Queue scheduling
    • H04L 47/62: Queue scheduling characterised by scheduling criteria
    • H04L 47/625: Queue scheduling characterised by scheduling criteria for service slots or service orders
    • H04L 47/6275: Queue scheduling characterised by scheduling criteria for service slots or service orders based on priority
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D 30/00: Reducing energy consumption in communication networks
    • Y02D 30/50: Reducing energy consumption in communication networks in wire-line communication networks, e.g. low power modes or reduced link rate

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)
  • Information Transfer Between Computers (AREA)

Abstract

The invention belongs to the technical field of computers, and discloses a message pushing method, device, equipment and storage medium based on double queues. The method comprises the following steps: acquiring queue messages in real time and sequentially putting them into a first message queue; accumulating the first message quantity in the first message queue, and judging according to that quantity whether a flow control limiting condition is reached; if so, transferring the queue messages exceeding the flow control limiting condition from the first message queue to a second message queue; at intervals of a preset duration, calculating the second message quantity of each message type in the second message queue, comparing it with a first blocking threshold, and judging whether the second message quantity is greater than or equal to the first blocking threshold; if so, triggering the first message queue to execute a message rearrangement mechanism so as to classify the queue messages by priority, transferring the queue messages to the second message queue according to the classification result, and consuming the messages there. In this way, message blocking can be effectively reduced.

Description

Message pushing method, device, equipment and storage medium based on double queues
Technical Field
The present invention relates to the field of computer technologies, and in particular, to a method, an apparatus, a device, and a storage medium for pushing a message based on a dual queue.
Background
Message pushing usually adopts an asynchronous pushing scheme: asynchronous pushing allows push messages to be sent in parallel without affecting the business logic or hindering the business process. An asynchronous pushing scheme generally uses a message queue, that is, the business system triggers the message logic and, when the message is recorded in a table, a producer sends a queue message to the message queue; a consumer in the message consumption center consumes the messages in queue order and calls a third-party system to perform the actual message pushing. However, when messages are pushed in large volumes, consumed slowly, or temporarily blocked, important messages are consumed late and delayed, and there is a risk that a user's business is affected because important messages reach the user slowly.
Disclosure of Invention
The invention provides a message pushing method, device, equipment and storage medium based on double queues, which can effectively reduce message blocking, dynamically adjust the message consumption order, and ensure that important messages reach users in time.
In order to solve the technical problems, the invention adopts a technical scheme that: a message pushing method based on double queues is provided, which comprises the following steps:
acquiring queue messages in real time and storing the queue messages to a first message queue in sequence;
accumulating the first message quantity in the first message queue, and judging whether the first message queue reaches a flow control limiting condition or not according to the first message quantity;
if yes, transferring the queue message exceeding the flow control limiting condition from the first message queue to a second message queue;
calculating a second message quantity of each message type in the second message queue at intervals of preset duration, comparing the second message quantity with a first blocking threshold value, and judging whether the second message quantity is greater than or equal to the first blocking threshold value;
if yes, triggering the first message queue to execute a message rearrangement mechanism so as to carry out priority classification on the queue messages, transferring the queue messages to the second message queue according to a classification result, and carrying out message consumption in the second message queue.
According to an embodiment of the present invention, when the message rearrangement mechanism transfers queue messages from the first message queue to the second message queue, the urgency of each queue message is identified; queue messages whose urgency is greater than a first preset value are preferentially transferred to the second message queue, and queue messages whose urgency is less than or equal to the first preset value are marked and dropped back into the first message queue.
According to an embodiment of the present invention, after determining whether the first message queue reaches a flow control limit condition according to the first message quantity, the method further includes:
and if the first message queue does not reach the flow control limiting condition, message consumption is carried out in the first message queue according to a first-in first-out mechanism.
According to an embodiment of the present invention, after comparing the second message number with a first blocking threshold and determining whether the second message number is greater than or equal to the first blocking threshold, the method further includes:
and if not, respectively consuming the messages in the first message queue and the second message queue according to a first-in first-out mechanism.
According to an embodiment of the present invention, after the calculating, at a preset time interval, a second message quantity of each message type in the second message queue, the method further includes:
comparing the second message quantity with a second blocking threshold value, and judging whether the second message quantity is greater than or equal to the second blocking threshold value;
if yes, triggering the first message queue to execute a message rearrangement mechanism so as to carry out priority classification on the queue messages, transferring the queue messages to the second message queue according to a classification result, and simultaneously triggering the first message queue and/or the second message queue to execute a message discarding strategy.
According to an embodiment of the present invention, after the obtaining the queue message in real time and before the sequentially storing the queue message to the first message queue, the method further includes:
and acquiring the message type of the queue message, and executing a message discarding strategy on the queue message of which the message type is a non-emergency type.
According to an embodiment of the present invention, before the acquiring queue messages in real time and sequentially storing the queue messages in the first message queue, the method further includes:
acquiring a message sent by a message producer, and identifying a template ID of the message;
and acquiring a corresponding message template according to the template ID, and reassembling the message according to the message template to acquire the queue message, wherein the queue message comprises the urgency degree, the message type and the message containable accumulation number.
In order to solve the technical problem, the invention adopts another technical scheme that: the message pushing device based on the double queues comprises:
the acquisition module is used for acquiring queue messages in real time and storing the queue messages to a first message queue in sequence;
the first judging module is used for accumulating the first message quantity in the first message queue and judging whether the first message queue reaches a flow control limiting condition or not according to the first message quantity;
a first execution module, configured to transfer the queue message that exceeds the flow control restriction condition from the first message queue to a second message queue if the flow control restriction condition is exceeded;
the second judging module is used for calculating the second message quantity of each message type in the second message queue at intervals of preset duration, comparing the second message quantity with a first blocking threshold value and judging whether the second message quantity is greater than or equal to the first blocking threshold value or not;
and the second execution module is used for, if the second message quantity is greater than or equal to the first blocking threshold, triggering the first message queue to execute a message rearrangement mechanism so as to carry out priority classification on the queue messages, transferring the queue messages to the second message queue according to a classification result, and carrying out message consumption in the second message queue.
In order to solve the technical problems, the invention adopts another technical scheme that: a computer device is provided, comprising: a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the processor implements the above message pushing method based on double queues when executing the computer program.
In order to solve the technical problems, the invention adopts another technical scheme that: there is provided a computer storage medium having stored thereon a computer program which, when executed by a processor, implements the dual queue-based message push method described above.
The invention has the beneficial effects that: through the arrangement of the double message queues, when the number of second messages in the second message queue is greater than or equal to the first blocking threshold value, the first message queue can be triggered to execute a message rearrangement mechanism, the message consumption sequence is dynamically adjusted, queue messages with high urgency degree are preferentially transferred to the second message queue for message consumption, and important messages are guaranteed to reach users in time.
Drawings
Fig. 1 is a schematic flowchart of a message pushing method based on dual queues according to an embodiment of the present invention;
fig. 2 is a flowchart illustrating a message pushing method based on dual queues according to another embodiment of the present invention;
fig. 3 is a flowchart illustrating a message pushing method based on dual queues according to another embodiment of the present invention;
fig. 4 is a schematic structural diagram of a message pushing apparatus based on dual queues according to an embodiment of the present invention;
FIG. 5 is a schematic structural diagram of a computer device according to an embodiment of the present invention;
fig. 6 is a schematic structural diagram of a computer storage medium according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The terms "first", "second" and "third" in the present invention are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined as "first," "second," or "third" may explicitly or implicitly include at least one of the feature. In the description of the present invention, "a plurality" means at least two, e.g., two, three, etc., unless specifically limited otherwise. All directional indicators (such as up, down, left, right, front, and rear … …) in the embodiments of the present invention are only used to explain the relative positional relationship between the components, the movement, and the like in a specific posture (as shown in the drawings), and if the specific posture is changed, the directional indicator is changed accordingly. Furthermore, the terms "include" and "have," as well as any variations thereof, are intended to cover non-exclusive inclusions. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to only those steps or elements listed, but may alternatively include other steps or elements not listed, or inherent to such process, method, article, or apparatus.
Reference herein to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment can be included in at least one embodiment of the invention. The appearances of the phrase in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. It is explicitly and implicitly understood by one skilled in the art that the embodiments described herein can be combined with other embodiments.
Fig. 1 is a flowchart illustrating a message pushing method based on dual queues according to an embodiment of the present invention. It should be noted that the method of the present invention is not limited to the flow sequence shown in fig. 1 if the results are substantially the same. As shown in fig. 1, the method comprises the steps of:
step S101: and acquiring queue messages in real time and storing the queue messages to the first message queue in sequence.
In step S101, after the message producer generates a message, the sending of the message is typically triggered by the business system or by a JOB task. When the message is recorded in the database, the initiator of the message transmission sends the queue message; at this point, the system acquires the queue message in real time and sequentially stores it to the first message queue.
In an implementation example, the message generated by the message producer may carry a template ID, and the message may be assembled according to the template ID before being stored. Specifically, before the acquiring of the queue message in real time and sequentially storing it to the first message queue, the method further includes: acquiring a message sent by the message producer and identifying the template ID of the message; and acquiring the corresponding message template according to the template ID and reassembling the message according to the message template to obtain the queue message. A message template is, for example, "The vehicle with license plate XXXX has filed a claim, please handle it promptly", where the license plate number is replaceable message content. The queue message carries attributes such as the urgency, the message type, and the message containable accumulation number. The urgency may be a number, e.g. 1, 2, or 3; the larger the number, the higher the urgency. The message types can include an urgent type and a non-urgent type: the urgent type covers business messages, while the non-urgent type covers system messages and notification messages. When message congestion occurs, urgent messages need to be processed preferentially, and non-urgent messages can be processed with a delay. The message containable accumulation number is the maximum backlog allowed in the second message queue for messages of a given type.
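For illustration only, the following Java sketch shows how such template-based assembly might look; the class names, the template store, and the template text are assumptions rather than the patented implementation.
```java
import java.util.Map;

// Illustrative sketch of template-based message assembly; names are assumptions.
public class MessageAssembler {

    // A queue message as described in the text: content plus urgency,
    // message type, and the allowable backlog for that type.
    public record QueueMessage(String content, int urgency, String type, int maxBacklog) {}

    // A message template: text with a placeholder, plus default attributes.
    public record MessageTemplate(String pattern, int urgency, String type, int maxBacklog) {}

    // Hypothetical template store keyed by template ID.
    private static final Map<String, MessageTemplate> TEMPLATES = Map.of(
        "T001", new MessageTemplate("The vehicle with license plate %s has filed a claim, please handle it promptly",
                                    3, "URGENT", 100));

    // Reassemble the producer's raw message into a queue message using its template ID.
    public static QueueMessage assemble(String templateId, String variableContent) {
        MessageTemplate t = TEMPLATES.get(templateId);
        if (t == null) {
            throw new IllegalArgumentException("Unknown template ID: " + templateId);
        }
        String content = String.format(t.pattern(), variableContent);
        return new QueueMessage(content, t.urgency(), t.type(), t.maxBacklog());
    }

    public static void main(String[] args) {
        QueueMessage msg = assemble("T001", "ABC1234");
        System.out.println(msg);
    }
}
```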
Step S102: and accumulating the first message quantity in the first message queue, and judging whether the first message queue reaches the flow control limiting condition or not according to the first message quantity.
In step S102, the flow control limiting condition may be the maximum number of messages that the first message queue can accommodate. In an implementation example, referring to fig. 2, if the first message queue does not reach the flow control limiting condition, step S106 is executed to consume the messages in the first message queue according to a first-in first-out mechanism. For example, if the flow control limiting condition is set to 100 and the first message quantity exceeds 100, step S103 is executed; if the first message quantity does not exceed 100, the queue messages are consumed in the first message queue according to a first-in first-out mechanism.
Step S103: and if so, transferring the queue message exceeding the flow control limit condition from the first message queue to the second message queue.
In step S103, assuming that the flow control restriction condition is 100, after a new queue message enters the first message queue from the head of the queue, the first message quantity of the first message queue reaches 101, and a queue message queued at the tail of the queue is transferred from the first message queue to the second message queue.
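As a rough illustration of steps S102 and S103, the following Java sketch models the two queues as in-memory deques (an assumption; the patent does not specify the queue middleware): new messages enter at the head of the first queue, and once the flow control limit is exceeded the message at the tail of the first queue is moved to the second queue.
```java
import java.util.ArrayDeque;
import java.util.Deque;

// Sketch of the flow-control overflow between two queues (steps S102-S103).
// In-memory deques stand in for the real message middleware; an assumption.
public class DualQueue {

    private final Deque<String> firstQueue = new ArrayDeque<>();
    private final Deque<String> secondQueue = new ArrayDeque<>();
    private final int flowControlLimit;

    public DualQueue(int flowControlLimit) {
        this.flowControlLimit = flowControlLimit;
    }

    // Steps S101/S102: enqueue into the first queue, then check the flow control limit.
    public synchronized void enqueue(String message) {
        firstQueue.addFirst(message);                    // new message enters at the head
        if (firstQueue.size() > flowControlLimit) {      // flow control limit exceeded
            // Step S103: the message queued at the tail overflows to the second queue.
            secondQueue.addLast(firstQueue.pollLast());
        }
    }

    // First-in first-out consumption from the first queue when the limit is not reached.
    public synchronized String consumeFirst() {
        return firstQueue.pollLast();
    }

    // First-in first-out consumption from the second (overflow) queue.
    public synchronized String consumeSecond() {
        return secondQueue.pollFirst();
    }

    public static void main(String[] args) {
        DualQueue q = new DualQueue(100);
        for (int i = 1; i <= 101; i++) {
            q.enqueue("msg-" + i);
        }
        System.out.println("overflowed to second queue: " + q.consumeSecond()); // msg-1
    }
}
```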
Step S104: and calculating the second message quantity of each message type in the second message queue at intervals of preset duration, comparing the second message quantity with the first blocking threshold value, and judging whether the second message quantity is greater than or equal to the first blocking threshold value.
In step S104, since each queue message carries attributes such as the message type, the urgency, and the message containable accumulation number when it enters the first message queue, the message type of each queue message can be obtained directly in the second message queue. The first blocking threshold is the message containable accumulation number and serves as a basis for judging the degree of message congestion. For example, if the message containable accumulation number of the urgent type is 100, whether the second message queue is lightly congested can be judged by comparing the second message quantity with the first blocking threshold. In this embodiment, a JOB timing task counts the number of messages of each message type flowing into the second message queue over a period of time and thereby calculates the congestion degree of the current message queue. For example, the number of messages of each type flowing into the second message queue within one minute is counted; if 100 or more urgent messages flow into the second message queue within one minute, the congestion degree of the second message queue is judged to be light congestion, and a light-congestion flag may be set in Redis. If fewer than 100 urgent messages flow into the second message queue within one minute, it is judged that the second message queue is not congested.
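A minimal sketch of this timed congestion check, assuming an in-memory flag map in place of Redis and illustrative threshold values, could look as follows.
```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.LongAdder;

// Sketch of the JOB-style timed task that detects light congestion (step S104).
// An in-memory flag map stands in for the Redis flag mentioned in the text.
public class CongestionMonitor {

    // Messages of each type that flowed into the second queue during the current window.
    private final Map<String, LongAdder> inflowPerType = new ConcurrentHashMap<>();
    // First blocking threshold per type (the allowable backlog for that type); assumed values.
    private final Map<String, Integer> firstBlockingThreshold = Map.of("URGENT", 100, "NOTIFY", 500);
    // Stand-in for the Redis congestion flag.
    private final Map<String, String> congestionFlag = new ConcurrentHashMap<>();

    // Called whenever a message of the given type is transferred into the second queue.
    public void recordInflow(String type) {
        inflowPerType.computeIfAbsent(type, t -> new LongAdder()).increment();
    }

    // Evaluate one window: compare each type's inflow with its first blocking threshold.
    void evaluateWindow() {
        inflowPerType.forEach((type, counter) -> {
            long count = counter.sumThenReset();
            int threshold = firstBlockingThreshold.getOrDefault(type, Integer.MAX_VALUE);
            if (count >= threshold) {
                congestionFlag.put(type, "LIGHT_CONGESTION");  // triggers the rearrangement mechanism
            } else {
                congestionFlag.remove(type);                   // no congestion for this type
            }
        });
    }

    // Run the check every minute, mirroring the one-minute window in the example.
    public void start() {
        ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
        scheduler.scheduleAtFixedRate(this::evaluateWindow, 1, 1, TimeUnit.MINUTES);
    }

    public static void main(String[] args) {
        CongestionMonitor m = new CongestionMonitor();
        for (int i = 0; i < 120; i++) m.recordInflow("URGENT");
        m.evaluateWindow();
        System.out.println(m.congestionFlag); // {URGENT=LIGHT_CONGESTION}
    }
}
```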
In this embodiment, if the second message quantity of a certain message type is greater than or equal to the first blocking threshold, indicating that the second message queue is lightly congested, step S105 is executed. In an implementation example, referring to fig. 2, if the second message quantity of a certain message type is smaller than the first blocking threshold, indicating that no message congestion has occurred in the second message queue, step S107 is executed to consume messages in the first message queue and the second message queue respectively according to the first-in first-out mechanism. When no congestion occurs, this embodiment can thus flexibly consume from both queues and increase the message consumption rate.
Step S105: if yes, triggering the first message queue to execute a message rearrangement mechanism so as to carry out priority classification on the queue messages, transferring the queue messages to the second message queue according to the classification result, and consuming the messages in the second message queue.
In step S105, when the message rearrangement mechanism transfers queue messages from the first message queue to the second message queue, the urgency of each queue message is identified: queue messages whose urgency is greater than the first preset value are preferentially transferred to the second message queue, while queue messages whose urgency is less than or equal to the first preset value are marked and dropped back into the first message queue. For example, with a first preset value of 2, when the second message quantity is greater than or equal to the first blocking threshold, the first message queue sends the queue messages with urgency greater than 2 to the second message queue, and the queue messages with urgency of 2 or less are dropped back into the first message queue and marked as delayed sending. While the first message queue executes the message rearrangement mechanism, it does not consume messages itself and only transfers queue messages of higher urgency to the second message queue, whereas the second message queue keeps consuming messages. After several rounds of rearrangement, queue messages of higher urgency are therefore consumed preferentially under the first-in first-out consumption mechanism of the second message queue, which reduces, to a certain extent, the delay of important push messages reaching clients due to consumption congestion and improves the user experience.
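A simplified sketch of the rearrangement pass is given below, with the first preset value hard-coded to 2 and in-memory deques standing in for the real queues; both are assumptions for illustration.
```java
import java.util.ArrayDeque;
import java.util.Deque;

// Sketch of the message rearrangement mechanism (step S105); names and the
// in-memory queues are illustrative assumptions, not the patented middleware.
public class RearrangementSketch {

    static class QueueMessage {
        final String content;
        final int urgency;
        boolean delayedSend;            // mark applied to low-urgency messages

        QueueMessage(String content, int urgency) {
            this.content = content;
            this.urgency = urgency;
        }
    }

    static final int FIRST_PRESET_VALUE = 2;

    // Move high-urgency messages from the first queue to the second queue;
    // mark the rest as delayed and drop them back into the first queue.
    static void rearrange(Deque<QueueMessage> firstQueue, Deque<QueueMessage> secondQueue) {
        int pending = firstQueue.size();
        for (int i = 0; i < pending; i++) {
            QueueMessage msg = firstQueue.pollFirst();     // oldest message first
            if (msg.urgency > FIRST_PRESET_VALUE) {
                secondQueue.addLast(msg);                  // consumed FIFO in the second queue
            } else {
                msg.delayedSend = true;                    // mark as delayed sending
                firstQueue.addLast(msg);                   // drop back into the first queue
            }
        }
    }

    public static void main(String[] args) {
        Deque<QueueMessage> first = new ArrayDeque<>();
        Deque<QueueMessage> second = new ArrayDeque<>();
        first.add(new QueueMessage("claim alert", 3));
        first.add(new QueueMessage("system notice", 1));
        rearrange(first, second);
        System.out.println("second queue size: " + second.size()); // 1 (the urgency-3 message)
        System.out.println("first queue size: " + first.size());   // 1, marked as delayed
    }
}
```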
According to the dual-queue-based message pushing method provided by this embodiment of the invention, by setting two message queues, when the second message quantity in the second message queue is greater than or equal to the first blocking threshold, the first message queue can be triggered to execute the message rearrangement mechanism, the message consumption order is dynamically adjusted, queue messages of higher urgency are preferentially transferred to the second message queue for consumption, important messages are ensured to reach users in time, and the user experience is improved.
Fig. 3 is a flowchart illustrating a message pushing method based on dual queues according to another embodiment of the present invention. It should be noted that the method of the present invention is not limited to the flow sequence shown in fig. 3 if the results are substantially the same. As shown in fig. 3, the method comprises the steps of:
step S301: and acquiring queue messages in real time and storing the queue messages to the first message queue in sequence.
In this embodiment, step S301 in fig. 3 is similar to step S101 in fig. 1, and for brevity, is not described herein again.
Step S302: and accumulating the first message quantity in the first message queue, and judging whether the first message queue reaches the flow control limiting condition or not according to the first message quantity.
In this embodiment, step S302 in fig. 3 is similar to step S102 in fig. 1, and for brevity, is not described herein again.
Step S303: and if so, transferring the queue message exceeding the flow control limit condition from the first message queue to the second message queue.
In this embodiment, step S303 in fig. 3 is similar to step S103 in fig. 1, and for brevity, is not described herein again.
Step S304: and calculating the second message quantity of each message type in the second message queue at intervals of preset duration, comparing the second message quantity with a second blocking threshold value, and judging that the second message quantity is greater than or equal to the second blocking threshold value.
In step S304, the second blocking threshold is another basis for judging the degree of message congestion and is greater than the first blocking threshold; whether the second message queue is severely congested can be judged by comparing the second message quantity with the second blocking threshold. As in the previous embodiment, a JOB timing task counts the number of messages of each message type flowing into the second message queue over a period of time and thereby calculates the congestion degree of the current message queue. For example, the number of messages of each type flowing into the second message queue within one minute is counted; if 1000 or more urgent messages flow into the second message queue within one minute, the congestion degree of the second message queue is judged to be severe congestion, and a severe-congestion flag may be set in Redis.
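Taking the two thresholds together, the congestion grading used in steps S104 and S304 can be summarized by a small sketch such as the following; the threshold values are those from the examples above and are otherwise assumptions.
```java
// Sketch of grading congestion with both thresholds (steps S104 and S304);
// threshold values are illustrative assumptions.
public class CongestionLevel {

    enum Level { NONE, LIGHT, SEVERE }

    // firstThreshold < secondThreshold, e.g. 100 and 1000 in the text's example.
    static Level grade(long inflowPerWindow, int firstThreshold, int secondThreshold) {
        if (inflowPerWindow >= secondThreshold) {
            return Level.SEVERE;   // triggers rearrangement plus the discard strategy
        }
        if (inflowPerWindow >= firstThreshold) {
            return Level.LIGHT;    // triggers the rearrangement mechanism only
        }
        return Level.NONE;         // both queues keep consuming first-in first-out
    }

    public static void main(String[] args) {
        System.out.println(grade(120, 100, 1000));   // LIGHT
        System.out.println(grade(1500, 100, 1000));  // SEVERE
    }
}
```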
Step S305: if yes, triggering the first message queue to execute a message rearrangement mechanism so as to carry out priority classification on the queue messages, transferring the queue messages to the second message queue according to the classification result, and simultaneously triggering the first message queue and/or the second message queue to execute a message discarding strategy.
In step S305, if the message congestion degree of the second message queue is severe congestion, then when the first message queue transfers queue messages to the second message queue, the urgency of each queue message is identified so as to classify the queue messages by priority: queue messages of higher urgency are preferentially transferred to the second message queue, and queue messages of lower urgency are marked and dropped back into the first message queue. At the same time, the message type of the queue messages in the first message queue and/or the second message queue is identified, and non-urgent messages, such as system messages and notification messages, are discarded. Message congestion is thereby effectively reduced, the consumption efficiency of queue messages with higher urgency is improved, the delay of important push messages reaching clients due to consumption congestion is effectively reduced, and the impact on user experience is avoided.
In an implementation example, the queue messages can be classified and a discarding strategy executed at storage time, so that the types of queue messages entering the first message queue are controlled at the source and the consumption efficiency of queue messages with higher urgency is further improved. Specifically, after acquiring the queue messages in real time and before sequentially storing them to the first message queue, the method further includes: acquiring the message type of each queue message, and executing a message discarding strategy on queue messages whose message type is a non-urgent type.
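A brief sketch of this source-side discarding strategy follows; the type names and the in-memory queue are illustrative assumptions.
```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.Set;

// Sketch of the source-side message discarding strategy: non-urgent types
// (system and notification messages) are dropped before entering the first queue.
// Type names and the in-memory queue are illustrative assumptions.
public class SourceDiscardPolicy {

    private static final Set<String> NON_URGENT_TYPES = Set.of("SYSTEM", "NOTIFICATION");

    private final Deque<String> firstQueue = new ArrayDeque<>();

    // Returns true if the message was accepted into the first queue.
    public boolean accept(String messageType, String content) {
        if (NON_URGENT_TYPES.contains(messageType)) {
            return false;                       // discard strategy: drop non-urgent messages
        }
        firstQueue.addLast(content);            // urgent business messages proceed as usual
        return true;
    }

    public static void main(String[] args) {
        SourceDiscardPolicy policy = new SourceDiscardPolicy();
        System.out.println(policy.accept("URGENT", "claim alert"));       // true
        System.out.println(policy.accept("NOTIFICATION", "weekly news")); // false, discarded
    }
}
```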
According to the dual-queue-based message pushing method provided by this embodiment, severe congestion is identified, the first message queue is triggered to execute the message rearrangement mechanism so as to classify the queue messages by priority, the queue messages are transferred to the second message queue according to the classification result, and the first message queue and/or the second message queue is triggered to execute the message discarding strategy. Message blocking is thereby effectively reduced, the consumption efficiency of queue messages with higher urgency is improved, the delay of important push messages reaching clients due to consumption blocking is effectively reduced, and the impact on user experience is avoided.
Fig. 4 is a schematic structural diagram of a message pushing apparatus based on dual queues according to an embodiment of the present invention. As shown in fig. 4, the apparatus 40 includes an obtaining module 41, a first determining module 42, a first executing module 43, a second determining module 44, and a second executing module 45.
The obtaining module 41 is configured to obtain queue messages in real time and store the queue messages to the first message queue in sequence;
the first judging module 42 is configured to accumulate the first message quantity in the first message queue, and judge whether the first message queue reaches the flow control restriction condition according to the first message quantity;
the first executing module 43 is configured to transfer, if yes, a queue message that exceeds the flow control limitation condition from the first message queue to the second message queue;
the second judging module 44 is configured to calculate a second message quantity of each message type in the second message queue at intervals of a preset duration, compare the second message quantity with the first blocking threshold, and judge whether the second message quantity is greater than or equal to the first blocking threshold;
the second executing module 45 is configured to, if yes, trigger the first message queue to execute a message reordering mechanism to perform priority classification on the queue messages, transfer the queue messages to the second message queue according to the classification result, and perform message consumption in the second message queue.
Referring to fig. 5, fig. 5 is a schematic structural diagram of a computer device according to an embodiment of the present invention. As shown in fig. 5, the computer device 50 includes a processor 51 and a memory 52 coupled to the processor 51.
The memory 52 stores program instructions for implementing the dual queue-based message pushing method according to any of the above embodiments.
The message pushing method based on the double queues comprises the following steps:
acquiring queue messages in real time and storing the queue messages to a first message queue in sequence;
accumulating the first message quantity in the first message queue, and judging whether the first message queue reaches a flow control limiting condition or not according to the first message quantity;
if yes, transferring the queue message exceeding the flow control limiting condition from the first message queue to a second message queue;
calculating the second message quantity of each message type in the second message queue at intervals of preset duration, comparing the second message quantity with the first blocking threshold value, and judging whether the second message quantity is greater than or equal to the first blocking threshold value;
if yes, triggering the first message queue to execute a message rearrangement mechanism so as to carry out priority classification on the queue messages, transferring the queue messages to the second message queue according to the classification result and carrying out message consumption in the second message queue.
The processor 51 is operable to execute program instructions stored by the memory 52 to push messages.
The processor 51 may also be referred to as a CPU (Central Processing Unit). The processor 51 may be an integrated circuit chip having signal processing capabilities. The processor 51 may also be a general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, discrete gate or transistor logic, or discrete hardware components. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor or the like.
Referring to fig. 6, fig. 6 is a schematic structural diagram of a computer storage medium according to an embodiment of the present invention. The computer storage medium of the embodiment of the present invention stores a program file 61 capable of implementing all the methods described above, wherein the program file 61 may be stored in the computer storage medium in the form of a software product, and includes several instructions to enable a computer device (which may be a personal computer, a server, or a network device) or a processor (processor) to execute all or part of the steps of the methods described in the embodiments of the present invention.
The message pushing method based on the double queues comprises the following steps:
acquiring queue messages in real time and storing the queue messages to a first message queue in sequence;
accumulating the first message quantity in the first message queue, and judging whether the first message queue reaches a flow control limiting condition or not according to the first message quantity;
if yes, transferring the queue message exceeding the flow control limiting condition from the first message queue to the second message queue;
calculating the second message quantity of each message type in the second message queue at intervals of preset duration, comparing the second message quantity with the first blocking threshold value, and judging whether the second message quantity is greater than or equal to the first blocking threshold value;
if yes, triggering the first message queue to execute a message rearrangement mechanism so as to carry out priority classification on the queue messages, transferring the queue messages to the second message queue according to the classification result and carrying out message consumption in the second message queue.
And the aforementioned computer storage media include: various media capable of storing program codes, such as a usb disk, a mobile hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, or terminal devices, such as a computer, a server, a mobile phone, and a tablet.
In the embodiments provided in the present invention, it should be understood that the disclosed system, apparatus and method may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, a division of a unit is merely a logical division, and an actual implementation may have another division, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The above description is only an embodiment of the present invention, and not intended to limit the scope of the present invention, and all modifications of equivalent structures and equivalent processes, which are made by using the contents of the present specification and the accompanying drawings, or directly or indirectly applied to other related technical fields, are included in the scope of the present invention.

Claims (10)

1. A message pushing method based on double queues is characterized by comprising the following steps:
acquiring queue messages in real time and storing the queue messages to a first message queue in sequence;
accumulating the first message quantity in the first message queue, and judging whether the first message queue reaches a flow control limiting condition or not according to the first message quantity;
if yes, transferring the queue message exceeding the flow control limiting condition from the first message queue to a second message queue;
calculating a second message quantity of each message type in the second message queue at intervals of preset duration, comparing the second message quantity with a first blocking threshold value, and judging whether the second message quantity is greater than or equal to the first blocking threshold value;
if yes, triggering the first message queue to execute a message rearrangement mechanism so as to carry out priority classification on the queue messages, transferring the queue messages to the second message queue according to a classification result, and consuming the messages in the second message queue.
2. The message pushing method according to claim 1, wherein the message rearrangement mechanism identifies the urgency of the queue message when the first message queue transfers the queue message to the second message queue, preferentially transfers the queue message with the urgency greater than a first preset value to the second message queue, and marks the queue message with the urgency less than or equal to the first preset value and drops the queue message back into the first message queue.
3. The message pushing method according to claim 1, wherein after determining whether the first message queue reaches a flow control restriction condition according to the first message quantity, the method further comprises:
and if the first message queue does not reach the flow control limiting condition, message consumption is carried out in the first message queue according to a first-in first-out mechanism.
4. The message pushing method according to claim 1, wherein after comparing the second message number with a first blocking threshold and determining whether the second message number is greater than or equal to the first blocking threshold, further comprising:
and if not, respectively consuming the messages in the first message queue and the second message queue according to a first-in first-out mechanism.
5. The message pushing method according to claim 1, wherein after the calculating, at intervals of a preset duration, the second message quantity of each message type in the second message queue, the method further comprises:
comparing the second message quantity with a second blocking threshold value, and judging whether the second message quantity is greater than or equal to the second blocking threshold value;
if yes, triggering the first message queue to execute a message rearrangement mechanism so as to carry out priority classification on the queue messages, transferring the queue messages to the second message queue according to a classification result, and simultaneously triggering the first message queue and/or the second message queue to execute a message discarding strategy.
6. The message pushing method according to claim 5, wherein after the real-time obtaining of the queue message and before the sequentially storing of the queue message to the first message queue, further comprising:
and acquiring the message type of the queue message, and executing a message discarding strategy on the queue message of which the message type is a non-emergency type.
7. The message pushing method according to claim 1, wherein before the obtaining queue messages in real time and storing the queue messages in sequence to the first message queue, the method further comprises:
acquiring a message sent by a message producer, and identifying a template ID of the message;
and acquiring a corresponding message template according to the template ID, and reassembling the message according to the message template to acquire the queue message, wherein the queue message comprises the urgency, the message type and the message containable accumulation number.
8. A message push apparatus based on dual queues, comprising:
the acquisition module is used for acquiring queue messages in real time and storing the queue messages to a first message queue in sequence;
the first judging module is used for accumulating the first message quantity in the first message queue and judging whether the first message queue reaches a flow control limiting condition or not according to the first message quantity;
a first execution module, configured to transfer the queue message that exceeds the flow control restriction condition from the first message queue to a second message queue if the flow control restriction condition is exceeded;
the second judging module is used for calculating the second message quantity of each message type in the second message queue at intervals of preset duration, comparing the second message quantity with a first blocking threshold value and judging whether the second message quantity is greater than or equal to the first blocking threshold value or not;
and the second execution module, configured to, if yes, trigger the first message queue to execute a message rearrangement mechanism so as to perform priority classification on the queue messages, transfer the queue messages to the second message queue according to a classification result, and perform message consumption in the second message queue.
9. A computer device, comprising: memory, processor and computer program stored on the memory and executable on the processor, characterized in that the processor implements the dual queue based message push method according to any of claims 1-7 when executing the computer program.
10. A computer storage medium having a computer program stored thereon, wherein the computer program, when executed by a processor, implements the dual queue based message push method according to any of claims 1-7.
CN202210708982.4A 2022-06-22 2022-06-22 Message pushing method, device, equipment and storage medium based on double queues Active CN114885018B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210708982.4A CN114885018B (en) 2022-06-22 2022-06-22 Message pushing method, device, equipment and storage medium based on double queues

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210708982.4A CN114885018B (en) 2022-06-22 2022-06-22 Message pushing method, device, equipment and storage medium based on double queues

Publications (2)

Publication Number Publication Date
CN114885018A (en) 2022-08-09
CN114885018B CN114885018B (en) 2023-08-29

Family

ID=82681722

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210708982.4A Active CN114885018B (en) 2022-06-22 2022-06-22 Message pushing method, device, equipment and storage medium based on double queues

Country Status (1)

Country Link
CN (1) CN114885018B (en)

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7895273B1 (en) * 2003-01-23 2011-02-22 Sprint Spectrum L.P. System and method for sorting instant messages
CN106708607A (en) * 2015-11-12 2017-05-24 阿里巴巴集团控股有限公司 Congestion control method and apparatus for message queue
CN110808922A (en) * 2019-10-29 2020-02-18 北京大米科技有限公司 Message processing method and device, storage medium and electronic equipment
CN111131082A (en) * 2019-12-25 2020-05-08 广东电科院能源技术有限责任公司 Charging facility data transmission dynamic control method and system
CN113467969A (en) * 2021-06-22 2021-10-01 上海星融汽车科技有限公司 Method for processing message accumulation
CN114428693A (en) * 2022-03-31 2022-05-03 季华实验室 Method and device for adjusting message priority, electronic equipment and storage medium

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115865990A (en) * 2023-02-22 2023-03-28 广州机智云物联网科技有限公司 High-performance Internet of things platform message engine
CN115865990B (en) * 2023-02-22 2023-04-18 广州机智云物联网科技有限公司 High-performance Internet of things platform message engine

Also Published As

Publication number Publication date
CN114885018B (en) 2023-08-29

Similar Documents

Publication Publication Date Title
WO2019174536A1 (en) Congestion control method and network device
TWI510030B (en) System and method for performing packet queuing on a client device using packet service classifications
EP4175232A1 (en) Congestion control method and device
CN108243116B (en) Flow control method and switching equipment
CN110784415B (en) ECN quick response method and device
WO2017016505A1 (en) Data enqueuing and dequeuing method and queue management unit
US20080225705A1 (en) Monitoring, Controlling, And Preventing Traffic Congestion Between Processors
CN105027081B (en) A kind of switching method and device of poll and interruption
CN108540395B (en) Congestion judgment method in loss-free network and switch
WO2020134425A1 (en) Data processing method, apparatus, and device, and storage medium
CN106851015B (en) Method, device and terminal for adjusting broadcast message queue
EP3823228A1 (en) Message processing method and apparatus, communication device, and switching circuit
CN111949497A (en) Message queue system and message processing method based on message queue system
CN106921947B (en) Method, device and terminal for adjusting broadcast message queue
CN114885018A (en) Message pushing method, device, equipment and storage medium based on double queues
CN109039953B (en) Bandwidth scheduling method and device
CN101997777B (en) Interruption processing method, device and network equipment
US11516145B2 (en) Packet control method, flow table update method, and node device
WO2018082655A1 (en) Method and device for determining data transmission path
WO2022174444A1 (en) Data stream transmission method and apparatus, and network device
TW201014295A (en) Controlling data flow through a data communications link
CN110769046B (en) Message acquisition method and device, electronic equipment and machine-readable storage medium
CN109660322B (en) Data processing method and device and computer storage medium
CN113765796A (en) Flow forwarding control method and device
CN109062706B (en) Electronic device, method for limiting inter-process communication thereof and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant