CN115695532B - Method and device for processing message by message middleware and computer equipment


Info

Publication number
CN115695532B
CN115695532B (Application No. CN202310005221.7A)
Authority
CN
China
Prior art keywords: message, cache list, middleware, processed, processing
Prior art date
Legal status
Active
Application number
CN202310005221.7A
Other languages
Chinese (zh)
Other versions
CN115695532A (en)
Inventor
周文斯
张勇
范阳阳
Current Assignee
Shenzhen Zhuyun Technology Co ltd
Original Assignee
Shenzhen Zhuyun Technology Co ltd
Priority date: 2023-01-04
Filing date: 2023-01-04
Publication date
Application filed by Shenzhen Zhuyun Technology Co ltd
Priority to CN202310005221.7A
Publication of CN115695532A
Application granted
Publication of CN115695532B

Landscapes

  • Information Transfer Between Computers (AREA)

Abstract

The disclosure relates to a method, a device and a computer device for processing messages with message middleware. The method includes: detecting the running state of the message middleware; in response to the running state of the message middleware being a fault state, storing messages that need to be sent to the message middleware into a cache list in sequence and in a preset first mode, wherein the cache list is created according to the message topics corresponding to the messages and has a correspondence with those message topics; and sending the cache list to a message processor to instruct the message processor to obtain the messages to be processed from the cache list in sequence and in a preset second mode, and to process them. With this method, messages can continue to be processed when the message middleware fails, and message blocking does not occur.

Description

Method and device for processing message by message middleware and computer equipment
Technical Field
The present disclosure relates to the field of information processing technologies, and in particular, to a method, an apparatus, and a computer device for processing messages with message middleware.
Background
At present, message middleware technologies (such as Kafka, RabbitMQ and the like) are used in almost all large distributed software systems to realize functions such as decoupling of system components, peak shaving of service request messages, asynchronous processing, and redundant backup.
However, processing with message middleware brings problems of stability and high availability. When the message middleware fails and becomes unavailable, all servers may experience connection problems. In addition, the problem caused by message accumulation cannot be solved, and the processing of all subsequent messages is blocked.
Disclosure of Invention
In view of the foregoing, it is desirable to provide a method, an apparatus, and a computer device for processing messages with message middleware, which can continue to process messages without message blocking when the message middleware fails.
In a first aspect, the present disclosure provides a method for processing a message using message middleware. The method comprises the following steps:
detecting the running state of the message middleware;
in response to the running state of the message middleware being a fault state, storing messages that need to be sent to the message middleware into a cache list in sequence and in a preset first mode, wherein the cache list is created according to the message topics corresponding to the messages and has a correspondence with those message topics;
and sending the cache list to a message processor to instruct the message processor to obtain the messages to be processed from the cache list in sequence and in a preset second mode, and to process the messages to be processed.
In one embodiment, the method further comprises:
in response to a message group sent to the message middleware needing to be processed by at least two message processors, creating a corresponding cache list for each message group, wherein a message group is a message that is processed by at least two message processors; the cache list has a correspondence with the message group.
In one embodiment, the method further comprises:
determining whether a message topic of the message middleware is blocked according to the offsets of the partitions in each message topic;
in response to the running state of the message middleware being a normal state and the message topic being blocked, determining whether the messages in that message topic in the message middleware need to be processed in sequence;
in response to the messages in the message topic needing to be processed in sequence, processing the messages in the message topic with the message middleware;
in response to the messages in the message topic not needing to be processed in sequence, storing the messages in the message topic into a cache list in sequence and in a preset first mode;
and sending the cache list to a message processor to instruct the message processor to obtain the messages to be processed from the cache list in sequence and in a preset second mode, and to process the messages to be processed.
In one embodiment, the determining whether a message topic of the message middleware is blocked according to the offsets of the partitions in each message topic includes:
recording the sending offset when each message in the message topic is sent;
obtaining the consumption offset when the message processor processes each message in the message topic;
and determining whether the message topic is blocked according to the difference between the sending offset and the consumption offset and a preset offset threshold.
In one embodiment, the storing into the cache list in sequence and in a preset first mode includes:
serializing the messages, the serializing comprising: converting the messages into the same transmission format;
storing the serialized messages in the cache list according to the processing sequence of the messages and in a preset first mode, wherein the preset first mode comprises: inserting from the head of the cache list or inserting from the tail of the cache list.
In one embodiment, the sending the cache list to a message processor to instruct the message processor to obtain a message to be processed from the cache list in sequence and in a preset second mode, and process the message to be processed, includes:
sending the cache list to the message processor to instruct the message processor to detect the cache list;
in response to the message processor detecting that a message to be processed exists in the cache list, the message processor obtaining the message to be processed from the cache list according to the processing sequence of the messages and in a preset second mode, deserializing the message to be processed, and processing the deserialized message, wherein the preset second mode has a correspondence with the preset first mode, and the preset second mode comprises: obtaining from the head of the cache list or obtaining from the tail of the cache list.
In one embodiment, the detecting the running state of the message middleware includes:
querying a return value of the information of at least one partition in a message topic;
and determining the running state of the message middleware according to the return value.
In a second aspect, the present disclosure also provides an apparatus for processing a message using message middleware. The device comprises:
the state detection module is used for detecting the running state of the message middleware;
the message processing module is used for, in response to the running state of the message middleware being a fault state, storing the messages that need to be sent to the message middleware into a cache list in sequence and in a preset first mode, wherein the cache list is created according to the message topics corresponding to the messages and has a correspondence with those message topics;
and the list sending module is used for sending the cache list to a message processor to instruct the message processor to obtain the messages to be processed from the cache list in sequence and in a preset second mode, and to process the messages to be processed.
In a third aspect, the present disclosure also provides a computer device. The computer device comprises a memory storing a computer program and a processor implementing the steps of any of the above method embodiments when executing the computer program.
In a fourth aspect, the present disclosure also provides a computer-readable storage medium. The computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out the steps of any of the above-mentioned method embodiments.
In a fifth aspect, the present disclosure also provides a computer program product. The computer program product comprising a computer program that when executed by a processor performs the steps of any of the above-described method embodiments.
In the embodiments, when the message middleware fails, the cache list may be used to store the messages, and the messages between the message sender and the message processor are then processed through the cache list. Insertion and deletion on the cache list are very fast, with O(1) time complexity; messages are sent and received through the cache list structure only when Kafka fails, and as long as the service consumer keeps processing messages, message accumulation and blocking do not occur. In addition, with the cache list, once a piece of data in the cache list has been processed by one message processor, it no longer exists in the list and cannot be processed by another message processor, so no repeated processing occurs. Moreover, the cache list and the message topic have a one-to-one correspondence, and each different message topic can have its own cache list, which avoids confusion of service logic during message processing.
Drawings
In order to more clearly illustrate the embodiments of the present disclosure or the technical solutions in the prior art, the drawings needed to be used in the description of the embodiments or the prior art will be briefly described below, and it is obvious that the drawings in the following description are some embodiments of the present disclosure, and other drawings can be obtained by those skilled in the art without creative efforts.
FIG. 1 is a diagram of an application environment for a method of processing messages using message middleware, in one embodiment;
FIG. 2 is a flow diagram that illustrates a method for processing messages using message middleware in one embodiment;
FIG. 3 is a flow diagram that illustrates the steps of message sequence processing in one embodiment;
FIG. 4 is a flowchart illustrating the step S304 according to an embodiment;
FIG. 5 is a schematic flow chart illustrating the step S204 or the step S310 according to an embodiment;
FIG. 6 is a schematic flow chart illustrating the step S206 or the step S312 according to an embodiment;
FIG. 7 is a flowchart illustrating the step S202 according to one embodiment;
FIG. 8 is a timing diagram illustrating a method for processing a message using message middleware in one embodiment;
FIG. 9 is a block diagram that illustrates the architecture of an apparatus for processing messages using message middleware, according to one embodiment;
FIG. 10 is a diagram showing an internal configuration of a computer device according to an embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the present disclosure more clearly understood, the present disclosure is further described in detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present disclosure and are not intended to limit the present disclosure.
It should be noted that the terms "first," "second," and the like in the description and claims herein and in the above-described drawings are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments herein described are capable of operation in sequences other than those illustrated or described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, apparatus, article, or device that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or device.
In this document, the term "and/or" is only one kind of association relationship describing the associated object, meaning that three kinds of relationships may exist. For example, a and/or B, may represent: a exists alone, A and B exist simultaneously, and B exists alone. In addition, the character "/" herein generally indicates that the former and latter related objects are in an "or" relationship.
At present, common message middleware supports cluster-mode deployment, generally in a master-slave sharing mode, a master-slave synchronization mode, a multi-master cluster synchronization mode, and the like. The cluster mode of the message middleware solves the availability problem of a single machine, where availability refers to the capacity of the message middleware to provide services to the outside without faults. However, it demands a lot of machine resources, the message middleware is generally deployed within the same subnet, and when a network problem occurs, all machines may suffer connection faults. In addition, the cluster mode cannot solve the problem caused by message accumulation: when messages accumulate in a certain queue, all subsequent messages are blocked. There is also the Pub/Sub model of Redis, i.e., a publish-subscribe model, for messaging between different distributed systems. The publish-subscribe model can deliver one message to multiple receivers. Subscribers may subscribe to one or more channels, and publishers may send messages to a specified channel, which all subscribers of that channel receive. However, Redis Pub/Sub has no grouping mechanism, so if one micro-service starts several instances, messages are consumed repeatedly; messages cannot be persisted, so if the consumer is not online the messages are lost, and if Redis goes down and restarts, the Pub/Sub data is also completely lost; and without an Ack mechanism, whether a message was successfully delivered cannot be determined, so complex service requirements are not met.
Therefore, to solve the above problem, the embodiments of the present disclosure provide a method for processing a message by using message middleware, which can be applied in the application environment as shown in fig. 1. Where message sender 102 communicates with message middleware 104 over a network. When message sender 102 has a message to send, the message is sent to message middleware 104. Message handler 106 listens to message middleware 104 continuously and gets the messages in message middleware 104 in a timely manner. Message sender 102 detects the operational status of message middleware 104. In response to the message sender 102 detecting that the operation status of the message middleware 104 is a failure status, the message sender 102 stores the messages required to be sent to the message middleware in the cache list 108 in sequence and in a first preset manner. The cache list 108 is created by the message sender 102 or the message middleware 104 according to the message topic corresponding to the message, and has a corresponding relationship with the message topic corresponding to the message. The message sender 102 sends the cache list 108 to the message handler 106 to instruct the message handler 106 to monitor the cache list 108, obtain the message to be processed from the cache list 108 in sequence and in a preset second manner, and process the message to be processed. The message sender 102 and the message processor may be, but are not limited to, various personal computers, notebook computers, smart phones, tablet computers, and the like. Message middleware 104 may be implemented with a stand-alone server or a server cluster of multiple servers.
In one embodiment, as shown in fig. 2, a method for processing a message by using message middleware is provided, which is described by taking the application of the method to the message sender 102 in fig. 1 as an example, and includes the following steps:
s202, detecting the running state of the message middleware.
The message middleware may be a supporting software system, such as Kafka, that provides synchronous or asynchronous and reliable message transmission for application systems in a network environment based on queue and messaging technology. In some embodiments of the present disclosure, the message middleware may be Kafka, which is a high throughput distributed publish-subscribe messaging system. The running state may be a message middleware connection state and a message service state.
Specifically, the connection status of the node of the current message sender to the Kafka middleware and whether the Kafka message service is available can be continuously detected, so as to obtain the operation status of Kafka.
In some exemplary embodiments, a Kafka health check component implemented based on the Spring Boot Actuator may be used, whose underlying layer calls the Kafka-Clients API to detect the operational status of Kafka. The operation state of Kafka is checked periodically.
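For illustration only, the following is a minimal sketch of such an Actuator-based health indicator. The class name, the timeout, and the use of describeCluster() as the probe are assumptions (the component described here reads topic and partition information); this is not the patent's implementation.

import java.util.concurrent.TimeUnit;
import org.apache.kafka.clients.admin.AdminClient;
import org.springframework.boot.actuate.health.Health;
import org.springframework.boot.actuate.health.HealthIndicator;

// Reports UP when the Kafka cluster answers a metadata query in time, and DOWN on any
// exception or timeout, mirroring the UP/DOWN states maintained by the fault detection component.
public class KafkaHealthIndicator implements HealthIndicator {
    private final AdminClient adminClient; // assumed to be created elsewhere from the Kafka client properties

    public KafkaHealthIndicator(AdminClient adminClient) {
        this.adminClient = adminClient;
    }

    @Override
    public Health health() {
        try {
            // A minimal reachability check; a real component could also inspect topic partitions.
            int nodeCount = adminClient.describeCluster().nodes().get(3, TimeUnit.SECONDS).size();
            return nodeCount > 0 ? Health.up().withDetail("nodes", nodeCount).build() : Health.down().build();
        } catch (Exception e) {
            return Health.down(e).build();
        }
    }
}

Registering such a bean lets a scheduled task poll health() periodically, as described above.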
S204, in response to the running state of the message middleware being a fault state, storing the messages that need to be sent to the message middleware into a cache list in sequence and in a preset first mode.
The cache list is created according to the message topic corresponding to the message, and the cache list has a correspondence with that message topic. The fault state may generally be a state in which the message middleware is unavailable, for example because of a network problem or because the message middleware is down. The message may generally be a message corresponding to various operations in some embodiments of the present disclosure, such as a login operation, a query operation, and so on. The cache list can be a Redis List, which is a data storage structure (List) in Redis; its bottom layer is a compressed list, and when the amount of data in the list is large, the bottom layer holds several compressed lists packaged into a doubly linked list. The message topic may typically be a Kafka Topic, which is a logical classification of stored messages; it may be considered a message collection, used by the service system of the message sender to group, classify, and create messages. Each Topic corresponds to a group of message senders and message handlers, corresponding to one piece of business processing logic. The preset first mode may generally be a mode of adding messages; since messages are generally processed in sequence, the preset first mode may insert the messages into the cache list according to a certain rule, for example inserting from the tail of the cache list, to ensure the ordering of the messages in the cache list. The order may be the order of message processing, and messages that need to be processed first are typically stored first in the preset first mode.
Specifically, when the running state of the message middleware is detected to be a fault state, it can be determined that the message middleware currently cannot process messages, which may cause messages to be blocked. Therefore, a cache list can be created for the message topic corresponding to each message of the message sender, with the cache lists and the message topics in one-to-one correspondence. Messages sent to the message middleware (messages that need to be processed by the message processor) can then be stored in the cache list in sequence and in the preset first mode.
In some exemplary embodiments, for example, the messages sent to the message middleware are A1, A2, and B1, where A1 and A2 belong to an a message topic and B1 belongs to a B message topic, the corresponding cache lists to be created may be two cache lists, where one cache list corresponds to the a message topic and the other cache list corresponds to the B message topic.
In other exemplary embodiments, the structure of the cache list may be a Key-Value pair, where the Key identifies each cache list, like an identity number, and each cache list has its own Key. Each Kafka Topic corresponds to one Redis List, and a one-to-one correspondence is formed by defining in advance that the Key name of the cache list is the same as the name of the Kafka Topic.
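As a small illustration of this key convention, the sketch below stores the example messages A1, A2 and B1 from the previous paragraph in per-topic lists; the host, port, topic names, and serialized payloads are assumptions.

import redis.clients.jedis.Jedis;

// Sketch: the Redis List key is simply the Kafka Topic name, so each topic has its own cache list.
public class TopicCacheListMapping {
    public static void main(String[] args) {
        try (Jedis jedis = new Jedis("localhost", 6379)) {
            // Each LPUSH creates the list implicitly if it does not exist yet.
            jedis.lpush("topic-A", "A1-serialized", "A2-serialized");
            jedis.lpush("topic-B", "B1-serialized");
            // LLEN confirms two independent cache lists keyed by topic name.
            System.out.println(jedis.llen("topic-A")); // 2
            System.out.println(jedis.llen("topic-B")); // 1
        }
    }
}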
And S206, sending the cache list to a message processing party to indicate the message processing party to acquire the message to be processed from the cache list in sequence and in a preset second mode and process the message to be processed.
The message to be processed may be generally a message obtained from a cache list, multiple messages may be stored in the cache list, and a message taken out by a message processor each time may be a message to be processed. The preset second manner may be a manner opposite to the first manner, for example, the first manner is to insert a message from the tail of the cache list, and the second manner may be to acquire a message from the head of the cache list, so as to ensure the sequentiality of message processing.
The processing may also be referred to as consumption in the service, and may be a manner of processing a message according to a certain service logic, for example, a message of a user login operation is obtained from a cache list, and a consumption processing party records details of the corresponding user login operation in a log system.
Specifically, after the message sender stores the messages in the cache list, the cache list may be sent to the message processor, and the message processor may continuously monitor changes of the messages in the cache list. When the message processor detects that a new or unprocessed message appears in the cache list, it obtains that message, i.e. the message to be processed, according to the processing sequence of the messages, and then processes it.
In the method for processing messages with message middleware, when the message middleware fails, messages can be stored in the cache list, and the messages between the message sender and the message processor can then be processed through the cache list. Insertion and deletion on the cache list are very fast, with O(1) time complexity; sending and receiving messages through the cache list structure is enabled only when a Kafka fault occurs, and as long as the service consumer keeps consuming, message accumulation and blocking do not occur. In addition, with the cache list, once a piece of data in the cache list has been processed by one message processor, it no longer exists in the list and cannot be processed by another message processor, so no repeated processing occurs. Moreover, the cache list and the message topic have a one-to-one correspondence, and each different message topic can have its own cache list, which avoids confusion of service logic during message processing.
In one embodiment, the cache list may implement data persistence based on a combination of Redis AOF and Redis RDB, so the messages stored in the cache list are not lost; if Redis fails, the cache list can be reloaded after a restart and then processed by the message processor.
Here, Redis RDB (Redis DataBase) is one way of persisting Redis data: a snapshot of the data set in memory is written to disk at specified intervals, and on recovery the snapshot file is read directly into memory. Redis AOF (Redis Append Only File) is another way of persisting Redis data: each write command is appended to a log file, and when data needs to be restored, the commands in the AOF file are re-executed.
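As an illustrative aside, not taken from the patent, both persistence mechanisms can be switched on at runtime through CONFIG SET (they are normally set in redis.conf); the snapshot rule values below are assumptions.

import redis.clients.jedis.Jedis;

// Minimal sketch of enabling AOF and RDB persistence for the Redis instance backing the cache lists.
public class RedisPersistenceSetup {
    public static void enablePersistence(String host, int port) {
        try (Jedis jedis = new Jedis(host, port)) {
            jedis.configSet("appendonly", "yes");    // AOF: append each write command to the log file
            jedis.configSet("save", "900 1 300 10"); // RDB: snapshot rules (illustrative intervals)
        }
    }
}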
In one embodiment, the method further comprises: and responding to the message groups sent to the message middleware that at least two message processors are required to process, and creating a corresponding cache list according to each message group.
Wherein the message group is a message processed by at least two of the message processors; and the cache list and the message group have a corresponding relation.
Specifically, when at least two message processors need to process a message sent to the message middleware, that is, when the message is consumed repeatedly by message processors of different groups, the message can be identified as belonging to a message group. A one-to-one corresponding cache list is then created for each message group.
In some exemplary embodiments, message A1, for example, requires processing by both a first message handler and a second message handler. If message A1 belongs to message topic A, then instead of creating a cache list for message topic A, a cache list corresponding to message A1 is created according to message A1, and message A1 is stored in that cache list. When several message processors process the A1 message, they obtain it only through the cache list corresponding to the A1 message.
In this embodiment, when one message is processed by different message processors, repeated consumption may occur. In that case, if a cache list were created per message topic and different message processors obtained messages from it in sequence, the service logic could become confused: for example, different message processors all need the A1 message, but process it at different positions in their own sequences and at different times, so several copies of A1 might sit in the cache list and the A1 message could not be obtained in the specified consumption order. Therefore, for this case, a cache list corresponding to each consuming group is created, and all of the same messages are stored in that list. When a message needs to be obtained, a message processor simply obtains it through the cache list corresponding to its consuming group, which solves the problems of confused service logic and of messages not being obtainable in the specified consumption order.
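One possible concrete form of this per-group cache list is sketched below; the "topic:group" key format, group names, and payload are hypothetical illustrations, not the patent's naming scheme. Each group that must process message A1 gets its own list, so one group's pop cannot remove the copy another group still needs.

import redis.clients.jedis.Jedis;

// Sketch: one copy of the same message is pushed into a separate cache list per consuming group.
public class GroupCacheListDemo {
    public static void main(String[] args) {
        String serializedA1 = "{\"id\":\"A1\"}";
        String[] consumerGroups = {"group-1", "group-2"};
        try (Jedis jedis = new Jedis("localhost", 6379)) {
            for (String group : consumerGroups) {
                jedis.lpush("topic-A:" + group, serializedA1); // one copy per processing group
            }
        }
    }
}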
In one embodiment, as shown in fig. 3, the method further comprises:
s302, determining whether the running state of the message middleware is a normal state.
S304, determining whether a message topic of the message middleware is blocked according to the offsets of the partitions in each message topic.
Here, the offset can generally be understood as the number of messages sent by a message sender or the number of messages processed by a message processor.
Specifically, each message topic typically has multiple partitions, and each partition holds multiple messages. Whether a message topic of the message middleware is blocked can be determined from the number of messages sent by the message sender and the number of messages processed by the message processor under each partition of each message topic.
S306, in response to the running state of the message middleware being a normal state and the message topic being blocked, determining whether the messages in that message topic in the message middleware need to be processed in sequence.
S308, in response to the messages in the message topic needing to be processed in sequence, processing the messages in the message topic with the message middleware.
Specifically, when the running state of the message middleware is a normal state, it can be determined that the message middleware can process messages normally. However, when there are many messages, the message processor cannot process them quickly, message accumulation may occur, and the message topic becomes blocked. In that case, it is judged whether the messages in the message topic need to be processed in sequence; if they do, a warning can be issued to report that the message topic is blocked, and the messages in the message topic continue to be processed with the message middleware.
S310, in response to the messages in the message topic not needing to be processed in sequence, storing the messages in the message topic into a cache list in sequence and in a preset first mode.
Specifically, when the messages in the message topic do not need to be processed in sequence, they may be stored into the cache list in the preset first mode, and the blocked message topic is then handled through the cache list.
And S312, sending the cache list to a message processing party to instruct the message processing party to acquire the message to be processed from the cache list in sequence and in a preset second mode, and processing the message to be processed.
Specifically, for the specific implementation in this embodiment, reference may be made to the processing manner in step S206, which is not repeated herein.
In this embodiment, when the message middleware is in a normal state but a message topic is blocked, it can be determined whether the messages in the topic that currently needs processing must be processed in sequence; if not, the messages in the topic can be stored through the cache list. The underlying data structure of the cache list is a queue, and once a piece of data has been popped from the queue it no longer exists there, so storing messages through the cache list does not lead to message accumulation or blocking.
In one embodiment, as shown in fig. 4, the determining whether a message topic of the message middleware is blocked according to the offsets of the partitions in each message topic includes:
S402, recording the sending offset when each message in the message topic is sent;
S404, obtaining the consumption offset when the message processor processes each message in the message topic;
S406, determining whether the message topic is blocked according to the difference between the sending offset and the consumption offset and a preset offset threshold.
Here, the preset offset threshold may be set according to the actual service scenario corresponding to the message topic; some embodiments of the present disclosure do not limit the specific offset threshold. That is, whether blocking occurs is determined according to the particular service scenario and the offsets.
Specifically, the message sender may record each message sent and thereby obtain the sending offset. The message processor may likewise record each message processed to obtain the consumption offset for the messages in the message topic. The difference obtained by subtracting the consumption offset from the sending offset typically represents the number of messages still to be processed by the message middleware. If this difference does not decrease within a certain time and is larger than the preset offset threshold, the message topic is determined to be blocked.
In some exemplary embodiments, each message sent by the message sender counts as 1 and each message processed by the message processor counts as 1. Taking a message sender that has sent 1000 messages as an example, the sending offset is 1000. The consumption offset recorded when the message processor processes the messages may be, for example, 600. The difference between the sending offset and the consumption offset is then 400, meaning there are 400 messages waiting to be processed in the message middleware; if the difference stays above a certain value (e.g. 300) for a period of time (e.g. 1 minute), blocking can be considered to have occurred. It can also be concluded that the message sender sends messages faster than the message processor can process them.
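The lag check can be expressed concretely as in the sketch below. It is a minimal illustration assuming a recent kafka-clients version (where committed() accepts a set of partitions); the class name and the single-threshold comparison are assumptions rather than the patent's implementation.

import java.util.HashSet;
import java.util.List;
import java.util.Map;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.consumer.OffsetAndMetadata;
import org.apache.kafka.common.TopicPartition;

// Compares the "sending offset" (latest offset written per partition) with the "consumption offset"
// (offset committed by the consumer group) and flags the topic as blocked above a preset threshold.
public class TopicLagChecker {
    public static boolean isBlocked(KafkaConsumer<?, ?> consumer,
                                    List<TopicPartition> partitions,
                                    long offsetThreshold) {
        Map<TopicPartition, Long> endOffsets = consumer.endOffsets(partitions);
        Map<TopicPartition, OffsetAndMetadata> committed = consumer.committed(new HashSet<>(partitions));
        long totalLag = 0;
        for (TopicPartition tp : partitions) {
            long end = endOffsets.getOrDefault(tp, 0L);
            OffsetAndMetadata meta = committed.get(tp);
            long consumed = (meta == null) ? 0L : meta.offset();
            totalLag += Math.max(0, end - consumed);
        }
        return totalLag > offsetThreshold; // a persistent lag above the threshold indicates blocking
    }
}

In practice the caller would sample this value over time (for example once a minute, as in the numbers above) and only report blocking when the lag does not decrease.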
In this embodiment, whether a message topic is blocked can be determined by the offset, and if the message topic is blocked, a cache list can be used for processing, so that the problem of message accumulation is avoided.
In one embodiment, as shown in fig. 5, the storing into the cache list in order and in a first preset manner includes:
s502, serializing the message, wherein the serializing comprises: converting the message into the same transmission format;
s504, storing the serialized messages in the cache list according to the processing sequence of the messages and in a preset first mode, wherein the preset first mode comprises the following steps: inserting from the head of the cache list or inserting from the tail of the cache list.
Here, serialization is the process of converting the information of a message object into a form that can be stored or transmitted. Because each message is formatted differently, it needs to be serialized into the same format (e.g., a byte array) before it can be transmitted over the network. Serialization can be implemented in various ways and can standardize the message format, compress the data size, encrypt the data, and so on. Some embodiments of the present disclosure do not limit the specific serialization method.
Specifically, when messages need to be sent to the message middleware, their formats are usually not the same, so they need to be serialized, i.e. converted into a format that the message middleware or the cache list can recognize and store. After serialization, the messages are inserted one by one from the head of the cache list, or from its tail, according to the processing sequence of the messages, so as to ensure the ordering of message processing.
In some exemplary embodiments, there are four messages A, B, C and D, listed in their processing order. When messages are inserted from the head of the cache list, the obtained cache list may be: A, B, C, D. When messages are inserted from the tail of the cache list, the obtained cache list may be: D, C, B, A.
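As one possible concrete form of this step, the sketch below serializes each message to JSON (an assumption; the patent only requires a common transmission format) and inserts it at the head of the Redis List with LPUSH, in processing order. The class and parameter names are illustrative.

import com.fasterxml.jackson.databind.ObjectMapper;
import java.util.List;
import redis.clients.jedis.Jedis;

// Serializes messages into one transmission format and stores them in the cache list in processing order.
public class CacheListWriter {
    private static final ObjectMapper MAPPER = new ObjectMapper();

    public static void storeInOrder(Jedis jedis, String cacheListKey, List<Object> messagesInOrder)
            throws Exception {
        for (Object message : messagesInOrder) {
            String serialized = MAPPER.writeValueAsString(message); // same transmission format for every message
            jedis.lpush(cacheListKey, serialized);                  // preset first mode: insert from the head
        }
    }
}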
In this embodiment, by inserting the messages into the cache list in the first mode according to their processing order, the ordering of message processing is ensured and normal service processing is guaranteed.
In an embodiment, as shown in fig. 6, the sending the cache list to a message processing party to instruct the message processing party to obtain a message to be processed from the cache list in sequence and in a preset second manner, and processing the message to be processed includes:
s602, the cache list is sent to a message processing party to indicate the message processing party to detect the cache list.
S604, responding to the message processing party detecting that the cache list has the message to be processed, the message processing party obtains the message to be processed from the cache list according to the processing sequence of the message and in a preset second mode, deserializes the message to be processed, processes the deserialized message to be processed,
Here, the preset second mode has a correspondence with the preset first mode, and the preset second mode comprises: obtaining from the head of the cache list or obtaining from the tail of the cache list. For example, when the first mode is to insert from the tail of the cache list, the second mode is normally to obtain from the head of the cache list; when the first mode is to insert from the head of the cache list, the second mode may be to obtain from the tail of the cache list, so as to guarantee the order of message processing. Deserialization generally corresponds to the serialization method; for example, when the serialization involves data encryption, the deserialization involves data decryption.
Specifically, after the message sender stores the message in the cache list, the message sender may send the cache list or a notification message to the message handler to instruct the message handler to detect whether the message to be processed exists in the cache list. If the message processing party detects that the message to be processed exists in the cache list, the message to be processed is obtained from the cache list according to the processing sequence of the message and in a second mode corresponding to the first mode, and the message to be processed is serialized in the transmission process in general, so that the message to be processed can be deserialized according to the serialization mode and converted into a message format capable of being processed by the message processing party, and then the message processing party processes the deserialized message.
In some exemplary embodiments, the first mode may be LPUSH and the second mode may be BRPOP.
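A corresponding consumer-side sketch is shown below: BRPOP blocks on the tail of the list, matching LPUSH at the head, so messages come out in their original processing order. The message class, timeout, and handler are assumptions for illustration only.

import com.fasterxml.jackson.databind.ObjectMapper;
import java.util.List;
import redis.clients.jedis.Jedis;

// Blocks on the cache list, deserializes each pending message, and hands it to the business logic.
public class CacheListConsumer {
    private static final ObjectMapper MAPPER = new ObjectMapper();

    public static void consume(Jedis jedis, String cacheListKey) throws Exception {
        while (true) {
            // Wait up to 5 seconds for a pending message; BRPOP returns [key, value] or null on timeout.
            List<String> entry = jedis.brpop(5, cacheListKey);
            if (entry == null) {
                continue; // no pending message yet, keep listening
            }
            PendingMessage message = MAPPER.readValue(entry.get(1), PendingMessage.class); // deserialize
            handle(message);
        }
    }

    static void handle(PendingMessage message) { /* business processing, e.g. write a login log */ }

    static class PendingMessage { public String op; public String user; }
}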
In this embodiment, the second mode corresponding to the first mode is used to obtain the message from the cache list, so that the sequentiality during message processing can be ensured, service logic confusion does not occur, and the normal service processing can be ensured.
In one embodiment, as shown in fig. 7, the detecting the operating status of the message middleware includes:
s702, inquiring the return value of the information of at least one partition in the message subject.
S704, determining the running state of the message middleware according to the return value.
Here, the return value may typically be descriptive information of the partitions in each message topic.
Specifically, the message middleware is connected through its interface, and the description information of the partitions of each message topic in the message middleware is obtained. This description information is compared with preset standard description information; if they are the same, the running state of the message middleware is determined to be a normal state. If they differ, and the difference is large, the running state of the message middleware can be determined to be a fault state.
In this embodiment, whether a communication abnormality occurs is determined by requesting an interface of the Kafka middleware; if the network connection times out or the interface return value differs from the expected result, the state can be determined to be a fault state. The running state of the Kafka middleware can thus be determined quickly and handled in time, so the problem of message blocking does not occur.
In one embodiment, as shown in fig. 8, a health check method health() of Kafka is called by a multi-threaded scheduled polling task inside the fault detection component; the method traverses all Kafka Topics and determines whether Kafka has a fault by reading all partition information in real time (i.e., determines whether the state of Kafka is a fault state). If Kafka has not failed, it then determines whether a Kafka Topic is blocked by reading the offsets of its partitions. The running state of Kafka is maintained and updated in the fault detection component: the normal state is UP, and the fault state is DOWN.
Before sending a message, the message sender uses the fault detection component to judge the running state of Kafka. If Kafka is in the normal state, there is no fault; the fault detection component is then used again to judge whether the Topic to be sent to is blocked, and if not, the message is sent directly to Kafka. If Kafka is in the fault state, the message is serialized and then pushed to the cache list with LPUSH.
The message processor listens at the same time to the messages in the cache list and to the corresponding Topic in Kafka; when a message to be processed appears in either of them, it is obtained and processed.
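The sender-side flow of this timing diagram can be sketched as follows. The FaultDetector interface, the class name, and the order-sensitivity flag are assumptions standing in for the fault detection component described above; only the overall decision logic is taken from the text.

import com.fasterxml.jackson.databind.ObjectMapper;
import org.springframework.kafka.core.KafkaTemplate;
import redis.clients.jedis.Jedis;

// Check Kafka health, then topic blocking, and fall back to the Redis cache list on failure.
public class GuardedMessageSender {
    private final FaultDetector faultDetector;
    private final KafkaTemplate<String, String> kafkaTemplate;
    private final Jedis jedis;
    private final ObjectMapper mapper = new ObjectMapper();

    public GuardedMessageSender(FaultDetector faultDetector,
                                KafkaTemplate<String, String> kafkaTemplate,
                                Jedis jedis) {
        this.faultDetector = faultDetector;
        this.kafkaTemplate = kafkaTemplate;
        this.jedis = jedis;
    }

    public void send(String topic, Object message, boolean mustKeepOrder) throws Exception {
        String serialized = mapper.writeValueAsString(message);
        if (faultDetector.isKafkaUp()) {
            // Normal state: send via Kafka unless the topic is blocked and ordering does not matter.
            if (!faultDetector.isTopicBlocked(topic) || mustKeepOrder) {
                kafkaTemplate.send(topic, serialized);
                return;
            }
        }
        // Fault state, or a blocked order-insensitive topic: fall back to the cache list via LPUSH.
        jedis.lpush(topic, serialized);
    }

    // Hypothetical interface mirroring the fault detection component (UP/DOWN state and lag check).
    public interface FaultDetector {
        boolean isKafkaUp();
        boolean isTopicBlocked(String topic);
    }
}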
It should be understood that, although the steps in the flowcharts related to the embodiments as described above are sequentially displayed as indicated by arrows, the steps are not necessarily performed sequentially as indicated by the arrows. The steps are not performed in the exact order shown and described, and may be performed in other orders, unless explicitly stated otherwise. Moreover, at least a part of the steps in the flowcharts related to the embodiments described above may include multiple steps or multiple stages, which are not necessarily performed at the same time, but may be performed at different times, and the execution order of the steps or stages is not necessarily sequential, but may be rotated or alternated with other steps or at least a part of the steps or stages in other steps.
Based on the same inventive concept, the embodiment of the present disclosure further provides a device for processing a message by using message middleware, which is used for implementing the above method for processing a message by using message middleware. The implementation scheme for solving the problem provided by the device is similar to the implementation scheme described in the above method, so that specific limitations in one or more embodiments of the device for processing a message by using message middleware provided below may refer to the limitations in the above method for processing a message by using message middleware, and are not described herein again.
In one embodiment, as shown in fig. 9, there is provided an apparatus 800 for processing a message using message middleware, comprising: a status detection module 802, a message processing module 804, and a list sending module 806, wherein:
a status detection module 802, configured to detect an operating status of the message middleware;
the message processing module 804 is configured to, in response to that the operation state of the message middleware is a fault state, store messages that need to be sent to the message middleware in a cache list in sequence and in a preset first manner, where the cache list is created according to a message topic corresponding to the message, and has a correspondence with the message topic corresponding to the message;
a list sending module 806, configured to send the cache list to a message processor, so as to instruct the message processor to obtain, according to the order and in a preset second manner, a to-be-processed message from the cache list, and process the to-be-processed message.
In one embodiment of the apparatus, the apparatus further comprises: a cache list generation module, configured to respond that a message group sent to the message middleware needs to be processed by at least two message processors, and create a corresponding cache list according to each message group, where the message group is a message processed by at least two message processors; the cache list has a corresponding relation with the message group.
In one embodiment of the apparatus, the apparatus further comprises: a blocking determination module, configured to determine whether a message topic of the message middleware is blocked according to the offsets of the partitions in each message topic.
A sequence processing module, configured to, in response to the running state of the message middleware being a normal state and the message topic being blocked, determine whether the messages in that message topic in the message middleware need to be processed in sequence;
in response to the messages in the message topic needing to be processed in sequence, process the messages in the message topic with the message middleware; and in response to the messages in the message topic not needing to be processed in sequence, store the messages in the message topic into a cache list in sequence and in a preset first mode.
In one embodiment of the apparatus, the blocking determination module comprises: a sending offset determination module, configured to record the sending offset when each message in the message topic is sent.
A consumption offset obtaining module, configured to obtain the consumption offset when the message processor processes each message in the message topic.
A blocking determination submodule, configured to determine whether the message topic is blocked according to the difference between the sending offset and the consumption offset and a preset offset threshold.
In an embodiment of the apparatus, the message processing module 804 includes:
a serialization module to serialize the message, the serialization comprising: the messages are converted to the same transmission format.
A storage module, configured to store the serialized messages in the cache list according to a processing order of the messages and in a preset first manner, where the preset first manner includes: from the head of the cache list or from the tail of the cache list.
In an embodiment of the apparatus, the list sending module 806 includes: the sending submodule is used for sending the cache list to a message processor to indicate the message processor to detect the cache list, responding to the message processor detecting that a message to be processed exists in the cache list, the message processor obtaining the message to be processed from the cache list according to a message processing sequence and in a preset second mode, deserializing the message to be processed, and processing the deserialized message to be processed, wherein the preset second mode and the preset first mode have a corresponding relation, and the preset second mode comprises: obtaining from the head of the cache list or obtaining from the tail of the cache list.
In one embodiment of the apparatus, the status detection module 802 includes: a return value query module, configured to query a return value of the information of at least one partition in a message topic.
And the return value determining module is used for determining the running state of the message middleware according to the return value.
The respective modules in the above-described apparatus for processing a message using message middleware may be wholly or partially implemented by software, hardware, and a combination thereof. The modules can be embedded in a hardware form or independent of a processor in the computer device, and can also be stored in a memory in the computer device in a software form, so that the processor can call and execute operations corresponding to the modules.
In one embodiment, a computer device is provided, which may be a server, and its internal structure diagram may be as shown in fig. 10. The computer device includes a processor, a memory, and a network interface connected by a system bus. Wherein the processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device includes a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system, a computer program, and a database. The internal memory provides an environment for the operation of an operating system and computer programs in the non-volatile storage medium. The database of the computer device is used for storing message data. The network interface of the computer device is used for communicating with an external terminal through a network connection. The computer program is executed by a processor to implement a method for processing a message using message middleware.
Those skilled in the art will appreciate that the architecture shown in fig. 10 is merely a block diagram of some of the structures associated with the disclosed aspects and is not intended to limit the computer devices to which the disclosed aspects apply; a particular computer device may include more or fewer components than those shown, combine certain components, or have a different arrangement of components.
In an embodiment, a computer device is provided, comprising a memory in which a computer program is stored and a processor which, when executing the computer program, carries out the steps of any of the above method embodiments.
In an embodiment, a computer-readable storage medium is provided, on which a computer program is stored, which computer program, when being executed by a processor, carries out the steps of any of the above-mentioned method embodiments.
In an embodiment, a computer program product is provided, comprising a computer program which, when executed by a processor, performs the steps of any of the above-described method embodiments.
It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above can be implemented by hardware instructions of a computer program, which can be stored in a non-volatile computer-readable storage medium, and when executed, can include the processes of the embodiments of the methods described above. Any reference to memory, databases, or other media used in the embodiments provided by the present disclosure may include at least one of non-volatile and volatile memory. The nonvolatile Memory may include Read-Only Memory (ROM), magnetic tape, floppy disk, flash Memory, optical Memory, high-density embedded nonvolatile Memory, resistive Random Access Memory (ReRAM), magnetic Random Access Memory (MRAM), ferroelectric Random Access Memory (FRAM), phase Change Memory (PCM), graphene Memory, and the like. Volatile Memory can include Random Access Memory (RAM), external cache Memory, and the like. By way of illustration and not limitation, RAM can take many forms, such as Static Random Access Memory (SRAM) or Dynamic Random Access Memory (DRAM), for example. The databases involved in embodiments provided by the present disclosure may include at least one of relational and non-relational databases. The non-relational database may include, but is not limited to, a block chain based distributed database, and the like. The processors referred to in the embodiments provided in this disclosure may be general purpose processors, central processing units, graphics processors, digital signal processors, programmable logic, quantum computing based data processing logic, etc., without limitation.
The technical features of the above embodiments can be arbitrarily combined, and for the sake of brevity, all possible combinations of the technical features in the above embodiments are not described, but should be considered as the scope of the present specification as long as there is no contradiction between the combinations of the technical features.
The above-mentioned embodiments only express several embodiments of the present disclosure, and the description thereof is more specific and detailed, but not construed as limiting the scope of the present disclosure. It should be noted that, for those skilled in the art, various changes and modifications can be made without departing from the concept of the present disclosure, and these changes and modifications are all within the scope of the present disclosure. Therefore, the protection scope of the present disclosure should be subject to the appended claims.

Claims (10)

1. A method for processing a message using message middleware, the method comprising:
detecting the running state of the message middleware;
in response to the running state of the message middleware being a fault state, storing messages that need to be sent to the message middleware into a cache list in sequence and in a preset first mode, wherein the cache list is created according to the message topics corresponding to the messages and has a correspondence with those message topics;
and sending the cache list to a message processor to instruct the message processor to obtain the messages to be processed from the cache list in sequence and in a preset second mode, and to process the messages to be processed.
2. The method of claim 1, further comprising:
in response to a message group sent to the message middleware needing to be processed by at least two message processors, creating a corresponding cache list for each message group, wherein a message group is a message that is processed by at least two message processors; and the cache list has a correspondence with the message group.
3. The method of claim 1, further comprising:
determining whether a message topic of the message middleware is blocked according to the offsets of the partitions in each message topic;
in response to the running state of the message middleware being a normal state and the message topic being blocked, determining whether the messages in that message topic in the message middleware need to be processed in sequence;
in response to the messages in the message topic needing to be processed in sequence, processing the messages in the message topic with the message middleware;
in response to the messages in the message topic not needing to be processed in sequence, storing the messages in the message topic into a cache list in sequence and in a preset first mode;
and sending the cache list to a message processor to instruct the message processor to obtain the messages to be processed from the cache list in sequence and in a preset second mode, and to process the messages to be processed.
4. The method of claim 3, wherein the determining whether a message topic of the message middleware is blocked according to the offsets of the partitions in each message topic comprises:
recording the sending offset when each message in the message topic is sent;
obtaining the consumption offset when the message processor processes each message in the message topic;
and determining whether the message topic is blocked according to the difference between the sending offset and the consumption offset and a preset offset threshold.
5. The method according to claim 1 or 3, wherein the storing into the cache list in sequence and in the preset first mode comprises:
serializing the messages, wherein the serializing comprises converting the messages into a uniform transmission format;
and storing the serialized messages in the cache list according to the processing order of the messages and in the preset first mode, wherein the preset first mode comprises: storing from the head of the cache list or from the tail of the cache list.
6. The method according to claim 5, wherein the sending the cache list to a message processor, so as to instruct the message processor to acquire a message to be processed from the cache list in sequence and in a preset second mode and to process the message to be processed, comprises:
sending the cache list to the message processor to instruct the message processor to detect the cache list;
and in response to the message processor detecting that the cache list contains a message to be processed, the message processor acquiring the message to be processed from the cache list according to the processing order of the messages and in the preset second mode, deserializing the message to be processed, and processing the deserialized message, wherein the preset second mode has a correspondence with the preset first mode and comprises: acquiring from the head of the cache list or acquiring from the tail of the cache list.
7. The method of claim 1, wherein the detecting the running state of the message middleware comprises:
querying a return value of information of at least one partition in a message topic;
and determining the running state of the message middleware according to the return value.
8. An apparatus for processing a message using message middleware, the apparatus comprising:
a state detection module, configured to detect the running state of the message middleware;
a message processing module, configured to, in response to the running state of the message middleware being a fault state, store messages to be sent to the message middleware into a cache list in sequence and in a preset first mode, wherein the cache list is created according to the message topics corresponding to the messages and has a correspondence with the message topics corresponding to the messages;
and a list sending module, configured to send the cache list to a message processor, so as to instruct the message processor to acquire a message to be processed from the cache list in sequence and in a preset second mode and to process the message to be processed.
9. A computer device, comprising a memory and a processor, the memory storing a computer program, characterized in that the processor, when executing the computer program, implements the steps of the method of any one of claims 1 to 7.
10. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the steps of the method of any one of claims 1 to 7.
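By way of illustration and not limitation, the following minimal Java sketch shows one possible realization of the cache-list fallback recited in claims 1, 5 and 6: when the middleware is in a fault state, messages are serialized into a per-topic cache list (modeled here as an in-memory deque), and the message processor drains the list in the matching order, deserializing each entry before processing it. All class and method names below are hypothetical and are not defined by the present disclosure.

import java.nio.charset.StandardCharsets;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.LinkedBlockingDeque;
import java.util.function.Consumer;

// Hypothetical sketch of the cache-list fallback (claims 1, 5 and 6).
public class CacheListFallback {

    // One cache list per message topic; each list corresponds to its topic (claim 1).
    private final Map<String, LinkedBlockingDeque<byte[]>> cacheLists = new ConcurrentHashMap<>();

    // Preset first mode: serialize each message into a uniform format and append it
    // at the tail of the cache list in processing order (claim 5).
    public void storeOnFailure(String topic, String message) {
        byte[] serialized = message.getBytes(StandardCharsets.UTF_8);
        cacheLists.computeIfAbsent(topic, t -> new LinkedBlockingDeque<>()).addLast(serialized);
    }

    // Preset second mode, corresponding to the first mode: the message processor takes
    // entries from the head, deserializes them, and processes them (claim 6).
    public void drain(String topic, Consumer<String> processor) throws InterruptedException {
        LinkedBlockingDeque<byte[]> list = cacheLists.get(topic);
        if (list == null) {
            return; // no pending messages for this topic
        }
        while (!list.isEmpty()) {
            byte[] serialized = list.takeFirst();
            String message = new String(serialized, StandardCharsets.UTF_8); // deserialize
            processor.accept(message); // process the deserialized message
        }
    }
}

In practice the cache list could equally be an external structure such as a Redis list; the in-memory deque is used here only to keep the sketch self-contained.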
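Likewise, a minimal sketch of the blockage check of claim 4, under the assumption that a sending offset is recorded per partition at transmission time and a consumption offset is reported by the message processor; a topic is considered blocked once the lag on any partition exceeds the preset offset threshold. The names below are illustrative placeholders, not an API of any specific middleware.

import java.util.Map;

// Hypothetical sketch of the offset-based blockage check (claim 4).
public final class BlockageDetector {

    private final long offsetThreshold; // preset offset threshold

    public BlockageDetector(long offsetThreshold) {
        this.offsetThreshold = offsetThreshold;
    }

    // A single partition is blocked when the difference between the sending offset
    // and the consumption offset exceeds the preset threshold.
    public boolean isPartitionBlocked(long sendingOffset, long consumptionOffset) {
        return (sendingOffset - consumptionOffset) > offsetThreshold;
    }

    // A message topic is treated as blocked if any of its partitions is blocked.
    // The map key is the partition id; the value holds {sendingOffset, consumptionOffset}.
    public boolean isTopicBlocked(Map<Integer, long[]> partitionOffsets) {
        return partitionOffsets.values().stream()
                .anyMatch(offsets -> isPartitionBlocked(offsets[0], offsets[1]));
    }
}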
CN202310005221.7A 2023-01-04 2023-01-04 Method and device for processing message by message middleware and computer equipment Active CN115695532B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310005221.7A CN115695532B (en) 2023-01-04 2023-01-04 Method and device for processing message by message middleware and computer equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310005221.7A CN115695532B (en) 2023-01-04 2023-01-04 Method and device for processing message by message middleware and computer equipment

Publications (2)

Publication Number Publication Date
CN115695532A CN115695532A (en) 2023-02-03
CN115695532B true CN115695532B (en) 2023-03-10

Family

ID=85057280

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310005221.7A Active CN115695532B (en) 2023-01-04 2023-01-04 Method and device for processing message by message middleware and computer equipment

Country Status (1)

Country Link
CN (1) CN115695532B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117390337B (en) * 2023-12-11 2024-04-26 宁德时代新能源科技股份有限公司 Message sending method, device, middleware and medium

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105610926A (en) * 2015-12-22 2016-05-25 广州唯品会信息科技有限公司 Message transmitting method and system and message middleware system
CN106815338A (en) * 2016-12-25 2017-06-09 北京中海投资管理有限公司 A kind of real-time storage of big data, treatment and inquiry system
CN108563425A (en) * 2018-02-27 2018-09-21 北京邮电大学 A kind of event driven multipaths coprocessing system
CN109067844A (en) * 2018-07-09 2018-12-21 上海瀚银信息技术有限公司 A kind of message communication system
CN109669821A (en) * 2018-11-16 2019-04-23 深圳证券交易所 Cluster partial fault restoration methods, server and the storage medium of message-oriented middleware
CN112181683A (en) * 2020-09-27 2021-01-05 中国银联股份有限公司 Concurrent consumption method and device for message middleware
CN114884975A (en) * 2022-04-29 2022-08-09 青岛海尔科技有限公司 Service message processing method and device, storage medium and electronic device

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10554604B1 (en) * 2017-01-04 2020-02-04 Sprint Communications Company L.P. Low-load message queue scaling using ephemeral logical message topics


Similar Documents

Publication Publication Date Title
US11397647B2 (en) Hot backup system, hot backup method, and computer device
US20210200681A1 (en) Data storage method and apparatus, and server
CN102088490B (en) Data storage method, device and system
CN113452774B (en) Message pushing method, device, equipment and storage medium
CN115695532B (en) Method and device for processing message by message middleware and computer equipment
CN111198662B (en) Data storage method, device and computer readable storage medium
CN111641700B (en) Ceph object-based management and retrieval implementation method for storage metadata
CN114827171B (en) Information synchronization method, apparatus, computer device and storage medium
CN114253743A (en) Message synchronization method, device, node and readable storage medium
CN113687790A (en) Data reconstruction method, device, equipment and storage medium
CN108512753B (en) Method and device for transmitting messages in cluster file system
CN113489149B (en) Power grid monitoring system service master node selection method based on real-time state sensing
CN112865927B (en) Message delivery verification method, device, computer equipment and storage medium
CN112764679A (en) Dynamic capacity expansion method and terminal
CN116048878A (en) Business service recovery method, device and computer equipment
CN113596195B (en) Public IP address management method, device, main node and storage medium
CN113918364A (en) Redis-based lightweight message queue processing method and device
CN106844480B (en) A kind of cleaning comparison storage method
CN114661526A (en) Data backup method and device
CN111934909B (en) Main-standby machine IP resource switching method, device, computer equipment and storage medium
CN114490188A (en) Method and device for synchronizing main database and standby database
CN112488462A (en) Unified pushing method, device and medium for workflow data
US11874821B2 (en) Block aggregation for shared streams
CN114666401B (en) Device information processing method, device, computer device and storage medium
CN117478299B (en) Block chain consensus algorithm switching method, device and computer equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant