CN111585867B - Message processing method and device, electronic equipment and readable storage medium

Message processing method and device, electronic equipment and readable storage medium

Info

Publication number
CN111585867B
CN111585867B (application CN202010246268.9A)
Authority
CN
China
Prior art keywords
message
target
server
service
messages
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010246268.9A
Other languages
Chinese (zh)
Other versions
CN111585867A
Inventor
王杨
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing QIYI Century Science and Technology Co Ltd
Original Assignee
Beijing QIYI Century Science and Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing QIYI Century Science and Technology Co Ltd filed Critical Beijing QIYI Century Science and Technology Co Ltd
Priority to CN202010246268.9A priority Critical patent/CN111585867B/en
Publication of CN111585867A publication Critical patent/CN111585867A/en
Application granted granted Critical
Publication of CN111585867B publication Critical patent/CN111585867B/en

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L51/00 User-to-user messaging in packet-switching networks, transmitted according to store-and-forward or real-time protocols, e.g. e-mail
    • H04L51/07 User-to-user messaging characterised by the inclusion of specific contents
    • H04L51/18 Commands or executable codes
    • H04L67/00 Network arrangements or protocols for supporting network services or applications
    • H04L67/01 Protocols
    • H04L67/10 Protocols in which an application is distributed across nodes in the network
    • H04L67/14 Session management
    • H04L67/141 Setup of application sessions
    • H04L67/146 Markers for unambiguous identification of a particular session, e.g. session cookie or URL-encoding

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Computer And Data Communications (AREA)

Abstract

The embodiment of the invention provides a message processing method, a message processing apparatus, an electronic device, and a readable storage medium. The method includes: receiving a first message sent by a production server; storing the first message in a message queue corresponding to the service type of the first message; acquiring M first messages from the message queue and writing the M first messages into a first memory buffer queue corresponding to a first target service identifier by using M first threads; determining a first target service address associated with the first target service identifier; and reading N first messages from the first memory buffer queue by using N second threads and sending N first request messages to a target server corresponding to the first target service address, so that the service corresponding to the target server is called N times to process the N first messages. In this way, faster read operations can be achieved, a backlog of messages in the message queue is avoided to a certain extent, and the consumption efficiency of the messages in the message queue is improved.

Description

Message processing method and device, electronic equipment and readable storage medium
Technical Field
The present invention relates to the field of data processing technologies, and in particular, to a message processing method and apparatus, an electronic device, and a readable storage medium.
Background
Because systems based on a micro-service architecture are easy to develop and maintain, they are now commonly used in the development of larger projects. Meanwhile, in order to decouple the micro-services from one another and to achieve functions such as asynchronous message processing, rate limiting, and peak shaving, a production-consumption mode is generally applied.
The production-consumption mode works as follows: the server that publishes messages (the production server) publishes a message to a Message Queue (MQ) in the production server cluster; each message queue is a buffer area; the server that consumes messages (the consumption server) takes a message out of the message queue, processes it, and only takes the next message after the current one has been processed. Each storage server in the production server cluster may be provided with one message queue or with multiple message queues.
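For illustration only, the following Java sketch shows the production-consumption mode described above, with an in-memory java.util.concurrent blocking queue standing in for a real message queue product; the class name, queue capacity, and processing delay are assumptions made for this sketch and are not part of the patent.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// Minimal producer/consumer sketch: the producer blocks once the queue is full,
// which mirrors the backlog problem described in the background section.
public class ProduceConsumeDemo {
    public static void main(String[] args) throws InterruptedException {
        BlockingQueue<String> messageQueue = new ArrayBlockingQueue<>(100); // the buffer area

        Thread producer = new Thread(() -> {
            try {
                for (int i = 0; i < 1000; i++) {
                    messageQueue.put("message-" + i); // blocks when the queue is full
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });

        Thread consumer = new Thread(() -> {
            try {
                while (true) {
                    String msg = messageQueue.take(); // take one message out of the queue
                    Thread.sleep(5);                  // simulate slow processing
                    System.out.println("processed " + msg);
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });

        consumer.setDaemon(true);
        producer.start();
        consumer.start();
        producer.join();
    }
}
```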
However, when a large number of messages back up in the message queue for some reason (for example, the consumption server takes a long time to process each message), the production server has to pause and wait (that is, the producer thread is blocked), which affects the consumption efficiency of the messages in the message queue.
Disclosure of Invention
Embodiments of the present invention provide a message processing method, a message processing apparatus, an electronic device, and a readable storage medium, so as to improve consumption efficiency of messages in a message queue. The specific technical scheme is as follows:
In a first aspect of the present invention, there is provided a message processing method, executed in a buffering server, the method including:
receiving a first message sent by a production server, and storing the first message in a message queue corresponding to the service type of the first message;
acquiring M first messages from the message queue, and writing the M first messages into a first memory buffer queue corresponding to a first target service identifier by adopting M first threads, wherein the first target service identifier is an identifier corresponding to the service type of the first message, and M is an integer greater than or equal to 1;
determining a first target service address associated with the first target service identifier;
and reading N first messages in the first memory buffer queue by adopting N second threads, and sending N first request messages to a target server corresponding to the first target service address so as to call a service corresponding to the target server for N times to process the N first messages, wherein each first request message comprises one first message, and N is an integer larger than M.
In a second aspect of the present invention, there is also provided a message processing apparatus, provided in a buffering server, including:
the receiving module is used for receiving a first message sent by a production server and storing the first message in a message queue corresponding to the service type of the first message;
an obtaining module, configured to obtain M first messages from the message queue, and write the M first messages into a first memory buffer queue corresponding to a first target service identifier by using M first threads, where the first target service identifier is an identifier corresponding to a service type of the first message, and M is an integer greater than or equal to 1;
a determining module, configured to determine a first target service address associated with the first target service identifier;
a sending module, configured to read N first messages in the first memory buffer queue by using N second threads, and send N first request messages to a target server corresponding to the first target service address, so as to call a service corresponding to the target server for N times to process the N first messages, where each first request message includes one first message, and N is an integer greater than M.
In yet another aspect of the present invention, there is also provided a computer-readable storage medium having stored therein instructions, which when run on a computer, cause the computer to execute any of the above-described message processing methods.
In yet another aspect of the present invention, there is also provided a computer program product containing instructions which, when run on a computer, cause the computer to perform any of the message processing methods described above.
The message processing method provided in the embodiment of the present invention includes: receiving a first message sent by a production server; storing the first message in a message queue corresponding to the service type of the first message; obtaining M first messages from the message queue and writing them into a first memory buffer queue corresponding to a first target service identifier by using M first threads; determining a first target service address associated with the first target service identifier; and reading N first messages from the first memory buffer queue by using N second threads and sending N first request messages to a target server corresponding to the first target service address, so that the service corresponding to the target server is called N times to process the N first messages. Because N is greater than M, the number of second threads reading first messages from the first memory buffer queue is greater than the number of first threads writing first messages into it, so a faster read operation is achieved: first messages are read out of the first memory buffer queue faster than they are written in. This avoids a backlog of first messages in the first memory buffer queue and ensures that the buffer server can continuously obtain first messages from the message queue and write them into the first memory buffer queue. Moreover, the buffer server is not responsible for processing the first messages; the service corresponding to the target server processes them. After the buffer server sends the N first request messages to the target server, it can immediately take further first messages from the first memory buffer queue without waiting for the target server to finish processing. Therefore, the consumption efficiency of the messages in the message queue can be further improved to a certain extent.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below.
FIG. 1 is a diagram of a system architecture of the prior art;
FIG. 2 is a system architecture diagram provided in an embodiment of the present invention;
fig. 3 is a flowchart illustrating steps of a message processing method according to an embodiment of the present invention;
FIG. 4 is a flow chart illustrating steps of another message processing method provided in an embodiment of the present invention;
fig. 5 is a schematic structural diagram of a message processing apparatus according to an embodiment of the present invention;
fig. 6 is a schematic structural diagram of an electronic device provided in an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be described below with reference to the drawings in the embodiments of the present invention.
Referring to fig. 1, fig. 1 is a diagram of a system architecture in the prior art. As an example, the system architecture includes a production server, a storage server, a consumption server 1, and a consumption server 2. The production server publishes messages to the message queues of the storage server. The storage server may have at least one message queue; each message queue stores messages of one service type, and each service type corresponds to a service scenario. For example, message queue 1 stores messages about liking videos, and message queue 2 stores messages about commenting on videos. In the prior art, if consumption server 1 is configured to fetch messages from message queue 1 and consumption server 2 is configured to fetch messages from message queue 2, consumption server 1 listens for messages in message queue 1. If consumption server 1 takes a long time to process a message for some reason (for example, high CPU occupancy, high memory occupancy, and the like), messages back up in message queue 1, which affects the consumption efficiency of the messages in message queue 1. In addition, when messages back up in message queue 1, the production server needs to pause and wait, that is, the producer thread is blocked and publishing messages into message queue 1 is suspended, which also affects the consumption efficiency of the messages in the message queue.
In order to solve the above technical problems, an embodiment of the present invention provides a system architecture. Referring to fig. 2, fig. 2 is a system architecture diagram provided in an embodiment of the present invention. The system includes a production server, a buffer server, a server 1, a server 2, a service server 1, a service server 2, a service server 3, and a service server 4. Service server 1 and service server 2 form cluster 1, and service server 3 and service server 4 form cluster 2. In practical applications there may be multiple production servers and multiple buffer servers, where the production servers form a production server cluster and the buffer servers form a buffer server cluster. The production server can send a produced message of a certain service type to the message queue of the buffer server corresponding to that service type; a write thread of the buffer server writes the message in the message queue into the memory buffer queue corresponding to the service type; and when a read thread of the buffer server detects a message in the memory buffer queue, it obtains the message from the memory buffer queue and sends it to a target server, which may be server 1 or server 2.
Specifically, the buffer server receives a first message sent by the production server and stores the first message in the message queue corresponding to the service type of the first message; obtains M first messages from the message queue and writes the M first messages into a first memory buffer queue corresponding to a first target service identifier by using M first threads, where the first target service identifier is the identifier corresponding to the service type of the first message; determines a first target service address associated with the first target service identifier; and reads N first messages from the first memory buffer queue by using N second threads and sends N first request messages to the target server corresponding to the first target service address, so that the service corresponding to the target server is called N times to process the N first messages, where each first request message includes one first message and N is an integer greater than M.
It should be noted that the buffer server may store the association relationship between service identifiers and service addresses locally, or the association relationship may be stored in another database server. For example, Table 1 below shows an association relationship between service identifiers and service addresses: service identifier 1 is associated with service address 1, and service identifier 2 is associated with service address 2.
Service identifier   | Service address
Service identifier 1 | Service address 1
Service identifier 2 | Service address 2
TABLE 1
The buffer server may determine the first target service address associated with the first target service identifier according to the locally stored association relationship or according to an association relationship stored in another database server. If the service address of server 1 is service address 1, the service address of server 2 is service address 2, and the first target service identifier is service identifier 1, then the first target service address associated with the first target service identifier is service address 1, and server 1 is the target server.
The buffer server reads the N first messages in the first memory buffer queue by using the N second threads and sends the N first request messages to the target server (server 1). Correspondingly, server 1 receives the N first request messages sent by the buffer server and can call the service corresponding to server 1 to process the first messages included in the N first request messages.
It should be noted that a service may be deployed on server 1 itself, or one instance of the service may be deployed on service server 1 and another on service server 2; the service is, for example, a service that processes messages of the 'like' type. When the service is deployed on server 1, that service is the service corresponding to server 1. Alternatively, if server 1 does not have the service deployed, the services on service server 1 and service server 2 are both services corresponding to server 1, and server 1 can be regarded as a distribution server; that is, server 1 may distribute a message to service server 1 or to service server 2. When server 1 distributes the message to service server 1, the message is processed by the service deployed on service server 1; when server 1 distributes the message to service server 2, the message is processed by the service deployed on service server 2.
In summary, compared with the system architecture in the prior art, in fig. 2 provided in this embodiment a first memory buffer queue is added in the buffer server. The buffer server can obtain M first messages from the message queue and write them into the first memory buffer queue by using M first threads (a first thread being a write thread), and more threads (N second threads, a second thread being a read thread) are used to read N first messages from the first memory buffer queue. This achieves a faster read operation and avoids a backlog of first messages in the first memory buffer queue, which in turn ensures that the buffer server can continuously acquire first messages from the message queue and write them into the first memory buffer queue; consequently, a backlog of first messages in the message queue is avoided and the consumption efficiency of the first messages in the message queue is improved. In addition, the buffer server sends N first request messages to the target server corresponding to the first target service address so that the service corresponding to the target server is called N times to process the N first messages; that is, the buffer server is not responsible for processing the first messages itself but calls the service corresponding to the target server to process them. This avoids the problem in the prior art that a consumption server must both take messages out of a message queue and process them, taking out the next message only after the current one is finished, so that messages back up in the message queue when the consumption server's CPU occupancy and memory occupancy are high.
In the embodiment of the present invention, the speed of reading first messages from the first memory buffer queue is higher than the speed of writing first messages into it, which avoids a backlog of first messages in the first memory buffer queue and ensures that the buffer server can continuously obtain first messages from the message queue and write them into the first memory buffer queue. Moreover, the buffer server is not responsible for processing the first messages; the service corresponding to the target server processes them. After the buffer server sends the N first request messages to the target server, it can immediately take further first messages from the first memory buffer queue without waiting for the target server to finish processing. Therefore, the consumption efficiency of the messages in the message queue can be further improved to a certain extent.
Based on fig. 2, an embodiment of the present invention provides a message processing method. Referring to fig. 3, fig. 3 is a flowchart illustrating the steps of a message processing method provided in an embodiment of the present invention. The method may be executed by a buffer server and includes the following steps:
step 301, receiving a first message sent by a production server, and storing the first message in a message queue corresponding to a service type of the first message.
Step 302, obtaining M first messages from the message queue, and writing the M first messages into a first memory buffer queue corresponding to a first target service identifier by using M first threads, where the first target service identifier is an identifier corresponding to a service type of the first message, and M is an integer greater than or equal to 1.
The messages of each service type correspond to a service identifier. The buffer server can write the M obtained first messages into the first memory buffer queue by using M first threads at the same time. The first memory buffer queue is a memory-level buffer queue; after the first messages are written into it, more threads can be used in step 304 to read them out, that is, the number of second threads is greater than the number of first threads, so that the first messages stored in the first memory buffer queue are read out faster than they are written in.
Step 303, determining a first target service address corresponding to the first target service identifier.
Step 304, reading N first messages in the first memory buffer queue by using N second threads, and sending N first request messages to the target server corresponding to the first target service address, so as to call the service corresponding to the target server for N times to process the N first messages.
Each first request message comprises a first message, and N is an integer larger than M.
Each second thread reads one first message from the first memory buffer queue, so the N second threads can read N first messages from the first memory buffer queue simultaneously and send N first request messages to the target server simultaneously, thereby calling the service corresponding to the target server N times at the same time to process the N messages.
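The following Java sketch illustrates the core idea of steps 302 to 304 under assumed names and values (the class name MemoryBufferDemo, the stub sendRequest, and the concrete thread counts M = 2 and N = 6 are illustrative only): M first threads move messages into an in-memory buffer queue while N second threads, with N greater than M, drain it and issue the request messages.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.LinkedBlockingQueue;

// Sketch of steps 302-304: M "first threads" write into the memory buffer queue,
// N "second threads" (N > M) read from it and issue request messages, so reads
// keep up with writes and the memory buffer queue does not back up.
public class MemoryBufferDemo {
    private static final int M = 2; // number of first (write) threads, illustrative
    private static final int N = 6; // number of second (read) threads, N > M

    private final LinkedBlockingQueue<String> messageQueue = new LinkedBlockingQueue<>();
    private final LinkedBlockingQueue<String> memoryBufferQueue = new LinkedBlockingQueue<>();

    public void start() {
        ExecutorService firstThreads = Executors.newFixedThreadPool(M);
        ExecutorService secondThreads = Executors.newFixedThreadPool(N);

        for (int i = 0; i < M; i++) {
            firstThreads.submit(() -> {
                while (!Thread.currentThread().isInterrupted()) {
                    try {
                        // step 302: take a first message from the message queue
                        // and write it into the memory buffer queue
                        memoryBufferQueue.put(messageQueue.take());
                    } catch (InterruptedException e) {
                        Thread.currentThread().interrupt();
                    }
                }
            });
        }

        for (int i = 0; i < N; i++) {
            secondThreads.submit(() -> {
                while (!Thread.currentThread().isInterrupted()) {
                    try {
                        // step 304: read a first message and send a request message
                        // to the target server (stubbed out here)
                        sendRequest(memoryBufferQueue.take());
                    } catch (InterruptedException e) {
                        Thread.currentThread().interrupt();
                    }
                }
            });
        }
    }

    private void sendRequest(String firstMessage) {
        // placeholder for the HTTP/RPC call to the target service address
        System.out.println("request sent for: " + firstMessage);
    }

    public static void main(String[] args) throws InterruptedException {
        MemoryBufferDemo demo = new MemoryBufferDemo();
        demo.start();
        for (int i = 0; i < 10; i++) {
            demo.messageQueue.put("first-message-" + i); // messages arriving from the production server
        }
    }
}
```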
In the message processing method provided in this embodiment, a first message sent by a production server is received and stored in a message queue corresponding to the service type of the first message; M first messages are obtained from the message queue and written into a first memory buffer queue corresponding to a first target service identifier by using M first threads; a first target service address associated with the first target service identifier is determined; and N first messages in the first memory buffer queue are read by using N second threads and N first request messages are sent to the target server corresponding to the first target service address, so that the service corresponding to the target server is called N times to process the N first messages. Because N is greater than M, the number of second threads reading first messages from the first memory buffer queue is greater than the number of first threads writing first messages into it, so a faster read operation is achieved: first messages are read out of the first memory buffer queue faster than they are written in. This avoids a backlog of first messages in the first memory buffer queue and ensures that the buffer server can continuously obtain first messages from the message queue and write them into the first memory buffer queue. Moreover, the buffer server is not responsible for processing the first messages; the service corresponding to the target server processes them, so after the buffer server sends the N first request messages to the target server it can immediately take further first messages from the first memory buffer queue without waiting for the target server to finish processing. Therefore, the consumption efficiency of the messages in the message queue can be further improved to a certain extent.
Based on fig. 2, the present embodiment provides another message processing method, and referring to fig. 4, fig. 4 is a flowchart of steps of another message processing method provided in the embodiment of the present invention, where the method includes the following steps:
step 401, the server stores the service type, the service identifier corresponding to the service type, the buffering service address, the queue identifier and the service address corresponding to the service identifier, and the user information corresponding to the buffering service address into a database.
Each service type, the service identifier corresponding to the service type, the buffering service address, queue identifier, and service address corresponding to each service identifier, and the user information corresponding to each buffering service address may be stored in a database. For example, the service type, the service identifier corresponding to the service type, the buffering service address corresponding to the service identifier, and the user information corresponding to the buffering service address may be stored in table A of the database; the service identifier and the queue identifier corresponding to the service identifier may be stored in table B of the database; and the service identifier and the service address corresponding to the service identifier may be stored in table C of the database. Alternatively, all of the above information may be stored in one table of the database. The user information may include a user name and a password. The database may be deployed on the production server or separately on another server. Each service type corresponds to one service identifier, different service types correspond to different service identifiers, and the service types include, for example, a 'like' service type and a comment service type.
It should be noted that the server in step 401 may be the production server, that is, the database may be deployed on the production server. Alternatively, the database may be deployed separately on a database server; in that case the server in step 401 is the database server, and the production server may retrieve the above information stored in the database from the database server.
Step 402, the production server produces a first message.
Step 403, the production server determines at least one target buffering service address and target queue identifier corresponding to the target service type and user information corresponding to the target buffering service address according to the target service type of the first message.
The target buffering service address corresponding to the target service type may be one or more.
In the prior art, the address of the storage server and the message queue identifier corresponding to the message queue of the storage server need to be configured in the configuration file of the production server. The production server sends messages to the message queue corresponding to the message queue identifier in the storage server according to its local configuration file, so a configuration error is possible, and if a configuration error occurs, messages are lost once the production server goes online. In this embodiment, the message queue identifier does not need to be configured through the configuration file of the production server; instead, each buffering service address corresponding to each service type, the queue identifier corresponding to the service type, and the user information corresponding to the buffering service address are stored in the database, and the sending and receiving of messages are managed centrally, which avoids configuration errors to a certain extent.
It should be noted that the production server may obtain the data in table A and table B, and thereby determine at least one target buffering service address corresponding to the target service type according to the service type, the service identifier corresponding to the service type, and the buffering service address corresponding to the service identifier; determine the target queue identifier corresponding to the target service type according to the service identifier corresponding to the service type and the queue identifier corresponding to the service identifier; and determine the user information corresponding to the target buffering service address according to the buffering service address and the user information corresponding to the buffering service address. In this way, at least one target buffering service address and target queue identifier corresponding to the target service type, and the user information corresponding to the target buffering service address, can be obtained from the database.
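A minimal sketch of this resolution chain is given below, modelling tables A and B as in-memory maps; the class, record, field, and sample values are assumptions for illustration, since the patent does not prescribe a concrete schema.

```java
import java.util.List;
import java.util.Map;

// Sketch of step 403: resolve, for a given service type, the target buffering
// service address(es), the target queue identifier, and the user information,
// from data that mirrors tables A and B described above.
public class RoutingResolver {
    record BufferEndpoint(String bufferingServiceAddress, String userName, String password) {}

    // table A: service type -> service identifier
    private final Map<String, String> serviceTypeToId = Map.of("like", "service-id-1");
    // table A (continued): service identifier -> buffering service addresses with user info
    private final Map<String, List<BufferEndpoint>> idToEndpoints = Map.of(
            "service-id-1", List.of(new BufferEndpoint("10.0.0.11:5672", "producer", "secret")));
    // table B: service identifier -> queue identifier
    private final Map<String, String> idToQueue = Map.of("service-id-1", "queue-like");

    public void resolve(String targetServiceType) {
        String serviceId = serviceTypeToId.get(targetServiceType);
        List<BufferEndpoint> endpoints = idToEndpoints.get(serviceId); // target buffering service addresses
        String queueId = idToQueue.get(serviceId);                     // target queue identifier
        for (BufferEndpoint e : endpoints) {
            System.out.printf("send to %s, queue %s, as user %s%n",
                    e.bufferingServiceAddress(), queueId, e.userName());
        }
    }

    public static void main(String[] args) {
        new RoutingResolver().resolve("like");
    }
}
```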
Step 404, the production server establishes a communication connection with a target buffer server corresponding to the target buffer service address according to the user information.
The production server may send user information (for example, a user name and password) to the target buffer server corresponding to the target buffering service address, and the target buffer server may verify, according to the user information, whether the production server has the authority to establish a communication connection with it. If the production server has that authority, the communication connection with the production server is established. After the target buffer server has established the communication connection with the production server, it stores the first messages sent by the production server in a message queue, and when it detects messages in the message queue, the buffer server can take the messages out of the message queue.
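As a simple illustration of this permission check, a sketch is given below; the class name, user name, and password are assumed values, and a real deployment would typically rely on the authentication mechanism of the MQ product itself.

```java
import java.util.Map;

// Sketch of step 404: the target buffer server verifies the user name and password
// sent by the production server before accepting the communication connection.
public class ConnectionGate {
    private final Map<String, String> authorizedUsers = Map.of("producer", "secret");

    public boolean acceptConnection(String userName, String password) {
        boolean authorized = password != null && password.equals(authorizedUsers.get(userName));
        System.out.println(authorized ? "connection established" : "connection rejected");
        return authorized;
    }

    public static void main(String[] args) {
        new ConnectionGate().acceptConnection("producer", "secret");
    }
}
```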
It should be noted that if the target buffer server runs short of resources because of high CPU occupancy and high memory occupancy, a buffering service address corresponding to the target service type can be added in the database; that is, one service type may correspond to multiple buffering service addresses and therefore to multiple buffer servers. For example, if the buffering service addresses corresponding to the target service type include buffering service address 1 and buffering service address 2, and both are target buffering service addresses, then buffer server 1 corresponding to buffering service address 1 and buffer server 2 corresponding to buffering service address 2 can both receive first messages sent by the production server, store the received first messages in their message queues, and take messages out of their respective message queues. This improves, to a certain extent, the efficiency of taking messages out of the message queues, that is, the consumption efficiency of the messages in the message queues.
In the prior art, when a resource shortage on the consumption server causes messages to back up in the message queue, the production server has to be suspended from producing messages, or an additional consumption server has to be added to consume the messages in the message queue. In this embodiment, when the target buffer server runs short of resources, only a buffering service address corresponding to the service type needs to be added in the database; for example, if service type 1 currently corresponds to buffering service address 1, buffering service address 2 can be added so that service type 1 corresponds to both buffering service address 1 and buffering service address 2. There is no need to suspend the production of messages by the production server or to add new buffer servers.
Step 405, the production server generates a target service identifier according to the target buffer service address, the target queue identifier, and the user information.
Step 406, the production server sends the first message and the target service identifier to the target buffering server, so that the target buffering server stores the first message in a message queue corresponding to the target service identifier.
Correspondingly, the target buffer server receives the first message and stores the first message in the message queue corresponding to the target service identifier, and since the target service identifier corresponds to the service type of the first message, the first message is also stored in the message queue corresponding to the service type of the first message.
Step 407, the buffer server obtains M first messages from the message queue, and writes the M first messages into a first memory buffer queue corresponding to the first target service identifier by using M first threads.
The buffer server in this step and in the following steps can be considered the target buffer server.
Step 408, the buffering server determines a first target service address associated with the first target service identification.
The method may further include the following step:
the buffer server may periodically obtain an association relationship between the service identifier and the service address, and cache the association relationship between the service identifier and the service address in a buffer, where each service identifier is associated with a different service address. Specifically, the buffering server may periodically obtain the association relationship between the service identifier and the service address from the database, for example, periodically obtain the association relationship between the service identifier and the service address from table C of the database.
Accordingly, step 408 may be implemented by:
and determining a first target service address associated with the first target service identifier according to the association relation between the service identifier and the service address cached in the buffer area.
It should be noted that the first target service address associated with the first target service identifier may also be determined according to the service identifiers, the service addresses, and the association relationship between them stored in the database. Because the service identifiers, the service addresses, and their association relationship are cached in the buffer area, the buffer server can determine the first target service address associated with the first target service identifier from the association relationship cached in the buffer area. Since reading data from the buffer area (memory) is many times faster than reading data from the database, the first target service address can be determined more quickly.
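A sketch of such a periodically refreshed local cache is given below; the method loadFromDatabase is a stub standing in for the query against table C, and the 30-second refresh period and sample addresses are illustrative assumptions.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

// Sketch of step 408: the buffer server keeps the service-identifier -> service-address
// association in an in-memory buffer area and refreshes it periodically from the database,
// so lookups do not hit the database for every message.
public class ServiceAddressCache {
    private final Map<String, String> bufferArea = new ConcurrentHashMap<>();
    private final ScheduledExecutorService scheduler =
            Executors.newSingleThreadScheduledExecutor(r -> {
                Thread t = new Thread(r);
                t.setDaemon(true); // let this demo JVM exit; a real server keeps refreshing
                return t;
            });

    public void start() {
        bufferArea.putAll(loadFromDatabase()); // initial load
        // periodically refresh the cached association relationship (illustrative period)
        scheduler.scheduleAtFixedRate(() -> bufferArea.putAll(loadFromDatabase()), 30, 30, TimeUnit.SECONDS);
    }

    public String targetServiceAddress(String targetServiceId) {
        return bufferArea.get(targetServiceId); // fast in-memory lookup
    }

    private Map<String, String> loadFromDatabase() {
        // stub for reading table C: service identifier -> service address
        return Map.of("service-id-1", "http://server-1/handle",
                      "service-id-2", "http://server-2/handle");
    }

    public static void main(String[] args) {
        ServiceAddressCache cache = new ServiceAddressCache();
        cache.start();
        System.out.println(cache.targetServiceAddress("service-id-1"));
    }
}
```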
Step 409, reading N first messages in the first memory buffer queue by using N second threads, and sending N first request messages to the target server corresponding to the first target service address, so as to call the service corresponding to the target server for N times to process the N first messages.
N first request messages are sent to the target server corresponding to the first target service address so that the service corresponding to the target server is called N times to process the N first messages. When the service corresponding to the target server fails to process the first message included in a first request message, it replies to the buffer server with a failure response message for that first request message; when the service succeeds in processing the first message included in a first request message, it replies to the buffer server with a success response message for that first request message.
It should be noted that after the target server receives the N first request messages (for example, the target server is server 1 shown in fig. 2), server 1 may determine, according to a load-balancing algorithm, whether the N first messages are processed by the service deployed on service server 1 or by the service deployed on service server 2, thereby achieving load balancing. By contrast, in the prior art a production server publishes produced messages to a message queue in the production server cluster, and a consumption server fetches messages from the message queue and processes them. A single production server cluster is usually provided with multiple message queues for different service scenarios (for example, 'like' messages for videos correspond to one message queue and comment messages for videos correspond to another), and a consumption server is fixed to consume messages from one or several particular message queues. If the number of messages in a message queue corresponding to a certain consumption server increases while the consumption service deployed on that consumption server crashes or is under too much processing pressure, the consumption service takes a long time to process each message and the messages back up in that message queue, while other consumption servers that could consume messages receive none; this leads to unbalanced load across the consumption servers. In this embodiment, the target server decides according to a load-balancing algorithm which service server's service is used to process the N first messages, so load balancing is achieved by the target server, and the prior-art problems of message backlog and load imbalance, caused by a consumption server having to take messages out of the message queue it listens to and process them itself, are avoided.
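The patent does not fix a particular load-balancing algorithm; the sketch below shows one common choice, simple round-robin between the two service servers of cluster 1 in fig. 2, under assumed class and server names.

```java
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;

// Sketch of the dispatch step on the target server (server 1 in FIG. 2): each
// incoming first message is handed to one of the business services, chosen here
// by round-robin so the load stays balanced across the service servers.
public class RoundRobinDispatcher {
    private final List<String> serviceServers = List.of("service-server-1", "service-server-2");
    private final AtomicInteger counter = new AtomicInteger();

    public String dispatch(String firstMessage) {
        String target = serviceServers.get(Math.floorMod(counter.getAndIncrement(), serviceServers.size()));
        System.out.println(firstMessage + " -> " + target);
        return target; // the chosen service server processes the message
    }

    public static void main(String[] args) {
        RoundRobinDispatcher dispatcher = new RoundRobinDispatcher();
        for (int i = 0; i < 4; i++) {
            dispatcher.dispatch("first-message-" + i);
        }
    }
}
```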
Specifically, after sending N first request messages to the target server corresponding to the target service address, the method may further include the following steps:
judging whether a response message for a target request message among the N first request messages is received within a preset time;
in the case that a response message is received within the preset time, judging whether the response message is a failure response message;
and in the case that the response message is a failure response message, periodically retransmitting the target request message to the target server.
In this embodiment of the present invention, periodic retransmission of the target request message can be triggered directly when the received response message is a failure response message.
Optionally, the method may further include the following steps:
and under the condition that the response message is not received within the preset time, periodically retransmitting the target request message to the target server.
In this embodiment of the present invention, periodic retransmission of the target request message can be triggered directly when no response message is received within the preset time.
Optionally, the method may further include the following steps:
under the condition that the retransmission times of the target request message is larger than a first preset threshold value, recording a first message in the target request message and a message identifier of the first message in the target request message in a log;
and acquiring at least one second message from the log, and periodically retransmitting the second message to the target server, wherein the second message is the first message recorded in the log.
When the number of retransmissions of the target request message is greater than a first preset threshold (for example, the first preset threshold is equal to 3), the first message in the target request message and the message identifier of that first message are recorded in a log; in a subsequent step, the message can be obtained from the log and retransmitted again.
Optionally, the method may further include the following steps:
and outputting alarm information under the condition that the retransmission times of the second message is greater than or equal to a second preset threshold value.
The second preset threshold may be equal to or different from the first preset threshold.
It should be noted that by retransmitting the messages recorded in the log, the first messages in the log can be reprocessed. Each time a message is reprocessed, its retry count is increased by one (the initial retry count is 0), and when the retry count is greater than or equal to the preset threshold, alarm information is output, so that the cause of the processing failure can be investigated manually according to the alarm information. For example, 3 second messages in the second memory buffer queue are read: message 1, message 2, and message 3, where message 1 corresponds to second request message 1, message 2 corresponds to second request message 2, and message 3 corresponds to second request message 3. If a success response message for second request message 3 is not received within the second preset time after the first retry, 1 is added to the retry count of message 3 corresponding to second request message 3; since the initial retry count of message 3 is 0, the retry count becomes 1. Message 3 then continues to be recorded in the second log and taken out for processing; if the retry fails again, the retry count accumulates to 2. If the preset threshold is 3, another retry can be made, and if that retry also fails, alarm information can be output so that the cause of the failure to process message 3 can be analyzed manually.
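A compact sketch of this retry bookkeeping is given below; the threshold value, class name, and simulated failures are illustrative assumptions.

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of the retry bookkeeping described above: each failed message has its
// retry count incremented; once the count reaches the preset threshold, an
// alarm is raised so the failure can be investigated manually.
public class RetryTracker {
    private static final int PRESET_THRESHOLD = 3;                    // illustrative value
    private final Map<String, Integer> retryCounts = new HashMap<>(); // message id -> retries, initial value 0

    public void onProcessingFailed(String messageId) {
        int retries = retryCounts.merge(messageId, 1, Integer::sum);
        if (retries >= PRESET_THRESHOLD) {
            System.out.println("ALARM: " + messageId + " failed " + retries + " times, needs human intervention");
        } else {
            System.out.println("record " + messageId + " in the log and retry later (attempt " + retries + ")");
        }
    }

    public static void main(String[] args) {
        RetryTracker tracker = new RetryTracker();
        for (int attempt = 0; attempt < 4; attempt++) {
            tracker.onProcessingFailed("message-3"); // simulate repeated failures of message 3
        }
    }
}
```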
Optionally, the method may further include the following steps:
under the condition that no response message is received within preset time or the received response message is a failure response message, stopping calling of a service corresponding to the target request message, and recording a first message in the target request message and a message identifier of the first message in the target request message in a log;
acquiring at least one third message from the log, and periodically retransmitting the third message to the target server, wherein the third message is the first message recorded in the log;
and under the condition that the retransmission times of the third message is greater than or equal to a third preset threshold value, alarm information is output.
In summary, when the response message is a success response message, the first message in the target request message and the message identifier of that first message may be recorded in a log dedicated to recording successful messages, for example a first log; when the response message is a failure response message, the first message in the target request message and its message identifier are recorded in a log dedicated to recording failed messages, for example a second log.
For example, the N first request messages include first request message 1 and first request message 2; first request message 1 includes first message 1, and first request message 2 includes first message 2. For first request message 1, when the service corresponding to the target server is called once to process first message 1: if response message 1 corresponding to first request message 1 is received within the second preset time and response message 1 is a success response message, first message 1 and the identifier of first message 1 are recorded in the first log; when response message 1 is not a success response message (that is, it is a failure response message), first message 1 and the identifier of first message 1 are recorded in the second log.
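A sketch of this split between a success log (first log) and a failure log (second log) is given below; the file names, method names, and sample messages are assumptions for illustration.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

// Sketch of the logging rule above: success responses go to a "first log",
// failure responses (or no response) go to a "second log" kept on the buffer server.
public class ResponseLogger {
    private final Path firstLog = Path.of("first-log.txt");   // successfully processed messages
    private final Path secondLog = Path.of("second-log.txt"); // failed or unanswered messages

    public void record(String messageId, String messageBody, boolean success) throws IOException {
        Path target = success ? firstLog : secondLog;
        Files.writeString(target, messageId + "\t" + messageBody + System.lineSeparator(),
                StandardOpenOption.CREATE, StandardOpenOption.APPEND);
    }

    public static void main(String[] args) throws IOException {
        ResponseLogger logger = new ResponseLogger();
        logger.record("first-message-1", "like video 42", false); // failure response -> second log
        logger.record("first-message-2", "like video 43", true);  // success response -> first log
    }
}
```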
It should be noted that in the prior art, the address of the storage server and the message queue identifier corresponding to the message queue of the storage server need to be configured in the local configuration file of the consumption server; the consumption server takes messages out of the message queue corresponding to that message queue identifier according to its local configuration file and processes them. When a consumption server fails to process a message, it records the failed message in its own abnormal-message log. When there are many consumption servers, each one maintains its own abnormal-message log, so the maintenance cost is high and it is inconvenient for each business team to obtain the abnormal-message logs.
In this embodiment, the buffer server reads the N first messages in the first memory buffer queue and sends N first request messages to the target server corresponding to the first target service address, so that the service corresponding to the target server is called N times to process the N first messages; and when the response message that the service corresponding to the target server returns to the buffer server for a target request message is a failure response message, the buffer server records the first message in that target request message and its message identifier in the second log. That is, first messages whose processing failed are recorded by the buffer server in its local second log and managed by the buffer server in a unified way, rather than being maintained by the service corresponding to the target server, which reduces the maintenance cost to a certain extent. It is also convenient for the person responsible in each business team to obtain the failed messages from the second log of the buffer server. Meanwhile, successfully processed first messages are also recorded, that is, the first message in each successfully processed target request message is recorded in the first log, which makes the successfully processed first messages easy to query.
It should be noted that, when the response message is not received within the preset time or the received response message is a failure response message, the call of the service corresponding to the target request message is stopped, and the first message in the target request message and the message identifier of the first message in the target request message are recorded in a log, for example, a second log.
For example, for first request message 1, when the service corresponding to the target server is called once to process first message 1, if the response message corresponding to first request message 1 is not received within the second preset time, the invocation of the service corresponding to the target request message (here, first request message 1) is stopped, and first message 1 and its identifier are recorded in the second log. For first request message 2, when the service corresponding to the target server is called once to process first message 2, if response message 2 corresponding to first request message 2 is received within the preset time and response message 2 is a success response message, first message 2 and its identifier are recorded in the first log.
In the prior art, when a consumption server has taken out a message and is processing it, if the consumption server is restarted, the consumption service deployed on it is killed and a new version is reloaded, and the message being processed is lost. Even if the failed message were recorded in an abnormal-message log, it would not be easy to find, because the point in time at which the consumption server was restarted is not recorded.
In this embodiment, when the service corresponding to the target server (for example, the service on service server 1) is restarted and the buffer server does not receive the response message for a certain first request message within the preset time, the buffer server records the first message in that first request message in the second log, so the unprocessed message can easily be found through the second log. Moreover, by stopping the invocation of the service corresponding to the target request message, the buffer server avoids waiting a long time for a response when the service takes a long time to process the first message because of a resource shortage, or when the service is in a falsely dead state so that the response message for the target request message arrives only after a long delay. Therefore, when no response message is received within the preset time or a failure response message is received, the invocation of the service corresponding to the target request message is stopped, and the first message in the target request message, together with its message identifier, is recorded in a log. This prevents the buffer server from waiting too long for response messages, which would otherwise affect the efficiency with which it continuously takes first messages out of the first memory buffer queue.
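One way to realize "stop the invocation when no response arrives within the preset time" is a future with a timeout, as sketched below; callTargetService is a stub standing in for the call to the target service, and the 2-second preset time and 5-second simulated delay are illustrative assumptions.

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

// Sketch of the timeout behaviour described above: the buffer server waits at most
// a preset time for the target service's response; on timeout it abandons the call
// and records the first message in the second log instead of blocking its read threads.
public class TimedInvocation {
    public static void main(String[] args) throws InterruptedException, ExecutionException {
        CompletableFuture<String> response = CompletableFuture.supplyAsync(TimedInvocation::callTargetService);
        try {
            System.out.println("success response: " + response.get(2, TimeUnit.SECONDS)); // preset time: 2 s
        } catch (TimeoutException e) {
            response.cancel(true); // stop waiting on this invocation
            System.out.println("no response within the preset time, record the first message in the second log");
        }
    }

    private static String callTargetService() {
        try {
            Thread.sleep(5_000); // simulate a slow or restarting target service
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return "ok";
    }
}
```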
Referring to fig. 5, fig. 5 is a schematic structural diagram of a message processing apparatus provided in an embodiment of the present invention. The apparatus 500 is disposed in a buffer server and includes:
a receiving module 510, configured to receive a first message sent by a production server, and store the first message in a message queue corresponding to a service type of the first message;
an obtaining module 520, configured to obtain M first messages from the message queue, and write the M first messages into a first memory buffer queue corresponding to a first target service identifier by using M first threads, where the first target service identifier is an identifier corresponding to a service type of the first message, and M is an integer greater than or equal to 1;
a determining module 530, configured to determine a first target service address associated with the first target service identifier;
a sending module 540, configured to read N first messages in the first memory buffer queue by using N second threads, and send N first request messages to a target server corresponding to the first target service address, so as to call a service corresponding to the target server for N times to process the N first messages, where each first request message includes one first message, and N is an integer greater than M.
The message processing apparatus provided in the embodiment of the present invention receives a first message sent by a production server, stores the first message in a message queue corresponding to a service type of the first message, obtains M first messages from the message queue, writes the M first messages into a first memory buffer queue corresponding to a first target service identifier by using M first threads, determines a first target service address associated with the first target service identifier, reads N first messages in the first memory buffer queue by using N second threads, and sends N first request messages to a target server corresponding to the first target service address to invoke a service corresponding to the target server N times to process the N first messages. Since N is greater than M, that is, the number of second threads reading the first message from the first memory buffer queue is greater than the number of first threads writing the first message into the first memory buffer queue, so that a faster reading operation can be achieved, that is, the speed of reading the first message from the first memory buffer queue is greater than the speed of writing the first message into the first memory buffer queue, thereby avoiding backlog of the first message in the first memory buffer queue, and further ensuring that the buffer server can continuously obtain the first message from the message queue and write the obtained first message into the first memory buffer queue. And the buffer server is not responsible for processing the first message, but the service corresponding to the target server processes the first message, that is, after the buffer server sends N first request messages to the target server, the buffer server may then take the first message from the first memory buffer queue without waiting for whether the target server has processed the first message. Therefore, the consumption efficiency of the messages in the message queue can be further improved to a certain extent.
Optionally, the apparatus further includes:
the cache module is used for periodically acquiring the association relationship between the service identifier and the service address and caching the association relationship between the service identifier and the service address in a buffer area, wherein each service identifier is associated with a different service address;
the determining module 530 is specifically configured to determine, according to the association relationship between the service identifier and the service address cached in the buffer, a first target service address associated with the first target service identifier.
Optionally, the apparatus further includes:
a first judging module, configured to judge whether a response message of a target request message among the N first request messages is received within a preset time;
a second judging module, configured to judge, when the response message is received within the preset time, whether the response message is a failure response message;
a first retransmission module, configured to periodically retransmit the target request message to the target server when the response message is a failure response message.
Optionally, the first retransmission module is further configured to periodically retransmit the target request message to the target server when the response message is not received within the preset time.
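A minimal sketch of this response check and periodic retransmission, assuming a hypothetical post_request() that either returns a response object with an ok flag or raises TimeoutError when nothing arrives within the preset time; the timeout, retry interval, and first preset threshold are placeholder values.

```python
import time

def post_request(request_message, timeout):
    """Hypothetical transport call: returns an object with an `ok` flag for a
    success/failure response, or raises TimeoutError when no response arrives
    within the preset time."""
    raise TimeoutError

def send_with_retry(request_message, preset_timeout=5.0,
                    retry_interval=10.0, first_threshold=3):
    retransmissions = 0
    while True:
        try:
            response = post_request(request_message, timeout=preset_timeout)
            if response.ok:                    # success response: done
                return True
            # failure response received within the preset time: fall through
        except TimeoutError:
            pass                               # no response within the preset time
        retransmissions += 1
        if retransmissions > first_threshold:  # hand over to the log-based path
            return False
        time.sleep(retry_interval)             # periodic retransmission
```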
Optionally, the apparatus further includes:
a first recording module, configured to record, in a log, the first message in the target request message and the message identifier of that first message when the number of retransmissions of the target request message is greater than a first preset threshold;
a second retransmission module, configured to acquire at least one second message from the log and periodically retransmit the second message to the target server, where the second message is a first message recorded in the log.
Optionally, the second retransmission module is further configured to output alarm information when the number of retransmissions of the second message is greater than or equal to a second preset threshold.
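One way this log-based fallback could look: messages whose direct retransmission exceeded the first preset threshold are appended to a log, replayed periodically, and flagged with an alarm once the second preset threshold is reached. The log file name, the threshold value, and the record_failure/alarm helpers are assumptions for illustration only.

```python
import json
import time

LOG_PATH = "failed_messages.log"            # assumed log location
SECOND_THRESHOLD = 5                        # assumed second preset threshold

def record_failure(message_id, payload):
    # First recording step: append the first message and its identifier to the log.
    with open(LOG_PATH, "a", encoding="utf-8") as f:
        f.write(json.dumps({"id": message_id, "payload": payload}) + "\n")

def alarm(message_id):
    # Placeholder for outputting alarm information (e.g. a monitoring hook).
    print(f"ALARM: message {message_id} reached the retransmission threshold")

def replay_logged_messages(resend, interval=30.0, max_rounds=10):
    # Second retransmission step: load the logged messages, then periodically
    # re-send them; raise an alarm once a message has been retried
    # SECOND_THRESHOLD times without success.
    with open(LOG_PATH, "r", encoding="utf-8") as f:
        entries = [dict(json.loads(line), retries=0) for line in f if line.strip()]
    for _ in range(max_rounds):
        still_failing = []
        for entry in entries:
            if resend(entry["payload"]):
                continue                     # success: drop from the retry set
            entry["retries"] += 1
            if entry["retries"] >= SECOND_THRESHOLD:
                alarm(entry["id"])
            still_failing.append(entry)
        entries = still_failing
        if not entries:
            break
        time.sleep(interval)
```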
Optionally, the apparatus further includes:
a second recording module, configured to stop invoking the service corresponding to the target request message when the response message is not received within the preset time or the received response message is a failure response message, and to record, in a log, the first message in the target request message and the message identifier of that first message;
a third retransmission module, configured to acquire at least one third message from the log and periodically retransmit the third message to the target server, where the third message is a first message recorded in the log, and to output alarm information when the number of retransmissions of the third message is greater than or equal to a third preset threshold.
Fig. 6 is a schematic structural diagram of an electronic device provided in an embodiment of the present invention. The electronic device includes a processor 601, a communication interface 602, a memory 603, and a communication bus 604, where the processor 601, the communication interface 602, and the memory 603 communicate with one another through the communication bus 604;
the memory 603 is configured to store a computer program;
the processor 601 is configured to implement the following steps when executing the program stored in the memory 603:
receiving a first message sent by a production server, and storing the first message in a message queue corresponding to the service type of the first message;
acquiring M first messages from the message queue, and writing the M first messages into a first memory buffer queue corresponding to a first target service identifier by adopting M first threads, wherein the first target service identifier is an identifier corresponding to the service type of the first message, and M is an integer greater than or equal to 1;
determining a first target service address associated with the first target service identifier;
and reading N first messages in the first memory buffer queue by adopting N second threads, and sending N first request messages to a target server corresponding to the first target service address so as to call a service corresponding to the target server for N times to process the N first messages, wherein each first request message comprises one first message, and N is an integer larger than M.
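The receiving step above amounts to routing each incoming first message to the message queue that matches its service type. The sketch below illustrates that routing with in-process queues; the message shape and the service-type names are invented for the example, and a real deployment might keep the per-type queues in external message middleware rather than in process.

```python
import queue

message_queues = {}                         # service type -> message queue

def receive(first_message):
    # Receiving step: store the message in the message queue that corresponds
    # to its service type, creating that queue on first use.
    service_type = first_message["service_type"]
    message_queues.setdefault(service_type, queue.Queue()).put(first_message)

# Example: messages of different service types land in different queues.
receive({"service_type": "comment", "body": "payload-1"})
receive({"service_type": "danmaku", "body": "payload-2"})
print({k: q.qsize() for k, q in message_queues.items()})
```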
The communication bus of the above electronic device may be a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like. The communication bus may be divided into an address bus, a data bus, a control bus, and so on. For ease of illustration, only one thick line is shown in the figure, but this does not mean that there is only one bus or one type of bus.
The communication interface is used for communication between the above electronic device and other devices.
The memory may include a random access memory (RAM) or a non-volatile memory, such as at least one disk memory. Optionally, the memory may also be at least one storage device located remotely from the processor.
The processor may be a general-purpose processor, including a central processing unit (CPU), a network processor (NP), and the like; it may also be a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component.
In yet another embodiment of the present invention, a computer-readable storage medium is further provided, in which instructions are stored; when the instructions are run on a computer, they cause the computer to execute the message processing method described in any of the above embodiments.
In yet another embodiment, the present invention further provides a computer program product containing instructions which, when run on a computer, cause the computer to perform the message processing method described in any of the above embodiments.
In the above embodiments, the implementation may be realized wholly or partially by software, hardware, firmware, or any combination thereof. When implemented in software, it may be realized wholly or partially in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer instructions are loaded and executed on a computer, the processes or functions described in the embodiments of the present invention are produced in whole or in part. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable device. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another; for example, they may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center by wired means (e.g., coaxial cable, optical fiber, digital subscriber line (DSL)) or wireless means (e.g., infrared, radio, microwave). The computer-readable storage medium may be any available medium accessible to a computer, or a data storage device such as a server or data center that integrates one or more available media. The available medium may be a magnetic medium (e.g., a floppy disk, hard disk, or magnetic tape), an optical medium (e.g., a DVD), or a semiconductor medium (e.g., a solid state disk (SSD)), among others.
It is noted that, herein, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other identical elements in a process, method, article, or apparatus that comprises the element.
All the embodiments in the present specification are described in a related manner, and the same and similar parts among the embodiments may be referred to each other, and each embodiment focuses on the differences from the other embodiments. In particular, for the system embodiment, since it is substantially similar to the method embodiment, the description is simple, and for the relevant points, reference may be made to the partial description of the method embodiment.
The above description is only for the preferred embodiment of the present invention, and is not intended to limit the scope of the present invention. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention shall fall within the protection scope of the present invention.

Claims (10)

1. A message processing method, implemented in a caching server, comprising:
receiving a first message sent by a production server, and storing the first message in a message queue corresponding to the service type of the first message;
acquiring M first messages from the message queue, and writing the M first messages into a first memory buffer queue corresponding to a first target service identifier by adopting M first threads, wherein the first target service identifier is an identifier corresponding to the service type of the first message, and M is an integer greater than or equal to 1;
determining a first target service address associated with the first target service identifier;
and reading N first messages in the first memory buffer queue by adopting N second threads, and sending N first request messages to a target server corresponding to the first target service address so as to call a service corresponding to the target server for N times to process the N first messages, wherein each first request message comprises one first message, and N is an integer larger than M.
2. The method of claim 1, further comprising:
periodically acquiring the association relationship between service identifiers and service addresses, and caching the association relationship between the service identifiers and the service addresses in a buffer area, wherein each service identifier is associated with a different service address;
the determining a first target service address associated with the first target service identifier includes:
and determining a first target service address associated with the first target service identifier according to the association relationship between the service identifier and the service address cached in the buffer area.
3. The method of claim 1, further comprising, after sending the N first request messages to the target server corresponding to the first target service address:
judging whether a response message of a target request message among the N first request messages is received within a preset time;
under the condition that the response message is received within the preset time, judging whether the response message is a failure response message or not;
and under the condition that the response message is a failure response message, periodically retransmitting the target request message to the target server.
4. The method of claim 3, further comprising:
and under the condition that the response message is not received within the preset time, periodically retransmitting the target request message to the target server.
5. The method of claim 3 or 4, further comprising:
when the number of retransmissions of the target request message is greater than a first preset threshold, recording the first message in the target request message and the message identifier of the first message in the target request message in a log;
and acquiring at least one second message from the log, and periodically retransmitting the second message to the target server, wherein the second message is the first message recorded in the log.
6. The method of claim 5, further comprising:
and when the number of retransmissions of the second message is greater than or equal to a second preset threshold, outputting alarm information.
7. The method of claim 3, further comprising:
when the response message is not received within the preset time or the received response message is a failure response message, stopping invoking the service corresponding to the target request message, and recording the first message in the target request message and the message identifier of the first message in the target request message in a log;
acquiring at least one third message from the log, and periodically retransmitting the third message to the target server, wherein the third message is the first message recorded in the log;
and when the number of retransmissions of the third message is greater than or equal to a third preset threshold, outputting alarm information.
8. A message processing apparatus provided in a buffering server, comprising:
the receiving module is used for receiving a first message sent by a production server and storing the first message in a message queue corresponding to the service type of the first message;
an obtaining module, configured to obtain M first messages from the message queue, and write the M first messages into a first memory buffer queue corresponding to a first target service identifier by using M first threads, where the first target service identifier is an identifier corresponding to a service type of the first message, and M is an integer greater than or equal to 1;
a determining module, configured to determine a first target service address associated with the first target service identifier;
a sending module, configured to read N first messages in the first memory buffer queue by using N second threads, and send N first request messages to a target server corresponding to the first target service address, so as to call a service corresponding to the target server for N times to process the N first messages, where each first request message includes one first message, and N is an integer greater than M.
9. An electronic device, characterized by comprising a processor, a communication interface, a memory and a communication bus, wherein the processor, the communication interface, and the memory communicate with one another through the communication bus;
a memory for storing a computer program;
a processor for implementing the method steps of any of claims 1 to 7 when executing a program stored in the memory.
10. A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out the method according to any one of claims 1-7.
CN202010246268.9A 2020-03-31 2020-03-31 Message processing method and device, electronic equipment and readable storage medium Active CN111585867B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010246268.9A CN111585867B (en) 2020-03-31 2020-03-31 Message processing method and device, electronic equipment and readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010246268.9A CN111585867B (en) 2020-03-31 2020-03-31 Message processing method and device, electronic equipment and readable storage medium

Publications (2)

Publication Number Publication Date
CN111585867A CN111585867A (en) 2020-08-25
CN111585867B true CN111585867B (en) 2022-04-19

Family

ID=72124257

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010246268.9A Active CN111585867B (en) 2020-03-31 2020-03-31 Message processing method and device, electronic equipment and readable storage medium

Country Status (1)

Country Link
CN (1) CN111585867B (en)

Families Citing this family (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112149995A (en) * 2020-09-22 2020-12-29 京东数字科技控股股份有限公司 Task checking method and device, electronic equipment and storage medium
CN112256451A (en) * 2020-10-19 2021-01-22 北京达佳互联信息技术有限公司 Timing service message generation method and device, electronic equipment and storage medium
CN112798090B (en) * 2020-12-29 2022-08-05 广东省科学院智能制造研究所 Method and device for continuously and stably weighing materials
CN112346892A (en) * 2021-01-06 2021-02-09 全时云商务服务股份有限公司 MQ load balancing method, device, equipment and storage medium
CN113515391A (en) * 2021-05-14 2021-10-19 北京字节跳动网络技术有限公司 Message processing method and device, electronic equipment and computer readable storage medium
CN113342764A (en) * 2021-06-12 2021-09-03 四川虹美智能科技有限公司 Data synchronization method and device among different cloud servers
CN113467969B (en) * 2021-06-22 2024-01-23 上海星融汽车科技有限公司 Method for processing message accumulation
CN115914346A (en) * 2021-08-09 2023-04-04 中移物联网有限公司 Internet of things message processing method and device, electronic equipment and storage medium
CN114463930B (en) * 2021-12-27 2024-04-16 北京中交兴路信息科技有限公司 Alarm event processing method and device, electronic equipment and medium
CN115002219B (en) * 2022-05-30 2023-07-25 广州市百果园网络科技有限公司 Service calling method, device, equipment, system, storage medium and product
CN116126563A (en) * 2023-02-20 2023-05-16 北京神州云合数据科技发展有限公司 Message processing method, device, equipment and medium based on event bus

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107656825A (en) * 2017-09-01 2018-02-02 上海艾融软件股份有限公司 Message treatment method, apparatus and system
CN108712501A (en) * 2018-05-28 2018-10-26 腾讯科技(北京)有限公司 Sending method, device, computing device and the storage medium of information
CN108874562A (en) * 2018-06-21 2018-11-23 北京顺丰同城科技有限公司 Distributed high concurrent message queue supplying system
CN109495308A (en) * 2018-11-27 2019-03-19 中国电子科技集团公司第二十八研究所 A kind of automation operational system based on management information system
CN110535787A (en) * 2019-07-25 2019-12-03 北京奇艺世纪科技有限公司 Information consumption method, apparatus and readable storage medium storing program for executing

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CA2869233C (en) * 2013-11-01 2022-07-19 Blair LIVINGSTON System and method for distribution and consumption of content
CN110321273B (en) * 2019-07-09 2023-10-03 政采云有限公司 Service statistics method and device
CN110505162B (en) * 2019-08-08 2022-07-26 腾讯科技(深圳)有限公司 Message transmission method and device and electronic equipment

Also Published As

Publication number Publication date
CN111585867A (en) 2020-08-25

Similar Documents

Publication Publication Date Title
CN111585867B (en) Message processing method and device, electronic equipment and readable storage medium
WO2020181810A1 (en) Data processing method and apparatus applied to multi-level caching in cluster
KR102167613B1 (en) Message push method and device
CN110727560A (en) Cloud service alarm method and device
CN110851290A (en) Data synchronization method and device, electronic equipment and storage medium
CN111147310A (en) Log tracking processing method, device, server and medium
CN110430070B (en) Service state analysis method, device, server, data analysis equipment and medium
CN111414263A (en) Information processing method, device, server and storage medium
CN110311975B (en) Data request processing method and device
CN111782431A (en) Exception processing method, exception processing device, terminal and storage medium
CN111382206A (en) Data storage method and device
CN110955581A (en) Online software abnormity warning method and device, electronic equipment and storage medium
CN114218046A (en) Business monitoring method, medium, electronic device and readable storage medium
CN112653736B (en) Parallel source returning method and device and electronic equipment
CN112865927B (en) Message delivery verification method, device, computer equipment and storage medium
CN108390770B (en) Information generation method and device and server
CN116155539A (en) Automatic penetration test method, system, equipment and storage medium based on information flow asynchronous processing algorithm
CN112671590B (en) Data transmission method and device, electronic equipment and computer storage medium
CN114510398A (en) Anomaly monitoring method, apparatus, device, system and medium
CN114090293A (en) Service providing method and electronic equipment
CN113220342A (en) Centralized configuration method and device, electronic equipment and storage medium
CN110113187B (en) Configuration updating method and device, configuration server and configuration system
CN111291127A (en) Data synchronization method, device, server and storage medium
CN112463514A (en) Monitoring method and device for distributed cache cluster
CN111163088B (en) Message processing method, system and device and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant