CN114610504A - Message processing method and device, electronic equipment and storage medium


Info

Publication number
CN114610504A
Authority
CN
China
Prior art keywords
message
queue
messages
offset
processing
Prior art date
Legal status
Pending
Application number
CN202011450058.8A
Other languages
Chinese (zh)
Inventor
赵冠博
Current Assignee
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd
Priority to CN202011450058.8A
Publication of CN114610504A

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46 Multiprogramming arrangements
    • G06F 9/54 Interprogram communication
    • G06F 9/546 Message passing systems or structures, e.g. queues

Abstract

The application provides a message processing method and apparatus, an electronic device, and a storage medium, relating to the field of computer technology and intended to guarantee both the ordering and the concurrency of message processing. The method comprises the following steps: continuously acquiring messages generated by a message source from at least one message queue, and, each time messages are acquired, storing the acquired messages into at least one target queue, wherein messages carrying the same identifier are stored in the same target queue, and the messages with the same identifier in one target queue are stored in the order in which those messages are arranged in the corresponding message queue; for each target queue of the at least one target queue, processing the messages in that queue with a dedicated processing thread in the order in which they were stored, to obtain a message processing record corresponding to each message; and obtaining a message processing result based on the obtained message processing records.

Description

Message processing method and device, electronic equipment and storage medium
Technical Field
The present application relates to the field of computer technologies, and in particular, to a message processing method and apparatus, an electronic device, and a storage medium.
Background
With the development of Internet technology, the number of messages generated by a message source grows exponentially, and a message processing end needs to process a large number of messages generated by the message source. At present, when the message processing end acquires a large number of messages generated by a message source, it generally stores them into a plurality of message queues and then, when processing, pulls a batch of messages from the message queues for processing.
The pulled batch is processed by a plurality of threads simultaneously, and the next batch is processed only after the current batch is finished. However, some messages of the same type generated by the message source may be ordered, for example messages of the same account or of the same device. When multiple threads process them simultaneously, it cannot be guaranteed that ordered messages are processed in order, and if one thread is stuck the other threads must wait, so concurrency is poor.
Disclosure of Invention
The embodiments of the present application provide a message processing method and apparatus, an electronic device, and a storage medium, which are used to guarantee the ordering and concurrency of message processing.
In a first aspect, an embodiment of the present application provides a message processing method, including:
continuously acquiring messages generated by a message source from at least one message queue and, each time messages are acquired, storing the acquired messages into at least one target queue; wherein messages with the same identifier are stored in the same target queue, and the messages with the same identifier in one target queue are stored in the order in which those messages are arranged in the corresponding message queue;
for each target queue of the at least one target queue, processing the messages in that target queue with a processing thread in the order in which the messages were stored, to obtain a message processing record corresponding to each message;
and obtaining a message processing result based on the obtained message processing records.
In a second aspect, an embodiment of the present application provides a message processing apparatus, including:
a storage module, configured to continuously acquire messages generated by a message source from at least one message queue and, each time messages are acquired, store the acquired messages into at least one target queue; wherein messages with the same identifier are stored in the same target queue, and the messages with the same identifier in one target queue are stored in the order in which those messages are arranged in the corresponding message queue;
a processing module, configured to process, for each target queue of the at least one target queue, the messages in that target queue with a processing thread in the order in which the messages were stored, to obtain the message processing record corresponding to each message;
and an obtaining module, configured to obtain a message processing result based on the obtained message processing records.
In a possible embodiment, each message obtained from at least one message queue carries a message queue identifier, and the apparatus further includes:
a determining module, configured to determine, for the message processing records corresponding to the currently processed messages, the message processing records of the messages from each corresponding message queue according to the message queue identifier carried by each processed message;
and a sorting module, configured to arrange the message processing records of the messages from a corresponding message queue according to the order of those messages in that message queue, to obtain the arranged message processing records corresponding to that message queue.
In a possible embodiment, the message processing record includes the offset of the corresponding message in the corresponding message queue, and the offsets of the messages obtained from the same message queue increase monotonically; the apparatus further includes:
a determining module, configured to determine, for a corresponding message queue, an offset sequence according to the offsets in the arranged message processing records corresponding to that message queue;
and a recording module, configured to determine the maximum offset in the first consecutive offset subsequence of the offset sequence, and record that the message corresponding to that maximum offset has been processed in the corresponding message queue.
In a possible embodiment, the apparatus further comprises:
and a deleting module, configured to delete the message processing records corresponding to the offsets in the first consecutive offset subsequence other than the maximum offset.
In a possible embodiment, the apparatus further comprises:
an alarm module, configured to raise an alarm if, within a set time, the maximum offset in the offset sequence changes while the maximum offset in the first consecutive offset subsequence does not change; or,
raise an alarm if the maximum offset in the offset sequence increases by a set value while the maximum offset in the first consecutive offset subsequence does not change; or,
raise an alarm if the maximum offset in the first consecutive offset subsequence does not change and the difference between the maximum offset in the offset sequence and the maximum offset in the first consecutive offset subsequence reaches a set threshold.
In a possible embodiment, the saving module is further configured to:
respectively carrying out hash operation on the identification of each message to obtain a corresponding hash value;
and respectively storing the messages into the target queues associated with the corresponding hash values.
In a third aspect, an embodiment of the present application provides a computer device, which includes a processor and a memory, where the memory stores program code, and when the program code is executed by the processor, the processor is caused to execute the steps of the method of the first aspect.
In a fourth aspect, embodiments of the present application provide a computer storage medium storing computer instructions that, when executed on a computer, cause the computer to perform the steps of the method of the first aspect.
Due to the adoption of the technical scheme, the embodiment of the application has at least the following technical effects:
the method comprises the steps of continuously acquiring messages generated by a message source from a plurality of message queues, storing the acquired messages into a plurality of target queues respectively every time the messages are acquired, and storing the messages with the same identification in the same target queue according to the arrangement sequence of the messages in the corresponding message queue, wherein the messages with the same identification can be messages of the same type, so that when the messages with the same identification are ordered messages, the ordered messages can keep the order. And for each target queue, processing each message in the target queue by adopting a processing thread according to the sequence stored by the messages, so that the ordered messages can be processed in sequence to ensure the processing sequence.
In addition, because the plurality of target queues are processed by one processing thread respectively, the plurality of processing threads process messages concurrently, and one processing thread does not need to wait for other processing threads to finish processing, but can process messages continuously. Therefore, the concurrency of the embodiment of the application is better compared with a mode of processing the next batch of messages after the batch of messages are pulled and processed in the related technology. Therefore, the method and the device for processing the ordered messages can ensure the processing sequentiality of the ordered messages and have good concurrency.
Additional features and advantages of the application will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by the practice of the application. The objectives and other advantages of the application may be realized and attained by the structure particularly pointed out in the written description and claims hereof as well as the appended drawings.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed to be used in the description of the embodiments will be briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present application, and it is obvious for those skilled in the art to obtain other drawings based on these drawings without inventive exercise.
Fig. 1 is a flow chart of message processing in the related art according to an embodiment of the present application;
fig. 2 is a schematic structural diagram of a blockchain system according to an embodiment of the present disclosure;
fig. 3 is a schematic diagram of a block structure according to an embodiment of the present disclosure;
fig. 4 is a schematic view of an application scenario of a message processing method according to an embodiment of the present application;
fig. 5 is a flowchart of a message processing method according to an embodiment of the present application;
fig. 6 is a flowchart of a message processing method of a Kafka-based message processing system according to an embodiment of the present application;
fig. 7 is a schematic diagram of a message processing process of a Kafka-based message processing system according to an embodiment of the present application;
fig. 8 is a schematic processing diagram of an offset processing module in a Kafka-based message processing system according to an embodiment of the present application;
fig. 9a is a schematic diagram of a message processing process after a downtime restart in the related art according to the embodiment of the present application;
fig. 9b is a schematic diagram of another message processing process after being restarted after downtime according to an embodiment of the present application;
fig. 10 is a schematic structural diagram of a message processing apparatus according to an embodiment of the present application;
fig. 11 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
To make the objects, technical solutions and advantages of the present application clearer, the present application will be described in further detail with reference to the accompanying drawings, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
To facilitate better understanding of the technical solutions of the present application for those skilled in the art, the following terms related to the present application are introduced.
1. Cloud technology (Cloud technology): a hosting technology that unifies hardware, software, network and other resources in a wide area network or local area network to realize the computation, storage, processing and sharing of data. Cloud technology is a general name for the network technology, information technology, integration technology, management platform technology, application technology and so on applied under the cloud computing business model; these resources can form a resource pool and be used on demand, flexibly and conveniently. Cloud computing technology will become an important support. The background services of technical network systems require large amounts of computing and storage resources, for example video websites, picture websites and other web portals. With the development of the Internet industry, each item may have its own identifier that needs to be transmitted to a background system for logical processing; data at different levels are processed separately, and all kinds of industry data require strong system background support, which can only be realized through cloud computing. The servers involved in the embodiments of the present application may all be implemented by cloud technology.
2. A message source: refers to a device that generates messages, such as a terminal device, that can generate messages related to a service when processing the service.
3. Message queue: refers to a container that holds messages during their transmission. The message queue in the embodiment of the present application may be deployed in a server, and is used for caching messages.
4. Identifier: used to distinguish messages. For example, the identifier of a message may be the device identifier of the terminal device that generated the message, the identifier of the account logged in on that terminal device, or the service identifier corresponding to the message, and may be set as needed. Accordingly, messages of different types may be messages of different accounts, of different devices, or of different services, and the like, which is not limited in the present application.
5. Thread: the smallest unit of execution that the operating system can schedule. It is contained in a process and is the actual unit of operation within the process. A thread is a single sequential control flow in a process; multiple threads can run concurrently in one process, each executing a different task.
6. Downtime: a situation in which a computer cannot provide service for some reason and loses its computing capability.
7. Kafka: an open-source stream processing platform developed by the Apache Software Foundation and written in Scala and Java. Kafka is a high-throughput distributed publish-subscribe messaging system that can handle all the action stream data of consumers in a website. Kafka aims to unify online and offline message processing through Hadoop's parallel loading mechanism, and also to provide real-time messages through its cluster.
8. Offset: used in the Kafka message log to mark the position of a read message. In the embodiments of the present application it can be understood as the position of each message acquired from the at least one message queue.
The word "exemplary" is used hereinafter to mean "serving as an example, embodiment, or illustration. Any embodiment described as "exemplary" is not necessarily to be construed as preferred or advantageous over other embodiments.
The terms "first" and "second" are used herein for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include one or more of that feature, and in the description of embodiments of this application, a "plurality" means two or more unless indicated otherwise.
The related art is described below:
the following describes an example of a message processing procedure in the related art, taking a procedure in which a server processes a large number of messages generated by a terminal device as an example:
For the large number of messages generated by terminal devices, the server first caches the messages and then processes them using the procedure shown in fig. 1: a data-pulling thread pulls a batch of messages from the cache, the pulled batch is processed by several threads such as thread 1, thread 2, ..., thread n, and the next batch is pulled for the same processing only after all threads have finished the current batch. However, some messages of the same type generated by a terminal device may be ordered; when multiple threads process them simultaneously it cannot be guaranteed that the ordered messages are processed in order, and if one thread is stuck the other threads must wait, so concurrency is poor.
In view of this, embodiments of the present application provide a message processing method, and a design concept of the method is introduced below.
In the embodiments of the present application, after acquiring a large number of messages generated by a message source, the message processing end may cache them in a plurality of message queues; specifically, messages with the same identifier are stored in the same message queue in the order in which they are acquired, and the messages with the same identifier may be messages of the same type. When processing messages, the message processing end may acquire a batch of messages from the plurality of message queues, and each time a batch is acquired the messages are split a second time: the acquired messages are stored into a plurality of target queues, messages with the same identifier are stored in the same target queue, and they are stored in the order in which the messages with that identifier are arranged in the corresponding message queue.
For each target queue, a processing thread processes the messages in that queue in the order in which they were stored, so ordered messages are processed in order and the processing order is guaranteed. In addition, because each target queue is handled by its own processing thread, the processing threads process messages concurrently; one processing thread does not need to wait for the others and can keep processing messages, so concurrency is good. The embodiments of the present application can therefore guarantee the processing order of ordered messages while providing good concurrency.
In some embodiments, each of the above message queues may be understood as a block in a blockchain system. Referring to fig. 2, fig. 2 is an optional structural diagram of a blockchain system 400 provided in an embodiment of the present application, formed by a plurality of nodes 401 (computing devices of any form in the access network, such as servers and user terminals) and a client 402. A peer-to-peer (P2P) network is formed between the nodes, and the P2P protocol is an application-layer protocol running on top of the Transmission Control Protocol (TCP). In a distributed system, any machine, such as a server or a terminal, can join and become a node; a node comprises a hardware layer, an intermediate layer, an operating system layer and an application layer.
Referring to the functions of each node in the blockchain system shown in fig. 2, the functions involved include:
1) Routing: a basic function of a node, used to support communication between nodes.
Besides the routing function, the node may also have the following functions:
2) Application: deployed in the blockchain to implement specific services according to actual business requirements; it records data related to those functions to form record data, carries a digital signature in the record data to indicate the source of the task data, and sends the record data to other nodes in the blockchain system, so that the other nodes add the record data to a temporary block when the source and integrity of the record data are verified successfully.
For example, the services implemented by the application include:
2.1) Wallet: provides functions for electronic money transactions, including initiating a transaction (i.e., sending the transaction record of the current transaction to other nodes in the blockchain system; after the other nodes verify it successfully, the record data of the transaction is stored in a temporary block of the blockchain as an acknowledgment that the transaction is valid). Of course, the wallet also supports querying the electronic money remaining at an electronic money address.
2.2) Shared ledger: provides functions for storing, querying and modifying account data. The record data of an operation on account data is sent to other nodes in the blockchain system; after the other nodes verify its validity, the record data is stored in a temporary block as an acknowledgment that the account data is valid, and a confirmation may be sent to the node that initiated the operation.
2.3) Smart contracts: computerized agreements that can enforce the terms of a contract, implemented by code deployed on the shared ledger and executed when certain conditions are met, used to complete automated transactions according to actual business requirements, for example querying the logistics status of goods purchased by a buyer, or transferring the buyer's electronic money to the merchant's address after the buyer signs for the goods. Of course, smart contracts are not limited to contracts for executing transactions; they may also execute contracts that process received information.
3) Blockchain: comprises a series of blocks linked to one another in the chronological order in which they were generated. Once a new block is added to the blockchain it cannot be removed, and the blocks record the record data submitted by nodes in the blockchain system.
Referring to fig. 3, fig. 3 is an optional schematic diagram of a block structure provided in an embodiment of the present application. Each block includes the hash value of the transaction records stored in that block (the hash value of the block) and the hash value of the previous block, and the blocks are connected by these hash values to form a blockchain. A block may also include information such as a timestamp indicating when the block was generated. A blockchain is essentially a decentralized database, a chain of data blocks associated with one another using cryptography; each data block contains information used to verify the validity (anti-counterfeiting) of its information and to generate the next block.
Based on the above design concept, an application scenario of the message processing method according to the embodiment of the present application is described below.
The devices that may be involved in an application scenario of the embodiments of the present application are described below with reference to the application-scenario diagram of the message processing method shown in fig. 4.
Referring to fig. 4, the application scenario includes a terminal device 100, a cache server 200, and a processing server 300. The terminal device 100 and the cache server 200 may communicate with each other via a communication network, and the cache server 200 and the processing server 300 may communicate with each other via a communication network. Alternatively, the communication network may be a wired network or a wireless network. The terminal device 100 and the cache server 200, and the cache server 200 and the processing server 300 may be directly or indirectly connected through wired or wireless communication, and the present application is not limited thereto.
The terminal device 100 may be, for example, a smart phone, a tablet computer, a notebook computer, a desktop computer, a smart speaker, a smart watch, an in-vehicle terminal device, and the like, but is not limited thereto. The terminal device 100 may perform various services through a client, which may be a web client, an application pre-installed on the terminal device 100, or a sub-application embedded in a third-party application, etc. The terminal device 100 may generate a service-related message when executing any of these services through the client.
The cache server 200 and the processing server 300 may be a server or a server cluster composed of a plurality of servers, may be a virtual server or an entity server, and may also be cloud servers providing basic cloud computing services such as cloud service, a cloud database, cloud computing, a cloud function, cloud storage, Network service, cloud communication, middleware service, domain name service, security service, Content Delivery Network (CDN), big data, an artificial intelligence platform, and the like.
In a specific implementation, the cache server 200 may obtain the messages generated by the terminal device 100, and store the obtained messages into a plurality of message queues, specifically, the messages with the same identifier may be stored in the same message queue according to the obtained sequence. The message may be sent by the terminal device 100 to the cache server 200, or may be obtained by the cache server 200 from the terminal device 100, which is not limited in this application. Further, the processing server 300 may be used as a message processing end to process a batch of acquired messages from a plurality of message queues in the caching server 200, where the related message processing process will be described below.
It should be noted that fig. 4 is an exemplary description of an application scenario of the message processing method, and an application scenario to which the method in the embodiment of the present application may be applied is not limited to this. For example, the plurality of message queues may also be deployed in the processing server 300, that is, the message processing side may only include the processing server 300, which is not limited in this embodiment of the present application.
In one possible implementation scenario, the cache server 200 and the processing server 300 may form a Kafka-based message processing system. Kafka is a high-throughput distributed publish-subscribe messaging system, and a Kafka-based message processing system may include a message production end, a Kafka cluster composed of a plurality of servers, and a consumption end. The message production end and the Kafka cluster may be understood as the cache server 200 described above, and the consumption end as the processing server 300. The message production end may obtain messages generated by a message source and push them to one or more servers in the Kafka cluster; specifically, messages with the same identifier may be pushed to the same server. Each server may deploy a queue fragment (i.e., the message queue described above) and caches the obtained messages in the order in which they are obtained. The consumption end may pull subscribed messages from one or more servers in the Kafka cluster and consume them, which can be understood as continuously acquiring the messages generated by the message source from at least one message queue and processing them.
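As an illustration of keyed publishing, the following is a minimal sketch using the Apache Kafka Java producer client, in which the message identifier is used as the record key so that Kafka's default partitioner sends messages with the same identifier to the same partition (queue fragment) in send order; the broker address, topic name and account key are placeholders, not values from this application.

```java
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

import java.util.Properties;

// Message production end: publishes each message with its identifier as the record key.
// Kafka's default partitioner hashes the key, so messages sharing an identifier are
// appended to the same partition. Broker address and topic name are placeholders.
public class KeyedProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("key.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // Two ordered messages from the same account end up in the same partition.
            producer.send(new ProducerRecord<>("demo-topic", "account-42", "vote"));
            producer.send(new ProducerRecord<>("demo-topic", "account-42", "cancel-vote"));
        }
    }
}
```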
In practical applications, the message processing method of the embodiments of the present application can be applied to scenarios with high requirements on ordering and concurrency, such as voting, payment and liking.
For example, in a live-streaming scenario, a large number of viewers may vote for an anchor through their terminal devices (i.e., viewer clients), generating a large amount of voting data (i.e., messages), and each viewer client may send its voting data to the cache server. The same viewer may cancel a vote after voting and then vote again; in that case the multiple pieces of voting data from that viewer are ordered. In addition, in a live-streaming scenario a large number of viewers may also like the anchor through their viewer clients, which is similar to the voting process described above and is not repeated here. This scenario therefore has high requirements on the ordering and concurrency of messages.
For another example, in a payment scenario, a user may conduct transactions through a terminal device, and billing data is generated when a payment is made. A large number of users transact through terminal devices (i.e., clients) and generate a large amount of billing data (i.e., messages), and each user's client may send its billing data to the cache server. The same user may cancel a transaction after making it and then transact again; in that case the multiple pieces of billing data from that user are ordered. This scenario therefore also has high requirements on the ordering and concurrency of messages.
It should be noted that the above implementation scenarios are only exemplary, and the implementation scenarios to which the message processing method in the embodiment of the present application may be applied are not limited to this.
Based on the application scenario, the message processing method of the embodiment of the present application is described below.
Fig. 5 is a flowchart of a message processing method provided in an embodiment of the present application. The method may be executed by a processing device, for example the processing server 300 in fig. 4. Referring to fig. 5, the method may include the following steps:
step S501, continuously acquiring each message generated by a message source from at least one message queue, and storing each acquired message into at least one target queue respectively every time each message is acquired; the messages with the same identification in each message are stored in the same target queue, and the messages with the same identification in one target queue are stored according to the arrangement sequence of the messages with the same identification in the corresponding message queue.
The at least one message queue may be located in a cache server, such as the cache server 200 in fig. 4. The message source may be a terminal device that generates messages; the terminal device may send the generated messages to the cache server, or the cache server may actively acquire messages from the terminal device. It should be noted that the at least one message queue may also be located in the processing server, which is not limited in the embodiments of the present application.
Specifically, the same identifier may be set for messages of the same type generated by the message source; messages of the same type may be, for example, messages of the same account, of the same terminal device, or of the same service. Messages with the same identifier may be ordered messages in some scenarios.
In one example, when a user conducts a transaction through a client on a terminal device, a message related to the bill may be generated after the bill is submitted. Different users may generate different messages, and different messages may have different identifiers, for example an account identifier of the logged-in client, a device identifier, or a bill identifier. When a user submits a bill, then cancels it, and then submits it again, three messages with the same identifier may be generated, and they are ordered.
In another example, in a live scene, a viewer may vote for a main broadcast through a client of the viewer (referred to as a viewer side), at this time, the viewer side serves as a message source, a message may be generated when the viewer votes, different viewer sides may generate different messages, and different messages may have different identifiers, where the identifiers may be, for example, an account identifier of a logged-in viewer side, an equipment identifier of the viewer side, and the like. When a viewer votes and then cancels the vote, two messages having the same identification can be generated and ordered.
In a specific implementation, after obtaining the messages generated by the message source, the cache server may, when distributing them to different message queues, allocate messages with the same identifier to the same message queue. Each message queue may have a message queue identifier used to distinguish different message queues, for example message queue 1, message queue 2 and message queue 3. When messages with the same identifier are ordered messages, their order is kept in the message queue so that they can be processed in that order. In addition, the cache server may set a unique ID for each message; that is, multiple messages may share the same identifier (for example, messages of the same type have the same identifier), but each message corresponds to its own ID, by which individual messages can be distinguished.
Further, the processing server may obtain messages from the at least one message queue in batches; specifically, it may obtain messages from the at least one message queue periodically, for example obtaining the second batch a preset time t after obtaining the first batch, obtaining the third batch a time t after obtaining the second batch, and so on. The time t may be set as needed, which is not limited in the embodiments of the present application. The number of messages obtained in each batch may also be set as needed, and the numbers obtained in different batches may be the same or different.
In particular, the same batch of messages may come from the same message queue or from different message queues. Illustratively, the plurality of message queues includes message queue 1, message queue 2 and message queue 3. When a batch of messages is obtained, a set number of messages may be obtained from any one of message queue 1, message queue 2 and message queue 3; or a certain number of messages may be obtained from each of message queue 1, message queue 2 and message queue 3 so that a set number of messages are obtained in total; or, of course, a certain number of messages may be obtained from any two of message queue 1, message queue 2 and message queue 3 so that a set number of messages are obtained in total. Each message in the obtained batch carries a message queue identifier, and the message queue to which each obtained message belongs can be determined from that identifier.
After the processing server acquires a batch of messages, it may distribute them to different target queues in the order in which they were acquired. Messages with the same identifier are stored in the same target queue; one target queue may contain messages with several different identifiers, and within one target queue the messages with the same identifier are stored in the order of those messages in the corresponding message queue.
Illustratively, a batch of messages includes message 1, message 2 and message 3 with identifier 1, and message 4, message 5 and message 6 with identifier 2. When saving this batch, message 1, message 2 and message 3 are saved in the same target queue, and message 4, message 5 and message 6 are saved in the same target queue; the messages with identifier 1 and the messages with identifier 2 may be saved in the same target queue or in different target queues. The storage order of message 1, message 2 and message 3 is consistent with the order of these three messages in the corresponding message queue, and the storage order of message 4, message 5 and message 6 is consistent with the order of these three messages in the corresponding message queue.
In some embodiments, in order to store the messages with the same identifier in the obtained messages into the same target queue, the step S501 of storing the obtained messages into at least one target queue may specifically be implemented by the following steps:
step a, respectively carrying out hash operation on the identification of each message to obtain corresponding hash values;
and b, respectively storing each message into a target queue associated with the corresponding hash value.
Specifically, different identifiers may be represented by different keys; for example, for a message containing key1, a hash operation may be performed on key1 to obtain the corresponding hash value. Each target queue may be associated in advance with corresponding hash values, for example target queue 1 is associated with hash value H1, target queue 2 with H2, and target queue 3 with H3. When the hash value corresponding to key1 is H1, all messages containing key1 are saved in target queue 1, so that messages with the same identifier are saved to the same target queue.
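The following is a minimal Java sketch of steps a and b above, assuming the target queues are in-memory FIFO queues; the class and method names (KeyedRouter, route) are illustrative only, not part of the application.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// Routes each message to a target queue chosen by hashing its identifier (key),
// so all messages with the same key land in the same queue, in arrival order.
public class KeyedRouter {
    private final List<BlockingQueue<String>> targetQueues = new ArrayList<>();

    public KeyedRouter(int queueCount) {
        for (int i = 0; i < queueCount; i++) {
            targetQueues.add(new LinkedBlockingQueue<>());
        }
    }

    // key is the message identifier (e.g. account or device id); value is the payload.
    public void route(String key, String value) throws InterruptedException {
        int index = Math.floorMod(key.hashCode(), targetQueues.size()); // hash value -> queue index
        targetQueues.get(index).put(value); // FIFO queue preserves per-key order
    }

    public BlockingQueue<String> queueAt(int index) {
        return targetQueues.get(index);
    }
}
```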
Step S502, for each target queue of the at least one target queue, processing the messages in that target queue with a processing thread in the order in which the messages were stored, to obtain the message processing record corresponding to each message.
Illustratively, the plurality of target queues includes a target queue 1, a target queue 2, and a target queue 3, and the three target queues respectively use a processing thread 1, a processing thread 2, and a processing thread 3 for message processing. Specifically, the processing thread 1 processes each message in the target queue 1 according to the message storage sequence, the processing thread 2 processes each message in the target queue 2 according to the message storage sequence, the processing thread 3 processes each message in the target queue 3 according to the message storage sequence, and then the message processing records of the three processing threads can be obtained.
Taking the processing thread 1 as an example, for example, the message 1, the message 2, the message 3, the message 4, the message 5, and the message 6 are sequentially stored in the target queue 1, and the processing thread sequentially processes the message 1, the message 2, the message 3, the message 4, the message 5, and the message 6 to obtain message processing records corresponding to the six messages.
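Building on the in-memory queues from the earlier sketch, the following illustrative Java snippet (names such as QueueWorkers and process are assumptions) starts one dedicated processing thread per target queue, so messages in one queue are processed strictly in storage order while different queues are processed concurrently.

```java
import java.util.List;
import java.util.concurrent.BlockingQueue;

// Starts one dedicated worker thread per target queue. Each worker drains its own
// queue in insertion order, so messages with the same identifier are processed
// sequentially while different target queues are processed concurrently.
public class QueueWorkers {
    public static void start(List<BlockingQueue<String>> targetQueues) {
        for (BlockingQueue<String> queue : targetQueues) {
            Thread worker = new Thread(() -> {
                try {
                    while (!Thread.currentThread().isInterrupted()) {
                        String message = queue.take();    // blocks until a message arrives
                        String record = process(message); // message processing record
                        System.out.println(record);       // in practice, hand it to the record/offset manager
                    }
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();   // allow clean shutdown
                }
            });
            worker.setDaemon(true);
            worker.start();
        }
    }

    private static String process(String message) {
        return "processed:" + message; // placeholder business logic
    }
}
```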
In step S503, a message processing result is obtained based on each obtained message processing record.
As can be seen from the above description, each message has an ID, and the message processing record of each message may carry the ID of the message, so that which messages have been processed can be determined according to the ID of the message carried in the message processing record.
For example, in the live-streaming scenario described above, after the messages generated by viewers voting for the anchor through their viewer clients are processed in steps S501 and S502, the obtained message processing records may be the voting records corresponding to the processed messages; the voting results are then counted according to the obtained voting records to obtain the message processing result.
In the embodiments of the present application, messages are acquired in batches from the message queues, and each time a batch is acquired the messages are stored into a plurality of target queues, with messages carrying the same identifier stored in the same target queue in the order in which they are arranged in the corresponding message queue, so that when the messages with the same identifier are ordered messages, they keep their order. For each target queue, a processing thread processes the messages in that queue in the order in which they were stored, so the ordered messages are processed in order and the processing order is guaranteed.
In addition, because each target queue is handled by its own processing thread and the processing threads process messages concurrently, one processing thread does not need to wait for the others and can keep processing messages. Compared with the related-art approach of pulling messages in batches and processing the next batch only after the current batch is finished, the concurrency of the embodiments of the present application is better. The embodiments of the present application can therefore guarantee the processing order of ordered messages while providing good concurrency.
In some embodiments, since multiple processing threads process the acquired messages in parallel, the message processing records obtained for the processed messages may be records of messages from different message queues. Therefore, after step S502, the following steps may further be performed:
step 1), for the message processing records corresponding to the currently processed messages, determining the message processing records of the messages from each corresponding message queue according to the message queue identifier carried by each processed message.
And step 2), arranging the message processing records of the messages from a corresponding message queue according to the order of those messages in that message queue, to obtain the arranged message processing records corresponding to that message queue.
Further, a message processing record may include the offset of the corresponding message, which indicates the position of the message in the corresponding message queue; for example, the offset of the first message in message queue 1 may be 1, the offset of the second message in message queue 1 may be 2, and so on. If a batch of messages is retrieved from different message queues, the offset of each message is its offset in the corresponding message queue.
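A minimal Java sketch of steps 1) and 2) above, assuming each message processing record carries the source message queue identifier and the message's offset in that queue; the record type and field names are illustrative, not the application's data model.

```java
import java.util.Comparator;
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

// A processing record carrying the source message queue identifier and the
// message's position (offset) in that queue. Field names are illustrative.
record ProcessingRecord(String queueId, long offset, String result) {}

class RecordSorter {
    // Groups the currently obtained processing records by their source message queue,
    // then arranges each group by the message's position in that queue.
    static Map<String, List<ProcessingRecord>> arrangeByQueue(List<ProcessingRecord> records) {
        return records.stream()
                .collect(Collectors.groupingBy(
                        ProcessingRecord::queueId,
                        Collectors.collectingAndThen(
                                Collectors.toList(),
                                list -> list.stream()
                                        .sorted(Comparator.comparingLong(ProcessingRecord::offset))
                                        .collect(Collectors.toList()))));
    }
}
```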
In the related art, while the processing server processes messages with multiple threads, it records, for each message queue, the offset of the message currently processed from that queue, and usually records the maximum processed offset, so that after the processing server restarts for some reason it continues acquiring the messages after the maximum offset recorded for that message queue. However, messages may be lost this way. For example, suppose the offsets of the processed messages in the corresponding message queue are 1, 2, 3, 5 and 6. If, because the sixth message has been processed, all messages before it are considered processed, then after the processing server goes down and restarts it will continue acquiring the messages arranged after the sixth message in the corresponding message queue, and since the fourth message has not been processed, the fourth message is lost. To prevent messages from being lost, the message processing method of the embodiments of the present application may further include the following steps:
step 1, determining an offset sequence for a corresponding message queue according to the offset in the arranged message processing record corresponding to the message queue;
and 2, determining the maximum offset in the first continuous offset quantum sequence in the offset sequence, and recording the message corresponding to the maximum offset processed in the corresponding message queue.
For example, the arranged message processing records corresponding to message queue 1 include the message processing records of the first, second, third, fifth and sixth messages, whose offsets are 1, 2, 3, 5 and 6 respectively; that is, the obtained offset sequence is 1, 2, 3, 5, 6.
Illustratively, in the offset sequence 1, 2, 3, 5, 6, the first consecutive offset subsequence is 1, 2, 3, and its maximum offset is 3. That is, if the processing server goes down and restarts and the record shows that processing has reached the third message, the messages after the third message are considered not yet processed; the messages after the third message in message queue 1 are then acquired and processed by the corresponding processing thread, so that message 4 is not lost.
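The following Java sketch illustrates steps 1 and 2 above for a single message queue: processed offsets are reported as processing completes, and only the maximum offset of the first consecutive subsequence is treated as the committed position. The class name OffsetTracker and its interface are assumptions, not the application's implementation.

```java
import java.util.TreeSet;

// Tracks the processed offsets of one message queue and exposes the committed offset:
// the maximum offset of the first consecutive offset subsequence. Committing only this
// value means an unprocessed gap (e.g. offset 4 in 1, 2, 3, 5, 6) is never skipped
// after a downtime restart.
public class OffsetTracker {
    private final TreeSet<Long> pending = new TreeSet<>();
    private long committed;           // all offsets <= committed have been processed

    public OffsetTracker(long startOffset) {
        this.committed = startOffset; // e.g. 0 if offsets in the queue start at 1
    }

    // Called by a processing thread after it finishes the message at this offset.
    public synchronized long report(long offset) {
        pending.add(offset);
        // advance the commit point while the next expected offset has been processed
        while (pending.remove(committed + 1)) {
            committed++;              // records below this point can be deleted from the buffer
        }
        return committed;             // for processed offsets {1, 2, 3, 5, 6}: returns 3
    }

    public synchronized long committed() {
        return committed;
    }
}
```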
The message processing method of the embodiments of the present application can thus not only guarantee the ordering and concurrency of message processing but also prevent messages from being lost after the processing server goes down and restarts.
Further, the arranged message processing records corresponding to a message queue obtained in the foregoing embodiment may be put into a buffer. After it has been recorded that the message corresponding to the maximum offset has been processed in the corresponding message queue, in order to release buffer space the message processing method of the embodiments of the present application may further include the following step:
deleting the message processing records corresponding to the offsets in the first consecutive offset subsequence other than the maximum offset.
For example, in the first consecutive offset subsequence 1, 2, 3, the offsets other than the maximum offset 3 are 1 and 2; therefore the message processing record of the first message and the message processing record of the second message in the buffer are deleted. This is because the first and second messages have been processed and it has been recorded that processing has reached the third message, so there is no need to keep the message processing records of the first and second messages. In this way the buffer space can be freed.
Moreover, when it has been recorded that processing has reached the third message, if the processing server goes down and restarts, it will acquire the messages after the third message in the corresponding message queue. If the fifth and sixth messages have already been processed, then because their message processing records were obtained earlier, the fifth and sixth messages are not processed again, which avoids repeated processing of messages.
Further, after recording the message corresponding to the maximum offset that has been processed in the corresponding message queue, the message processing method according to the embodiment of the present application may further include the following steps:
a) if, within a set time, the maximum offset in the offset sequence changes while the maximum offset in the first consecutive offset subsequence does not change, raising an alarm; or,
b) if the maximum offset in the offset sequence increases by a set value while the maximum offset in the first consecutive offset subsequence does not change, raising an alarm; or,
c) if the maximum offset in the first consecutive offset subsequence does not change and the difference between the maximum offset in the offset sequence and the maximum offset in the first consecutive offset subsequence reaches a set threshold, raising an alarm.
For example, after the message processing record corresponding to the third message has been recorded, if within a set time the maximum offset in the offset sequence changes from 6 to 10, the messages in the message queue are still being processed, but the offset corresponding to the fourth message has not been obtained, i.e. the maximum offset in the first consecutive offset subsequence is still 3; a processing thread may have failed and an alarm needs to be raised. Alternatively, if the maximum offset in the offset sequence increases by a set value, for example 50, going from 6 to 56, while the maximum offset in the first consecutive offset subsequence is still 3, a processing thread may have failed and an alarm needs to be raised. Or, if the maximum offset in the first consecutive offset subsequence is still 3 and the difference between the maximum offset in the offset sequence and 3 reaches a set threshold, for example a threshold of 50 with the maximum offset in the offset sequence becoming 53, a processing thread may likewise have failed, and an alarm needs to be raised so that the fault can be found in time.
Specifically, the setting time, the setting value, and the setting threshold may be set as needed, which is not limited in the embodiment of the present application.
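A minimal Java sketch of the three alarm conditions above, checked once per set time interval; the threshold values and the alarm action are placeholders chosen for illustration.

```java
// Periodically compares the largest offset seen in the offset sequence with the
// committed offset (maximum of the first consecutive subsequence) and raises an
// alarm under any of the three conditions described above.
public class OffsetAlarm {
    private final long growthThreshold; // condition b): "set value" for growth of the max offset
    private final long gapThreshold;    // condition c): "set threshold" for the gap
    private long lastMaxOffset = -1;
    private long lastCommitted = -1;

    public OffsetAlarm(long growthThreshold, long gapThreshold) {
        this.growthThreshold = growthThreshold;
        this.gapThreshold = gapThreshold;
    }

    // Called once per check interval (the "set time") with the current values.
    public void check(long maxOffset, long committedOffset) {
        boolean committedStuck = committedOffset == lastCommitted;
        if (committedStuck && maxOffset > lastMaxOffset) {
            alarm("max offset changed within the interval but the committed offset did not");
        }
        if (committedStuck && maxOffset - lastMaxOffset >= growthThreshold) {
            alarm("max offset grew by the set value while the committed offset did not change");
        }
        if (committedStuck && maxOffset - committedOffset >= gapThreshold) {
            alarm("gap between max offset and committed offset reached the set threshold");
        }
        lastMaxOffset = maxOffset;
        lastCommitted = committedOffset;
    }

    private void alarm(String reason) {
        System.err.println("ALARM: " + reason); // replace with real alerting in practice
    }
}
```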
In practical application, the message processing method of the embodiment of the application can be applied to a Kafka-based message processing system. The following describes a message processing method of a Kafka-based message processing system.
The Kafka-based message processing system includes a message production end (which may be a server), a Kafka cluster composed of a plurality of broker servers, and a consumption end, where the consumption end may be the processing server described above. The message production end may obtain a large number of messages from a message source; the obtained messages are partitioned onto different broker servers according to certain keys (i.e., the identifiers of the messages) and an algorithm (such as a hash algorithm), and messages with the same key are stored on the same broker server. Specifically, a broker server may deploy a queue fragment, i.e., the message queue in the above embodiments, for storing messages. The consumption end can read messages from the multiple queue fragments for consumption. For the messages in a queue fragment, the Kafka message log marks the position of a read message with an offset; that is, the offset is the offset of a consumed message within its queue fragment, and the consumption end uses the offset to determine the starting position of the next read in that queue fragment.
Fig. 6 shows a detailed flowchart of a message processing method of the Kafka-based message processing system. Referring to fig. 6, the message processing method includes the steps of:
step S601, the message production end respectively stores a large number of messages acquired from the message source into a plurality of queue fragments.
For example, as shown in fig. 7, the message production end stores the large number of messages obtained from the message source into queue fragment 1, queue fragment 2, ..., queue fragment n, where n is an integer greater than 1. Specifically, when the large number of messages include messages with the same identifier, those messages are stored in the same queue fragment so that the order of the messages with the same identifier can be guaranteed later.
Step S602, the consumption end continuously pulls messages from the plurality of queue fragments through a pull thread and stores the pulled messages into a plurality of local queues; messages with the same identifier are stored in the same local queue, and the messages with the same identifier in one local queue are stored in the order in which those messages are arranged in the corresponding queue fragment.
Specifically, the consumption end may pull a batch of messages from the plurality of queue fragments through a pull thread; the pull thread may pull a batch from one queue fragment or from several queue fragments. The following description takes pulling messages from one queue fragment as an example. The consumption end stores each pulled message into local queue 1, local queue 2, ..., local queue m shown in fig. 7, where m is an integer greater than 1 and may be equal to or different from n.
In practical applications, the messages with the same identifier may be messages of the same type, so that when the messages of the same type are ordered messages, the ordered messages may be kept in order in the local queue.
To improve throughput, the consumption end may use multiple processing threads for message processing. When multiple processing threads are used, a main thread usually reads a batch of messages and performs a grouping operation, the processing threads then process their corresponding groups of messages, and after finishing its group each processing thread reports the offsets of the messages it has processed, where an offset is the position of a message in the corresponding queue fragment.
Specifically, the consuming side may store each pulled message in different local queues according to a certain key (i.e., an identifier of the message) and an algorithm (e.g., a hash algorithm, etc.), so that messages with the same identifier may be stored in the same local queue.
Step S603, the consuming side processes each message in the corresponding local queue by using a processing thread according to the sequence of message storage, to obtain a message processing record corresponding to each message, where the message processing record includes an offset of the message.
For example, as shown in fig. 7, the messages in local queue 1, local queue 2, …, local queue m are processed by processing thread 1, processing thread 2, …, processing thread m, respectively, and one processing thread does not need to wait for another processing thread to finish processing, but can process messages continuously, so the concurrency is good. And for each local queue, one processing thread is adopted to process the messages in the local queue according to the sequence in which the messages are stored, so that ordered messages can be processed in order to ensure the processing sequence.
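One such processing thread could look like the following Java sketch; OffsetManager is a hypothetical stand-in for the offset management module and is sketched after step S605 below, and handle() is a placeholder for the actual consumption logic.

    import org.apache.kafka.clients.consumer.ConsumerRecord;
    import java.util.concurrent.BlockingQueue;

    public class ProcessingThread implements Runnable {
        private final BlockingQueue<ConsumerRecord<String, String>> localQueue;
        private final OffsetManager offsetManager;   // hypothetical offset management module

        public ProcessingThread(BlockingQueue<ConsumerRecord<String, String>> localQueue,
                                OffsetManager offsetManager) {
            this.localQueue = localQueue;
            this.offsetManager = offsetManager;
        }

        @Override
        public void run() {
            try {
                while (true) {
                    // take() preserves the storage order of the local queue (FIFO)
                    ConsumerRecord<String, String> record = localQueue.take();
                    handle(record);                                   // business processing
                    // report the offset of the processed message as its processing record
                    offsetManager.report(record.partition(), record.offset());
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }

        private void handle(ConsumerRecord<String, String> record) {
            // placeholder for the actual message handling
        }
    }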
Step S604, the consumption end submits the offsets of the processed messages to the offset management module through each processing thread, and orders the obtained offsets through the offset management module to obtain an offset sequence.
It should be noted that the above embodiment takes the case in which an obtained batch of messages comes from one queue fragment as an example. When an obtained batch of messages comes from multiple queue fragments, the offset management module may further divide the obtained offsets according to the different queue fragments, specifically according to the carried queue fragment identifiers, and then perform offset sorting for each queue fragment, so as to obtain an offset sequence corresponding to each queue fragment.
Step S605, the consuming side submits, through the offset management module, the maximum offset in the first continuous offset subsequence in the offset sequence to the recording module for recording.
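A possible Java sketch of the offset management module is given below; the class and method names are assumptions, and the watermark initialisation assumes that offsets are numbered from 1 as in fig. 8 (a real Kafka partition starts at offset 0, so the initial value would be -1 there).

    import java.util.Map;
    import java.util.TreeSet;
    import java.util.concurrent.ConcurrentHashMap;

    public class OffsetManager {
        // offsets reported but not yet committed, one sorted set per queue fragment
        private final Map<Integer, TreeSet<Long>> pending = new ConcurrentHashMap<>();
        // highest committed offset per queue fragment (0 = nothing committed, offsets start at 1)
        private final Map<Integer, Long> committed = new ConcurrentHashMap<>();

        public synchronized void report(int partition, long offset) {
            pending.computeIfAbsent(partition, p -> new TreeSet<>()).add(offset);
            advance(partition);
        }

        private void advance(int partition) {
            TreeSet<Long> offsets = pending.get(partition);
            long next = committed.getOrDefault(partition, 0L) + 1;
            // walk the first continuous offset subsequence, e.g. 1,2 within {1,2,4}
            while (offsets.contains(next)) {
                offsets.remove(next);            // expired data no longer needs to be stored
                committed.put(partition, next);  // its maximum is the value handed to the recording module
                next++;
            }
        }

        public long committedOffset(int partition) {
            return committed.getOrDefault(partition, 0L);
        }
    }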
In the message processing method, each local queue adopts a processing thread to process each message in the local queue according to the sequence of message storage, so that the ordered messages can be processed in sequence to ensure the processing sequence. In addition, because the plurality of local queues respectively adopt one processing thread to process, the plurality of processing threads concurrently process the messages, one processing thread does not need to wait for other processing threads, but can continuously process the messages, and the concurrency is good. Therefore, the message processing method can ensure the processing sequentiality of the ordered messages and has good concurrency.
Further, as shown in fig. 8, after processing a message, each processing thread may submit the offset of the corresponding message to the offset management module, and the offset management module may sort the obtained offsets, for example, 1, 2, and 4, that is, the messages with offsets 1, 2, and 4 have been processed, while the message with offset 3 has not been processed. At this time, the offset management module may submit offset 2 to the recording module for recording, so as to prevent the message with offset 3 from being lost after a downtime restart.
Next, fig. 9a and fig. 9b are taken as examples to describe how the embodiment of the present application prevents message loss after a downtime restart.
As shown in fig. 9a, process 1 may represent a process before the downtime, and process 2 may represent a process after the downtime restart, where the offsets of the messages processed by process 1 are 1, 2, and 4, and each offset is submitted to Kafka when the corresponding message has been processed. In the related art, Kafka submits the largest offset among the processed messages to process 2; after process 2 acquires offset 4, it considers that the messages corresponding to offsets before offset 4 have been processed and continues with the messages corresponding to offsets after offset 4, which causes the message corresponding to offset 3 to be lost.
As shown in fig. 9b, an offset management module is added compared with fig. 9a. Process 1 submits the offsets of the processed messages to the offset management module; each time the offset management module obtains an offset, it stores the offset and then merges (i.e., sorts) the stored offsets. For example, the offsets obtained in submission order are 2, 1, and 4, and the sequence obtained after merging them is 1, 2, 4. At this time, because the message corresponding to offset 3 has not been processed yet, the offset management module submits offset 2 to process 2. After process 2 obtains offset 2, it knows that the messages corresponding to offsets before offset 2 have been processed and continues with the messages corresponding to offsets after offset 2, so the message corresponding to offset 3 is not lost.
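For illustration, the merge of fig. 9b can be reproduced with the OffsetManager sketch above; Fig9bExample is only an assumed wrapper class.

    public class Fig9bExample {
        public static void main(String[] args) {
            OffsetManager manager = new OffsetManager();
            manager.report(0, 2);   // offsets submitted in the order 2, 1, 4
            manager.report(0, 1);
            manager.report(0, 4);
            // prints 2: offset 3 is still outstanding, so process 2 restarts from offset 2
            // and the message corresponding to offset 3 is not lost
            System.out.println(manager.committedOffset(0));
        }
    }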
When process 2 reaches the message corresponding to offset 4, since that message has been processed before and offset 4 has been submitted, the message corresponding to offset 4 is not processed again, and duplicate processing of the message is avoided.
Further, the offset management module may also clear expired data from the stored offsets. For the offset sequence 1, 2, 4, for example, offset 1 may be cleared after the offset management module submits offset 2 to process 2. This is because process 2 already knows that the message corresponding to offset 1 has been processed and will continue with the messages corresponding to offsets after offset 2, so offset 1 no longer needs to be stored. In this way, the storage space of the offset management module can be freed.
The offset management module may also perform fault alarming. For example, after offset 2 is submitted to process 2, if the maximum offset in the offset sequence changes within a set time, for example from 4 to 10, while offset 3 is still not obtained, that is, the message corresponding to offset 3 has not been processed, a processing thread may have a fault. Alternatively, if the maximum offset in the offset sequence increases by a set value, for example by 50, from 4 to 54, and offset 3 is still not obtained, a processing thread may also have a fault, and an alarm needs to be given so as to find the existing fault in time. Specifically, the set time and the set value may be configured as needed, which is not limited in the embodiment of the present application.
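A possible sketch of such a fault-alarm check in Java follows; the set time of 60 seconds and the set value of 50 are only illustrative assumptions.

    public class OffsetLagAlarm {
        private static final long SET_TIME_MS = 60_000;   // assumed set time
        private static final long SET_VALUE = 50;         // assumed set value

        private long lastCommitted = -1;
        private long lastChangeMs = System.currentTimeMillis();

        // maxReportedOffset: largest offset seen so far; committedOffset: maximum offset
        // of the first continuous offset subsequence
        public boolean shouldAlarm(long maxReportedOffset, long committedOffset) {
            long now = System.currentTimeMillis();
            if (committedOffset != lastCommitted) {        // watermark moved, no fault suspected
                lastCommitted = committedOffset;
                lastChangeMs = now;
                return false;
            }
            boolean stuckTooLong = maxReportedOffset > committedOffset
                    && now - lastChangeMs > SET_TIME_MS;
            boolean gapTooLarge = maxReportedOffset - committedOffset >= SET_VALUE;
            return stuckTooLong || gapTooLarge;
        }
    }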
Based on the same inventive concept as the method embodiment, an embodiment of the present application further provides a message processing apparatus, which, as shown in fig. 10, includes a saving module 101, a processing module 102, and an obtaining module 103; wherein:
a storing module 101, configured to continuously obtain each message generated by a message source from at least one message queue, and store each obtained message into at least one target queue each time the message is obtained; the messages with the same identification in each message are stored in the same target queue, and the messages with the same identification in one target queue are stored according to the arrangement sequence of the messages with the same identification in the corresponding message queue;
the processing module 102 is configured to, for each target queue in the at least one target queue, respectively use a processing thread to process each message in the corresponding target queue according to the sequence of message storage, and obtain a message processing record corresponding to each message;
an obtaining module 103, configured to obtain a message processing result based on each obtained message processing record.
Specifically, the storing module obtains a batch of messages from a plurality of message queues, and, each time a message is obtained, stores the obtained message into one of a plurality of target queues; the messages with the same identification are stored in the same target queue according to the arrangement sequence of these messages in the corresponding message queue, so that when the messages with the same identification are ordered messages, the ordered messages keep their order. The processing module processes the messages in each target queue by adopting one processing thread according to the sequence in which the messages are stored, so that ordered messages can be processed in order to ensure the processing sequence. In addition, because the plurality of target queues are each processed by one processing thread, the plurality of processing threads process messages concurrently; one processing thread does not need to wait for other processing threads, but can process messages continuously, so the concurrency is good.
In a possible embodiment, each message obtained from at least one message queue carries a message queue identifier, and the apparatus further includes:
a determining module, configured to determine, for a message processing record corresponding to each currently processed message, a message processing record of each message from a corresponding message queue according to a message queue identifier carried by each processed message;
and the sequencing module is used for sequencing the message processing records of each message in the corresponding message queue according to the sequence of each message in the message queue to obtain the sequenced message processing records corresponding to the corresponding message queue.
In a possible embodiment, the message processing record includes offsets of corresponding messages, and the offsets of the messages obtained from the same message queue are continuously increased, and the apparatus further includes:
a determining module, configured to determine, for a corresponding message queue, an offset sequence according to an offset in an arranged message processing record corresponding to the message queue;
and the recording module is used for determining the maximum offset in the first continuous offset subsequence in the offset sequence and recording the message corresponding to the maximum offset processed in the corresponding message queue.
In one possible embodiment, the apparatus may further comprise:
and the deleting module is used for deleting the message processing records corresponding to the offsets, except the maximum offset, in the first continuous offset subsequence.
In one possible embodiment, the apparatus may further comprise:
the alarm module is used for alarming if the maximum offset in the offset sequence changes within a set time and the maximum offset in the first continuous offset subsequence does not change; or,
alarming if the maximum offset in the offset sequence is increased by a set value and the maximum offset in the first continuous offset subsequence is not changed; or,
alarming if the maximum offset in the first continuous offset subsequence is not changed and the difference value between the maximum offset in the offset sequence and the maximum offset in the first continuous offset subsequence reaches a set threshold value.
In a possible embodiment, the saving module 101 may specifically be configured to:
performing hash operation on the identifier of each message to obtain corresponding hash values;
and respectively storing each message into the target queue associated with the corresponding hash value.
Specifically, different identifications may be represented by different keys. For example, for a message carrying key1, a hash operation may be performed on key1 to obtain a corresponding hash value. Each target queue may be preset to be associated with a corresponding hash value; for example, the hash value associated with target queue 1 is H1, the hash value associated with target queue 2 is H2, and the hash value associated with target queue 3 is H3. When the hash value corresponding to key1 is H1, all messages carrying key1 are saved into target queue 1, so that messages with the same identification can be saved into the same target queue.
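A possible Java sketch of this hash-based routing is shown below; the class and method names are assumptions, and H1/H2/H3 in the description simply correspond to the computed indices here.

    public final class QueueRouter {
        private QueueRouter() { }

        // Maps a message identification (key) to one of the target queues; equal keys
        // always yield the same index, so they are saved into the same target queue.
        public static int targetQueueIndex(String messageKey, int targetQueueCount) {
            // Math.floorMod keeps the index non-negative even for negative hash codes
            return Math.floorMod(messageKey.hashCode(), targetQueueCount);
        }
    }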
Based on the same inventive concept as the method embodiment and the apparatus embodiment, an embodiment of the present application further provides an electronic device. The electronic device may be a server, such as the processing server 300 shown in fig. 4. In this embodiment, the electronic device may be configured as shown in fig. 11, and includes a memory 301, a communication module 303, and one or more processors 302.
A memory 301 for storing a computer program for execution by the processor 302. The memory 301 may mainly include a program storage area and a data storage area, wherein the program storage area may store an operating system, a program required for running an instant messaging function, and the like; the storage data area can store various instant messaging information, operation instruction sets and the like.
The processor 302 may include one or more Central Processing Units (CPUs), or be a digital processing unit, etc. A processor 302 for implementing the above-described message processing method when calling the computer program stored in the memory 301.
The communication module 303 is configured to communicate with a terminal device to obtain voice data.
The specific connection medium among the memory 301, the communication module 303 and the processor 302 is not limited in the embodiments of the present application. In fig. 11, the memory 301 and the processor 302 are connected by a bus 304, the bus 304 is represented by a thick line in fig. 11, and the connection manner between other components is merely illustrative and not limited. The bus 304 may be divided into an address bus, a data bus, a control bus, and the like. For ease of illustration, only one thick line is shown in FIG. 11, but this is not intended to represent only one bus or type of bus.
An embodiment of the present application further provides a computer storage medium and a computer program product or computer program, where the computer program product or computer program includes computer instructions stored in a computer-readable storage medium. The processor of a computer device reads the computer instructions from the computer-readable storage medium and executes the computer instructions, so that the computer device executes the message processing method in the above embodiment.
In some possible embodiments, various aspects of the message processing method provided by the present application may also be implemented in the form of a program product including program code, where the program code is configured to cause a computer device to perform the steps of the message processing method according to the various exemplary embodiments of the present application described above in this specification when the program product runs on the computer device; for example, the computer device may perform the flow of message processing in steps S501 to S503 shown in fig. 5.
The program product may employ any combination of one or more readable media. The readable medium may be a readable signal medium or a readable storage medium. A readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples (a non-exhaustive list) of the readable storage medium include: an electrical connection having one or more wires, a portable disk, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
A readable signal medium may include a propagated data signal with readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A readable signal medium may also be any readable medium that is not a readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and so forth) having computer-usable program code embodied therein.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on multiple network units; some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, all functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may be separately regarded as one unit, or two or more units may be integrated into one unit; the integrated unit can be realized in a form of hardware, or in a form of hardware plus a software functional unit.
It will be apparent to those skilled in the art that various changes and modifications may be made in the present application without departing from the spirit and scope of the application. Thus, if such modifications and variations of the present application fall within the scope of the claims of the present application and their equivalents, the present application is intended to include such modifications and variations as well.

Claims (10)

1. A message processing method, comprising:
continuously acquiring each message generated by a message source from at least one message queue, and respectively storing each acquired message into at least one target queue once the message is acquired; wherein, the messages with the same identification in each message are stored in the same target queue, and each message with the same identification in one target queue is stored according to the arrangement sequence of each message with the same identification in the corresponding message queue;
aiming at each target queue in the at least one target queue, respectively adopting a processing thread to process each message in the corresponding target queue according to the sequence stored by the messages to obtain a message processing record corresponding to each message;
and obtaining a message processing result based on the obtained message processing records.
2. The method of claim 1, wherein each message obtained from at least one message queue carries a message queue identifier, and the method further comprises:
for the message processing records corresponding to each currently processed message, determining the message processing records of each message from the corresponding message queue according to the message queue identification carried by each processed message;
and arranging the message processing records of each message in the corresponding message queue according to the sequence of each message in the message queue to obtain the arranged message processing records corresponding to the corresponding message queue.
3. The method of claim 2, wherein the message processing record includes an offset of the corresponding message in the corresponding message queue, and the offset of each message retrieved from the same message queue is continuously increased, the method further comprising:
for the corresponding message queue, determining an offset sequence according to the offset in the arranged message processing record corresponding to the message queue;
and determining the maximum offset in the first continuous offset subsequence in the offset sequence, and recording the message corresponding to the maximum offset processed in the corresponding message queue.
4. The method of claim 3, wherein after recording the message corresponding to the maximum offset processed in the corresponding message queue, the method further comprises:
and deleting the message processing records corresponding to the offsets except the maximum offset in the first continuous offset subsequence.
5. The method of claim 3, wherein after recording the message corresponding to the maximum offset processed in the corresponding message queue, the method further comprises:
if the maximum offset in the offset sequence changes within a set time and the maximum offset in the first continuous offset subsequence does not change, alarming; or,
if the maximum offset in the offset sequence is increased by a set value and the maximum offset in the first continuous offset subsequence is not changed, alarming; or,
if the maximum offset in the first continuous offset subsequence is not changed and the difference value between the maximum offset in the offset sequence and the maximum offset in the first continuous offset subsequence reaches a set threshold value, alarming.
6. The method according to any one of claims 1 to 5, wherein the storing each acquired message in at least one target queue respectively comprises:
respectively carrying out hash operation on the identification of each message to obtain a corresponding hash value;
and respectively storing the messages into the target queues associated with the corresponding hash values.
7. A message processing apparatus, comprising:
the storage module is used for continuously acquiring each message generated by a message source from at least one message queue, and storing each acquired message into at least one target queue respectively every time each message is acquired; wherein, the messages with the same identification in each message are stored in the same target queue, and each message with the same identification in one target queue is stored according to the arrangement sequence of each message with the same identification in the corresponding message queue;
the processing module is used for processing each message in the corresponding target queue by adopting a processing thread according to the sequence stored by the messages respectively aiming at each target queue in the at least one target queue to obtain a message processing record corresponding to each message;
and the acquisition module is used for acquiring a message processing result based on the acquired message processing records.
8. The apparatus of claim 7, wherein each message obtained from at least one message queue carries a message queue identifier, and the apparatus further comprises:
a determining module, configured to determine, for a message processing record corresponding to each currently processed message, a message processing record of each message from a corresponding message queue according to a message queue identifier carried by each processed message;
and the sequencing module is used for sequencing the message processing records of each message in the corresponding message queue according to the sequence of each message in the message queue to obtain the sequenced message processing records corresponding to the corresponding message queue.
9. An electronic device, characterized in that it comprises a processor and a memory, wherein the memory stores program code which, when executed by the processor, causes the processor to carry out the steps of the method according to any one of claims 1-6.
10. A computer-readable storage medium, characterized in that it comprises program code for causing an electronic device to perform the steps of the method of any of claims 1-6, when said program code is run on the electronic device.
CN202011450058.8A 2020-12-09 2020-12-09 Message processing method and device, electronic equipment and storage medium Pending CN114610504A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011450058.8A CN114610504A (en) 2020-12-09 2020-12-09 Message processing method and device, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011450058.8A CN114610504A (en) 2020-12-09 2020-12-09 Message processing method and device, electronic equipment and storage medium

Publications (1)

Publication Number Publication Date
CN114610504A true CN114610504A (en) 2022-06-10

Family

ID=81856189

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011450058.8A Pending CN114610504A (en) 2020-12-09 2020-12-09 Message processing method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN114610504A (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116820795A (en) * 2023-04-18 2023-09-29 上海百秋新网商数字科技有限公司 Method and system for accelerating message processing speed and maintaining processing sequence
CN116821117A (en) * 2023-08-30 2023-09-29 广州睿帆科技有限公司 Stream data processing method, system, equipment and storage medium
CN116821117B (en) * 2023-08-30 2023-12-12 广州睿帆科技有限公司 Stream data processing method, system, equipment and storage medium

Similar Documents

Publication Publication Date Title
CN109949111B (en) Electronic bill identification distribution method, electronic bill generation method, device and system
US11228570B2 (en) Safe-transfer exchange protocol based on trigger-ready envelopes among distributed nodes
JP6827564B2 (en) Performing parallel execution of transactions in a distributed ledger system
KR102566892B1 (en) Blockchain consensus method, device and system
US20190354397A1 (en) Prioritization in a permissioned blockchain
US20190034465A1 (en) Blockchain logging of data from multiple systems
CN113254466B (en) Data processing method and device, electronic equipment and storage medium
CN111400112B (en) Writing method and device of storage system of distributed cluster and readable storage medium
CN111698315B (en) Data processing method and device for block and computer equipment
CN113347164A (en) Block chain-based distributed consensus system, method, device and storage medium
US11556874B2 (en) Block creation based on transaction cost and size
CN109783151B (en) Method and device for rule change
CN114610504A (en) Message processing method and device, electronic equipment and storage medium
US8966047B2 (en) Managing service specifications and the discovery of associated services
CN112291372A (en) Asynchronous posting method, device, medium and electronic equipment for block chain
CN110910143A (en) Identity identification generation method, device, related node and medium
CN110597922A (en) Data processing method, device, terminal and storage medium
US11487736B2 (en) Blockchain transaction processing systems and methods
CN110837609A (en) Social information processing method and related product
CN112200680B (en) Block link point management method, device, computer and readable storage medium
CN110602215B (en) Resource processing method based on alliance block chain and alliance block chain system
CN112825525A (en) Method and apparatus for processing transactions
CN115842866A (en) Data processing method and device, computer readable medium and electronic equipment
CN111212037B (en) Advertisement data processing method and device
US20180007130A1 (en) Peer-to-Peer Assisted Personal Synchronization

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination