CN117170891A - Message processing method, device and equipment - Google Patents

Info

Publication number
CN117170891A
CN117170891A · CN202310512131.7A
Authority
CN
China
Prior art keywords
target
queue
message
messages
consumer
Prior art date
Legal status
Pending
Application number
CN202310512131.7A
Other languages
Chinese (zh)
Inventor
廖建伟
Current Assignee
Vivo Mobile Communication Co Ltd
Original Assignee
Vivo Mobile Communication Co Ltd
Priority date
Filing date
Publication date
Application filed by Vivo Mobile Communication Co Ltd filed Critical Vivo Mobile Communication Co Ltd
Priority to CN202310512131.7A priority Critical patent/CN117170891A/en
Publication of CN117170891A publication Critical patent/CN117170891A/en
Pending legal-status Critical Current

Abstract

The application discloses a message processing method, apparatus, and device, belonging to the field of communications technology. The message processing method, applied to a middleware server, comprises the following steps: determining the number of messages that a target consumer can receive according to a target queue preconfigured by the middleware server, wherein the target queue is the queue of the target consumer; if the number of messages the target consumer can receive is greater than zero and at least one message is cached in a cache queue preconfigured by the middleware server, transmitting a target message from the cache queue to a target thread pool of the middleware server, wherein the cache queue corresponds to the target queue and is used to cache messages pulled by the middleware server from a target producer, and the target message is the message at the head of the cache queue; and pushing the target message in the target thread pool to the target consumer.

Description

Message processing method, device and equipment
Technical Field
The present application belongs to the technical field of communications, and in particular, relates to a message processing method, device and equipment.
Background
A Message Queue (MQ) is middleware based on a first-in, first-out (FIFO) queue model. A sender can return immediately after publishing a message, and the messaging system guarantees reliable delivery. The message Producer only needs to publish messages to the MQ, without caring who takes them or how; the message Consumer only takes messages from the MQ, without caring who published them or how.
In the prior art, messages are placed uniformly in a message queue, and a single-threaded or multi-threaded program is started to consume and process them. Threads therefore usually have to be allocated separately for each MQ to consume messages; however, when the data flow is too large, the fixed processing capability of these threads means messages cannot be handled in time, resulting in thread blocking.
Disclosure of Invention
The embodiments of the present application aim to provide a message processing method, device, and equipment that can improve the efficiency of message processing.
In a first aspect, an embodiment of the present application provides a message processing method applied to a middleware server, including:
determining the number of messages that a target consumer can receive according to a target queue preconfigured by the middleware server, wherein the target queue is the queue of the target consumer;
if the number of messages the target consumer can receive is greater than zero and at least one message is cached in a cache queue preconfigured by the middleware server, transmitting a target message from the cache queue to a target thread pool of the middleware server, wherein the cache queue corresponds to the target queue and is used to cache messages pulled by the middleware server from a target producer, and the target message is the message at the head of the cache queue;
pushing the target message in the target thread pool to the target consumer.
In a second aspect, an embodiment of the present application provides a message processing apparatus applied to a middleware server, including:
a first determining module, configured to determine the number of messages that a target consumer can receive according to a target queue preconfigured by the middleware server, wherein the target queue is the queue of the target consumer;
a first transmission module, configured to, when the number of messages the target consumer can receive is greater than zero and at least one message is cached in a cache queue preconfigured by the middleware server, transmit a target message from the cache queue to a target thread pool of the middleware server, wherein the cache queue corresponds to the target queue and is used to cache messages pulled by the middleware server from a target producer, and the target message is the message at the head of the cache queue;
and a first pushing module, configured to push the target message in the target thread pool to the target consumer.
In a third aspect, an embodiment of the present application provides an electronic device comprising a processor and a memory storing a program or instructions executable on the processor, the program or instructions implementing the steps of the method according to the first aspect when executed by the processor.
In a fourth aspect, embodiments of the present application provide a readable storage medium having stored thereon a program or instructions which when executed by a processor perform the steps of the method according to the first aspect.
In a fifth aspect, an embodiment of the present application provides a chip, including a processor and a communication interface, where the communication interface is coupled to the processor, and the processor is configured to execute a program or instructions to implement the steps of the method according to the first aspect.
In a sixth aspect, embodiments of the present application provide a computer program product stored in a storage medium, the program product being executable by at least one processor to implement the method according to the first aspect.
According to the embodiments of the present application, the number of messages that a target consumer can receive can be determined according to a target queue preconfigured by the middleware server; if that number is greater than zero and at least one message is cached in a cache queue preconfigured by the middleware server, a target message from the cache queue is transmitted to a target thread pool of the middleware server; and the target message in the target thread pool is pushed to the target consumer. Because the target thread pool pushes messages from the cache queue associated with the target queue only when the target consumer can receive them, messages cannot occupy thread-pool resources and cause blocking while the target consumer is unable to receive them, which improves message-pushing efficiency and saves computing resources.
Drawings
FIG. 1 is a flow chart of a message processing method according to an embodiment of the present application;
FIG. 2 is a schematic diagram of a message processing process provided by an embodiment of the present application;
fig. 3 is a schematic structural diagram of a message processing apparatus according to an embodiment of the present application;
fig. 4 is a schematic structural diagram of an electronic device according to an embodiment of the present application;
fig. 5 is a schematic diagram of a hardware structure of an electronic device implementing an embodiment of the present application.
Detailed Description
The technical solutions of the embodiments of the present application will be clearly described below with reference to the drawings in the embodiments of the present application, and it is apparent that the described embodiments are some embodiments of the present application, but not all embodiments. All other embodiments, which are obtained by a person skilled in the art based on the embodiments of the present application, fall within the scope of protection of the present application.
The terms "first", "second", and the like in the description and in the claims are used to distinguish between similar elements, not necessarily to describe a particular sequential or chronological order. It is to be understood that the data so used may be interchanged, where appropriate, so that embodiments of the present application may be implemented in sequences other than those illustrated or described herein; the objects identified by "first", "second", etc. are generally of one type, and the number of objects is not limited — for example, the first object may be one or more. Furthermore, in the description and claims, "and/or" means at least one of the connected objects, and the character "/" generally indicates an "or" relationship between the associated objects.
AMQP (Advanced Message Queuing Protocol): an application-layer standard for advanced message queuing, designed by JPMorgan Chase together with other companies. It provides a unified message service, is an open application-layer protocol standard, and is designed for message-oriented middleware. Clients and message middleware based on this protocol can exchange messages without being limited by conditions such as different products or different development languages on the client/middleware side.
Message middleware: middleware for establishing channels, transmitting data or files in a system requiring network communication. An important role of message middleware is to operate across platforms, facilitating the integration of application software on different operating systems.
RabbitMQ is open-source message broker software (also known as message-oriented middleware) that implements the Advanced Message Queuing Protocol (AMQP). The RabbitMQ server is written in the Erlang language and is used for storing and forwarding messages in distributed systems.
Broker in RabbitMQ: a message queue service process, the process comprising two parts: exchange and Queue.
Exchange: the message queue exchange, which routes and forwards messages to a particular queue according to certain rules and filters messages.
Queue in RabbitMQ: a queue that stores messages; messages arrive at the queue and are forwarded to a designated message consumer.
Producer: message producer, i.e. producer client, which sends messages
Consumer: the message consumer, the consumer client, receives the MQ forwarded message.
RocketMQ: a distributed messaging and streaming data platform with low latency, high performance, high reliability, trillion levels of capacity and flexible scalability.
Broker in RocketMQ: the core module of RocketMQ, responsible for receiving and storing messages while providing Push/Pull interfaces to deliver messages to Consumers.
MQ-proxy: as a bridge between the original rabkitmq-SDK and the dockmq, the method is responsible for producing the mutual conversion between the consumption amqp protocol and the dockmq private communication protocol.
Message middleware extends inter-process communication in a distributed environment by providing a messaging and message-queuing model. In practical application environments, services such as short-message notification and data statistics rely on message middleware to consume messages and complete their own business logic. For message middleware, the common roles are roughly the Producer, the Consumer, and the Message Queue: the producer is responsible for producing messages, the message middleware stores the messages in the message queue, and the consumer obtains messages from the message queue and consumes them. Here, a message is data transferred between applications. A message may be very simple, for example containing only a text string, or more complex, possibly containing embedded objects. A message queue is a communication mode between applications: a sender can return immediately after sending a message, and the message system guarantees reliable delivery. The producer only has to publish messages into the message queue, regardless of who takes them, and the consumer only has to take messages from the message queue, regardless of who published them, so neither the producer nor the consumer needs to know of the other's existence. Under heavy traffic, message middleware can therefore be used to achieve decoupling, asynchronous processing, and peak shaving and valley filling (load leveling) between systems.
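The decoupling described above can be sketched in a few lines of Java. This is a minimal illustration, not the patent's implementation: the producer only publishes into the queue, the consumer only takes from it, and neither side knows about the other. All class and variable names are my own.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// Minimal sketch of the decoupled producer/consumer model described above.
// Names are illustrative, not taken from the patent.
public class MessageQueueSketch {
    static List<String> runDemo() {
        BlockingQueue<String> messageQueue = new ArrayBlockingQueue<>(16);
        List<String> consumed = new ArrayList<>();
        try {
            Thread producer = new Thread(() -> {
                for (int i = 0; i < 3; i++) {
                    messageQueue.add("msg-" + i); // publish, then return immediately
                }
            });
            producer.start();
            producer.join();
            // The consumer drains in FIFO order, regardless of who produced.
            for (int i = 0; i < 3; i++) {
                consumed.add(messageQueue.take());
            }
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return consumed;
    }

    public static void main(String[] args) {
        System.out.println(runDemo()); // [msg-0, msg-1, msg-2]
    }
}
```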
However, the consumption capability of message middleware is limited by its own architecture and design. For example, RabbitMQ is open-source message broker software (also called message-oriented middleware) implementing the Advanced Message Queuing Protocol (AMQP) and written in Erlang; thanks to the characteristics of the Erlang language, it offers good MQ performance and a rich management interface, and it is deployed at large scale in internet companies. However, as services keep growing, message volume increases and business scenarios become more complex, placing higher requirements on the message middleware platform.
The inventor found through research that the message processing capability of the message middleware can be improved by introducing RocketMQ to replace the underlying RabbitMQ message middleware: the MQ-proxy creates a RocketMQ message-pushing instance and pushes messages from the Broker in RocketMQ to the RabbitMQ consumer client. That is, the MQ-proxy connects the RabbitMQ consumer client with the Broker in RocketMQ; producer messages are obtained through the Broker in RocketMQ, and each time a producer message is obtained it is put into a thread pool for pushing. The push task obtains a consumer instance from the AmqpConsumer queue (a message queue that stores consumer instances, each including a message-pushing method) corresponding to the RabbitMQ consumer client, and sends the producer message to the RabbitMQ consumer client through that consumer instance.
However, the above scheme has the following drawbacks:
1. Each message queue corresponding to a RabbitMQ consumer client creates a thread to push the messages obtained from the Broker in RocketMQ to the RabbitMQ client, so most consuming threads may be blocked, performing no actual message pushing and consuming resources to no effect.
2. Message queues whose RabbitMQ clients consume slowly and those whose clients consume quickly occupy the same thread resources, so the message-pushing performance of the machine resources cannot be fully exploited.
3. Each time a producer message is obtained, it is put into the thread pool for pushing. When the client consumes too slowly, the consuming thread cannot obtain a consumable AmqpConsumer (consumer instance) from the AmqpConsumer queue and blocks until the client consumes a message and returns an ack (consumer acknowledgement), at which point an AmqpConsumer is added back to the AmqpConsumer queue. The consuming threads therefore cannot be fully utilized; to keep consumption of the different message queues from affecting one another, each message queue must create its own consuming thread pool, so limited thread resources cannot be used to push massive numbers of messages, and since every message queue is given the same thread resources, resources are wasted.
Based on the above technical problems, the inventor proposes that the underlying RabbitMQ message middleware can be replaced by introducing RocketMQ. At the same time, so that clients can continue to use RocketMQ's message service with the RabbitMQ SDK, avoiding the code-transformation cost of upgrading services to the RocketMQ SDK, the message gateway can support the AMQP protocol to push RocketMQ messages to RabbitMQ SDK clients.
The message processing method, device, equipment and storage medium provided by the embodiment of the application are described in detail below through specific embodiments and application scenes thereof with reference to the accompanying drawings.
Fig. 1 is a flow chart of a message processing method applied to a middleware server according to an embodiment of the present application. The message processing method applied to the middleware server may include the following S101 to S103:
S101: determining the number of messages that the target consumer can receive according to a target queue preconfigured by the middleware server. The target queue is the queue of the target consumer.
S102: when the number of messages the target consumer can receive is greater than zero and at least one message is cached in a cache queue preconfigured by the middleware server, transmitting the target message in the cache queue to a target thread pool of the middleware server.
The cache queue corresponds to the target queue and is used to cache the messages pulled by the middleware server from the target producer; the target message is the message at the head of the cache queue.
S103: pushing the target message in the target thread pool to the target consumer.
The specific implementation of each of the above steps will be described in detail below.
According to the embodiments of the present application, the number of messages that the target consumer can receive can be determined according to the target queue preconfigured by the middleware server; if that number is greater than zero and at least one message is cached in the cache queue preconfigured by the middleware server, the target message in the cache queue is transmitted to the target thread pool of the middleware server; and the target message in the target thread pool is pushed to the target consumer. Because the target thread pool pushes messages from the cache queue associated with the target queue only when the target consumer can receive them, messages cannot occupy thread-pool resources and cause blocking while the target consumer is unable to receive them, which improves message-pushing efficiency and saves computing resources.
In the embodiments of the present application, the consumer and the producer may specifically be devices with communication functions such as mobile phones, tablet computers, smart televisions, notebook computers, ultra-mobile personal computers (UMPC), handheld computers, netbooks, personal digital assistants (PDA), wearable electronic devices, vehicle-mounted devices, and virtual-reality devices.
In S101, the number of messages that the target consumer can receive is determined according to a target queue preconfigured by the middleware server, where the target queue is the queue of the target consumer.
Here, the middleware server may be a message gateway or message system used to receive messages produced by the producer and to store and transmit them to the consumer; that is, it handles message transfer between the producer and the consumer.
The target queue stores message-receiving information (i.e. information indicating the number of messages the target consumer can receive), from which that number can be determined. The message-receiving information may be set according to the consumption capability of the target consumer, that is, the number of messages it can receive.
Specifically, after the middleware server establishes a connection with the target consumer, it creates the target queue in advance, stores the message-receiving information in it, and updates the target queue as the target consumer receives and consumes messages. The number of messages the target consumer can receive can thus be determined from the target queue corresponding to it. In some embodiments, the number of receivable messages may be indicated by storing a preset number of elements in the target queue and then monitoring that number of elements through a semaphore mechanism.
Thus, in some embodiments, before S101, the foregoing step may further include:
creating the target queue, wherein the target queue contains a preset number of elements, each element indicating a transmission channel between the middleware server and the target consumer;
creating a semaphore based on the number of elements in the target queue, the semaphore indicating the number of messages the target consumer can receive.
Creating the target queue may mean that, once the target consumer and the middleware server are connected, the middleware server creates a target queue for the target consumer and then stores a preset number of elements in it according to the consumer's consumption capability. The preset number may be determined from the target consumer's history of received messages or set empirically, and it indicates the initial number of messages the target consumer can receive. In one example, when the RabbitMQ-SDK consumer client (i.e. the target consumer) establishes a connection with the middleware server, the middleware server, in response to the first consumption request sent by the client, generates a preset number of elements according to the client's consumption capability and stores them in the target queue.
The middleware server can take an element from the target queue and push a message to the target consumer through the connection-channel information carried in that element.
A semaphore is an integer quantity S representing a number of resources. When S is greater than or equal to zero it represents the number of resource entities available to concurrent processes; when S is less than zero it represents the number of processes waiting to use the critical section. Unlike an ordinary integer, apart from initialization it can only be accessed through two standard atomic operations, wait(S) and signal(S): wait(S): while S <= 0 do no-op; S := S - 1; signal(S): S := S + 1. In this embodiment the semaphore indicates the number of elements in the target queue: after the target queue is created, a semaphore is created for it, and the number of elements in the target queue, and hence the number of messages the target consumer can receive, can be determined through the semaphore.
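The wait/signal semantics above map directly onto a counting semaphore such as java.util.concurrent.Semaphore. Below is a minimal sketch, not the patent's code, in which the permit count stands in for the number of messages the target consumer can currently receive; the class and method names are my own.

```java
import java.util.concurrent.Semaphore;

// Sketch of the semaphore mechanism described above: the number of permits
// mirrors the number of elements in the target queue, i.e. how many messages
// the target consumer can currently receive. All names are illustrative.
public class ReceivableCounter {
    private final Semaphore receivable;

    ReceivableCounter(int presetNumber) {
        this.receivable = new Semaphore(presetNumber); // initial capacity
    }

    // wait(S): S = S - 1 when S > 0. tryAcquire is the non-blocking variant,
    // used here so the caller can first *check* whether capacity remains.
    boolean tryTakeSlot() {
        return receivable.tryAcquire();
    }

    // signal(S): S = S + 1, invoked when the consumer acks a message.
    void releaseSlot() {
        receivable.release();
    }

    int receivableCount() {
        return receivable.availablePermits();
    }
}
```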
In some embodiments, before S101, the method may further include:
creating a cache queue corresponding to the target queue;
obtaining a production message from the target producer;
caching the production message in the cache queue.
Specifically, a cache queue corresponding to the target queue may be created at the same time as, or after, the target queue, to cache the production messages obtained from the target producer, so that each target queue corresponds to one cache queue. The target queue indicates the number of messages the target consumer can receive and stores the connection-channel information, while the cache queue corresponding to it caches the production messages to be pushed to the target consumer.
Optionally, the target queue or the cache queue may be any blocking queue, for example an ArrayBlockingQueue (a blocking queue implemented on an array), a LinkedBlockingQueue (a blocking queue implemented on a linked list), a DelayQueue, and so on. The blocking queue may be replaced by another kind of queue, as long as it can store, and thus temporarily hold, the message-receiving information of the target consumer.
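The queue types named above are standard java.util.concurrent classes. As a sketch under my own assumptions (capacity and message names are illustrative), a bounded ArrayBlockingQueue gives the per-consumer cache queue a natural form of backpressure: its non-blocking offer() reports false once the cache is full, which lets the puller stop fetching from the producer side.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// Sketch of the per-consumer cache queue described above, built on one of
// the JDK blocking queues the text names (ArrayBlockingQueue here). The
// bound is illustrative; a full implementation would size it per consumer.
public class CacheQueueSketch {
    static BlockingQueue<String> newCacheQueue(int capacity) {
        return new ArrayBlockingQueue<>(capacity);
    }

    public static void main(String[] args) {
        BlockingQueue<String> cache = newCacheQueue(2);
        // offer() is non-blocking and reports false once the cache is full.
        System.out.println(cache.offer("m1")); // true
        System.out.println(cache.offer("m2")); // true
        System.out.println(cache.offer("m3")); // false: cache full
        System.out.println(cache.poll());      // m1 (FIFO head)
    }
}
```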
In S102, if the number of messages the target consumer can receive is greater than zero and at least one message is cached in the cache queue preconfigured by the middleware server, the target message in the cache queue is transmitted to the target thread pool of the middleware server.
The cache queue corresponds to the target queue and caches the messages pulled by the middleware server from the target producer; the target message is the message at the head of the cache queue. Since the target queue is the queue of the target consumer and corresponds to a cache queue holding messages pulled from the producer, this correspondence ensures that producer messages are pushed accurately to the target consumer.
The cache queue may be any kind of blocking queue, and it stores and retrieves messages in first-in, first-out order.
The middleware server needs a thread to forward each message, but each such thread runs only briefly, and frequently creating and destroying threads imposes unnecessary overhead on the system. The present application therefore uses a thread pool to distribute messages. The message-processing thread pool is built on top of the thread type; its threads are message-processing threads, each of which takes a message out of the cache queue and distributes it. A thread pool is a form of multi-threaded processing in which tasks are added to a queue and started automatically once a thread has been created. A message is transmitted to the target thread pool, and the method corresponding to the message runs in the target thread pool to execute the message-pushing task.
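The shared-pool idea above can be sketched with a JDK ExecutorService. This is an illustration under my own assumptions (pool size, task body): rather than one dedicated thread per message queue, push tasks from every cache queue are submitted to one fixed-size pool; a real task would push one message to the target consumer.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

// Sketch of the shared push thread pool described above. Names and the
// pool size are illustrative, not taken from the patent.
public class PushPoolSketch {
    static int pushAll(int taskCount) {
        ExecutorService pushPool = Executors.newFixedThreadPool(4);
        AtomicInteger pushed = new AtomicInteger();
        for (int i = 0; i < taskCount; i++) {
            pushPool.submit(pushed::incrementAndGet); // stands in for one push
        }
        pushPool.shutdown();
        try {
            pushPool.awaitTermination(5, TimeUnit.SECONDS);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return pushed.get();
    }

    public static void main(String[] args) {
        System.out.println(pushAll(10)); // 10
    }
}
```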
Specifically, when the number of messages the target consumer can receive is greater than zero, indicating that it has the capacity to consume messages, the target message is taken from the cache queue and transmitted to the target thread pool. Because the target message is handed to the thread pool only when the target consumer can receive it, the message can be pushed to the target consumer promptly and the pushing task does not needlessly occupy the target thread pool. Thus, when the consumer's consumption capability is lower than the producer's production rate, messages exceeding the consumer's capability are first cached in the cache queue, avoiding unnecessary occupation of the consuming thread pool and improving the utilization of computing resources.
To judge the number of receivable messages of the target consumer accurately, each time that number is determined to be non-zero only one message is taken from the cache queue and transmitted to the target thread pool. After a message is transmitted to the target thread pool, the number of messages the target consumer can receive must be updated, and after the target consumer finishes processing a message, the number must be updated again.
Thus, in some embodiments, the number of elements in the target queue matches the number of messages that the target consumer can receive;
after the target message in the cache queue is transmitted to the target thread pool of the middleware server in S102, the method may further include the following step:
removing one element from the target queue to update the target queue.
Specifically, the target queue of the target consumer pre-stores its message-receiving information, and the number of elements stored in the target queue matches the number of messages the consumer can receive (for example, 8 elements stored in the target queue means the target consumer can receive 8 messages). After the target message in the cache queue is transmitted to the target thread pool of the middleware server, the target message is pushed to the target consumer, and while the consumer receives and processes it, the number of messages it can receive decreases by one. Therefore, after the target message is transmitted to the target thread pool, one element is removed from the target queue so that the element count stays consistent with the number of messages the target consumer can receive.
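The dispatch rule above can be sketched as follows. This is a single-threaded illustration under my own assumptions (class, field, and message names are invented, and the atomicity a real concurrent server would need is ignored): the head message leaves the cache queue only if an element — one unit of receive capacity — can be removed from the target queue.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// Sketch of the dispatch step described above: the element count in the
// target queue always matches the number of messages the consumer can
// still receive. Names are illustrative, not taken from the patent.
public class DispatchSketch {
    final BlockingQueue<String> targetQueue; // elements = receive slots
    final BlockingQueue<String> cacheQueue;  // messages pulled from the producer

    DispatchSketch(int presetNumber) {
        targetQueue = new ArrayBlockingQueue<>(Math.max(presetNumber, 1));
        cacheQueue = new ArrayBlockingQueue<>(64);
        for (int i = 0; i < presetNumber; i++) {
            targetQueue.add("channel"); // each element carries channel info
        }
    }

    // Returns the head of the cache queue if the consumer can receive it,
    // otherwise null (nothing cached, or no receive capacity left).
    String dispatchOne() {
        if (cacheQueue.peek() == null) {
            return null; // no message cached
        }
        if (targetQueue.poll() == null) {
            return null; // receivable count is zero
        }
        return cacheQueue.poll(); // FIFO head goes to the thread pool
    }
}
```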
In some embodiments, removing one element from the target queue may include the following step:
extracting a target element from the target queue, wherein the target element is any element in the target queue and indicates a transmission channel between the middleware server and the target consumer.
Pushing the target message in the target thread pool to the target consumer then comprises:
pushing the target message in the target thread pool to the target consumer through the transmission channel.
Because each element in the target queue includes a transmission channel between the target consumer and the middleware server, after the target message in the cache queue is transmitted to the target thread pool of the middleware server, one element can be taken out of the target queue and the target message pushed to the target consumer according to the transmission channel in that element. The number of elements in the target queue thus stays consistent with the target consumer's remaining receive capacity, which improves the accuracy of judging how many messages the consumer can receive, and the target message can be delivered accurately over the channel carried in the element.
In some embodiments, after pushing the target message in the target thread pool to the target consumer, the method further includes:
receiving a feedback message sent by a target consumption end, wherein the feedback message is used for indicating that the target consumption end finishes consumption based on the target message;
an element is added to the target queue in response to the feedback message to update the target queue.
Specifically, after the target consuming end receives the target message, it needs to process it; while the target message is being processed, the number of receivable messages of the target consuming end is reduced by one. After the target consuming end finishes processing the target message, it sends a feedback message to the middleware server. Upon receiving the feedback message, the middleware server can determine that the target consuming end has processed one message and can receive one more, so an element is added to the target queue. The target queue thus updates its elements according to the consumption state of the target consuming end.
It should be noted that the elements in the target queue may be the same element or different elements; that is, each element may include the same transmission channel information or different transmission channel information, as long as the target message can be pushed to the target consuming end through the transmission channel.
In this embodiment, the number of receivable messages of the target consuming end and the number of elements in the target queue can be kept consistent, so the number of receivable messages of the target consuming end can be determined accurately.
In one example, the middleware server establishes a connection with the target consuming end and pre-creates a target queue in which n elements are pre-stored; each element is a piece of message-receiving information, and the n elements indicate that the target consuming end can receive n messages. The specific value of n can be set according to the actual situation of the target consuming end, which is not limited in the present application. When one message is pushed to the target consuming end, one element is taken out of the target queue and the semaphore of the target queue is decreased by one, meaning the number of messages the target consuming end can receive has decreased by one. After the target consuming end finishes processing a message, it sends feedback information to the middleware server; upon receiving the feedback, the middleware server stores an element into the target queue and the semaphore of the target queue is increased by one, meaning the target consuming end can receive one more message. Thus, the number of receivable messages at the target consuming end can be determined from the number of elements in the target queue.
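The counting behavior in this example can be sketched with an ordinary counting semaphore: n initial permits stand for n receivable messages, a non-blocking acquire precedes each push, and each feedback message releases a permit. This is a hedged illustration; the class and method names are assumptions, not the patent's API:

```python
import threading

class ReceivableCounter:
    """Illustrative: the permit count equals the number of messages the
    consumer can still receive (n at connection time, as in the example)."""

    def __init__(self, n):
        self._sem = threading.Semaphore(n)

    def on_push(self):
        # tryAcquire-style non-blocking acquire: returns True only if the
        # consumer still has capacity for one more message
        return self._sem.acquire(blocking=False)

    def on_ack(self):
        # feedback message received: the consumer can take one more message
        self._sem.release()
```

A semaphore gives the same invariant as counting queue elements, while remaining safe when multiple push threads race on the same consumer.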
The middleware server may be connected to a plurality of target consuming ends and a plurality of producing ends, so in some embodiments, N first queues and N second queues are preconfigured in the middleware server, where the N first queues are queues of N consuming ends, the N first queues correspond to the N second queues, and N is a positive integer;
the determining the number of the messages receivable by the target consumer according to the target queue preconfigured by the middleware server includes:
in the case where the number of receivable messages of the (K-1)th consuming end is zero, determining the number of receivable messages of the Kth consuming end according to the Kth first queue preconfigured by the middleware server,
the target queue is the Kth first queue, the target consuming end is the Kth consuming end, the cache queue is the queue corresponding to the Kth first queue in the N second queues, and K is an integer greater than 1 and less than or equal to N.
Specifically, when the number of receivable messages of the (K-1)th consuming end is determined to be zero, the (K-1)th consuming end can no longer receive new messages; if the messages in the (K-1)th second queue were transmitted to the target thread pool anyway, they would be blocked in the target thread pool, wasting computing resources. Therefore, the number of receivable messages of the next consuming end (i.e., the Kth consuming end) can be determined instead: the Kth first queue is determined as the target queue, the queue corresponding to the Kth first queue among the N second queues is determined as the cache queue, and steps S102 and S103 are executed.
In this embodiment, the middleware server may be connected to multiple target consuming ends and multiple producing ends, and a queue (a first queue) storing the message-receiving information of each consuming end can be set for that consuming end. Each first queue corresponds to a second queue, and the second queue temporarily caches the messages pulled from the producing ends. Based on the number of receivable messages of each consuming end, the messages in a second queue are pushed through the target thread pool to the consuming end corresponding to its first queue, ensuring that messages transmitted to the consuming thread pool can be pushed promptly to a consuming end for consumption processing, thereby avoiding thread blocking in the consuming thread pool.
In some embodiments, when the target queue is the kth first queue, the target consumer is the kth consumer, and the cache queue is a queue corresponding to the kth first queue in the N second queues, the step S102 may include the following steps:
if the number of receivable messages of the Kth consuming end is greater than zero and at least one message is cached in a cache queue preconfigured by the middleware server, transmitting the target message in the cache queue to a common thread pool of the middleware server, wherein the common thread pool is the target thread pool and is used for receiving the messages transmitted by the N second queues.
In order to push massive numbers of messages from multiple producing ends to multiple consuming ends through a limited number of threads, the messages between each second queue and its corresponding consuming end are pushed through a common thread pool. Specifically, a first queue is created for each consuming end, and each first queue corresponds to a second queue. The middleware server pulls the messages produced by each producing end and stores them in the corresponding second queues. By polling and checking the element information in the first queue corresponding to each consuming end, it determines the number of receivable messages of each consuming end. When the number of receivable messages of one consuming end, i.e., the Kth consuming end, is detected to be non-zero, a message is taken from the corresponding second queue (the cache queue, i.e., the queue corresponding to the Kth first queue among the N second queues), put into the common thread pool, and step S103 is then executed to push the messages in the common thread pool to the corresponding consuming end. In this way, every message transmitted to the common thread pool can be pushed to its consuming end in time, so only one common thread pool needs to be set up to push massive numbers of messages.
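One way to picture this common-thread-pool dispatch (purely illustrative; the data layout and names are assumptions, not the patent's implementation) is a single polling pass that, for each first-queue/second-queue pair, submits the head-of-queue message to one shared pool only when the consuming end still has capacity:

```python
from collections import deque
from concurrent.futures import ThreadPoolExecutor

def dispatch_round(pairs, pool):
    """One polling pass: for each consumer's (capacity, cache, push) record,
    submit at most one head-of-queue message to the shared thread pool when
    the consumer can still receive a message."""
    futures = []
    for pair in pairs:
        if pair["capacity"] > 0 and pair["cache"]:
            msg = pair["cache"].popleft()   # head of the cache queue
            pair["capacity"] -= 1
            futures.append(pool.submit(pair["push"], msg))
    return futures
```

Note that a consumer with zero capacity is simply skipped, so its cached messages never occupy pool threads, which is the blocking-avoidance property the embodiment claims.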
In some embodiments, N first queues and N second queues are preconfigured in the middleware server, where the N first queues are queues of N consuming ends, the N first queues correspond to the N second queues, and N is a positive integer;
in S101, determining the number of messages receivable by the target consumer according to the target queue preconfigured by the middleware server may include the following steps:
in the case where the number of pushed messages of the (K-1)th consuming end is not less than a preset threshold, determining the number of receivable messages of the Kth consuming end according to the Kth first queue preconfigured by the middleware server,
where the number of pushed messages is the number of messages pushed by the target thread pool to the (K-1)th consuming end, and while the target thread pool pushes that number of messages to the (K-1)th consuming end, it does not push messages to a first consuming end, the first consuming end being different from the target consuming end.
In some possible cases, if the message processing speed of a certain consuming end is very high, i.e., it can quickly process the messages it receives, its number of receivable messages remains greater than zero, so the middleware server keeps transmitting messages from the second queue corresponding to that consuming end to the common thread pool and no messages get pushed to the other consuming ends. To avoid this situation, the following steps may be performed:
When the number of receivable messages of the (K-1)th consuming end is detected to be greater than zero, a message is pushed to the (K-1)th consuming end and the number of messages pushed to it is recorded as x, where x is the number of messages pushed consecutively to the (K-1)th consuming end. If the number of receivable messages of the (K-1)th consuming end is still greater than zero, another message is pushed to it and the count is recorded as x+1. These steps are executed in a loop until the number of messages pushed to the (K-1)th consuming end is not less than a preset value (e.g., 1000, which can be set as needed), after which messages are pushed to the next consuming end; that is, the number of receivable messages of the Kth consuming end is determined according to the Kth first queue, and the same loop is executed for the Kth consuming end: when its number of receivable messages is detected to be greater than zero, a message is pushed and the pushed count is recorded; if it can still receive messages, another message is pushed and the count is increased by one, until the number of messages pushed to the Kth consuming end is not less than the preset value.
In this embodiment, more messages can be pushed to the (K-1)th consuming end while its consumption speed is high, and messages are pushed to the Kth consuming end once the number of messages pushed to the (K-1)th consuming end reaches a certain threshold. In this way, more messages are pushed to consuming ends with high consumption speed and fewer to those with low consumption speed, improving message pushing efficiency.
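The fairness rule of this embodiment, pushing to one consuming end while it has capacity but capping the consecutive pushes before moving on, can be sketched as follows (the function, field names, and data layout are illustrative assumptions):

```python
def push_with_cap(consumers, cap):
    """Push to each consumer in turn while it has capacity and cached
    messages, but stop after `cap` consecutive pushes so that one fast
    consumer cannot starve the others."""
    order = []
    for c in consumers:
        pushed = 0
        while c["capacity"] > 0 and c["cache"] and pushed < cap:
            order.append((c["name"], c["cache"].pop(0)))
            c["capacity"] -= 1
            pushed += 1
    return order
```

With the preset value from the text (1000 in the example), a fast consumer drains up to 1000 messages per visit, and the loop then hands the push thread to the next consumer regardless of remaining capacity.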
In order to facilitate understanding of the message processing method provided in this embodiment, a description is provided herein of a practical application of the message processing method, as shown in fig. 2, specifically referring to the following examples:
When the RabbitMQ-SDK consumer client (hereinafter referred to as the consuming end) starts, it can call the metadata service to obtain a middleware server (MQ-Proxy) node and establish a connection with the middleware server. After the consuming end is connected to the middleware server, the message processing flow of the middleware server is as follows:
1. A message queue (AmqpConsumer queue) is created for each consuming end; this is the target queue described above. A semaphore is created at the same time, whose value is the number of consumer instances (AmqpConsumer, equivalent to the elements described above) in the message queue.
Specifically, after receiving a consumption request from the consuming end, the middleware server generates a consumer instance and puts it into the message queue. For example, instances may be added 8 times at initial startup, so the initial semaphore is 8, meaning the consuming end can receive 8 consumption messages.
The middleware server creates a consumer wrapper instance (InnerConsumerWrapper) for the message queue of each connected consumer (RocketMQ-Broker); the wrapper holds the message queue and the semaphore of the message queue.
2. A consumption instance (pushConsumer) is created through the consumer wrapper instance, and a message consumption service (CustomConsumeMessageConcurrentlyService) of the consuming end corresponding to the message queue is set, which wraps the message push logic (pushMessage) for the consuming end corresponding to the consumption instance.
3. The consumption instance pulls a message from the producing end and wraps the pulled message into a message to be pushed (ConsumeRequest, a thread task implementation class of the client consumption thread pool).
4. A cache queue (ConsumeRequest blocking queue) is created and the message to be pushed is placed into the cache queue.
5. A unified message push thread (PushMessageToConsumerService) polls and traverses the message consumption services corresponding to all consuming ends connected to the middleware server, and invokes the message push logic in each message consumption service.
6. The message push logic obtains the semaphore of the message queue corresponding to the consuming end by calling the tryAcquire method.
7. According to the semaphore, the message to be pushed in the corresponding cache queue is put into the consumption thread pool (ConsumeExecutor).
If the acquired semaphore is greater than zero, a message to be pushed is pulled from the cache queue corresponding to the message queue and submitted to the consumption thread pool for execution. If the consumption speed of the consuming end is extremely high at this time, the semaphore stays greater than zero, so the unified message push thread can cyclically acquire the semaphore of the message queue corresponding to that consuming end and push messages to be pushed, letting a fast consuming end receive as many messages as possible; however, at most 1000 are pushed (this value is adjustable), and after the maximum push count is reached the semaphore of the next consuming end's message queue is acquired. This lets a consuming end make full use of its consumption capacity and consume messages quickly, while avoiding the situation where messages are always pushed to one consuming end so that the others cannot obtain messages. If the semaphore of the message queue corresponding to the consuming end is less than or equal to zero, the consuming end currently has no capacity to receive messages and no message needs to be pushed; the message consumption service of the next consuming end is polled directly, and step 5 is executed again to acquire the semaphore of the message queue corresponding to the next consuming end (i.e., as described above, in the case where the number of pushed messages of the (K-1)th consuming end is not less than the preset threshold, the number of receivable messages of the Kth consuming end is determined according to the Kth first queue preconfigured by the middleware server, where the number of pushed messages is the number of messages pushed by the target thread pool to the (K-1)th consuming end, and while the target thread pool pushes those messages to the (K-1)th consuming end it does not push messages to the first consuming end, the first consuming end being different from the target consuming end).
8. The message to be pushed in the consumption thread pool is pushed to the consuming end.
The message to be pushed is pushed from the consumption thread pool to the corresponding consuming end by running the message consumption logic method (consumeMessage) in the message listener (CustomMessageListenerConcurrently).
9. The method corresponding to the message consumption logic is run: a consumer instance is obtained from the message queue through the consumer wrapper instance, the semaphore is decreased by 1, and the consuming end's own message push logic method (responseMessage) in the consumer instance is executed; the message push logic pushes the message to be pushed to the consuming end through a channel supporting the AMQP protocol (i.e., as described above, a target element is extracted from the target queue, the target element being any element in the target queue and indicating a transmission channel between the middleware server and the target consuming end, and pushing the target message in the target thread pool to the target consuming end includes pushing the target message in the target thread pool to the target consuming end through the transmission channel).
10. After the consuming end receives the message and runs its own message consumption service logic, it sends a feedback message (ack) to the message middleware.
11. The message middleware receives the feedback message, commits the message consumption offset, generates a consumer instance, puts it into the message queue (adding one consumer instance), and increases the semaphore by 1 (i.e., as described above, the feedback message sent by the target consuming end is received, the feedback message indicating that the target consuming end has completed consumption based on the target message, and an element is added to the target queue in response to the feedback message to update the target queue).
12. Steps 3 to 11 are executed in a loop.
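The overall cycle of steps 3 to 11 can be condensed into a single-consumer simulation with immediate acks (a simplified sketch under those assumptions, not the actual implementation):

```python
from collections import deque

def run_proxy_cycle(produced, capacity):
    """Single-consumer walk through steps 3-11 with immediate acks:
    pull -> cache -> acquire a permit -> push -> ack restores the permit."""
    cache = deque(produced)   # steps 3-4: pulled messages wait here
    permits = capacity        # step 1: initial semaphore value
    delivered = []
    while cache and permits > 0:
        permits -= 1                       # steps 6-7: tryAcquire succeeds
        delivered.append(cache.popleft())  # steps 8-9: push to the consumer
        permits += 1                       # steps 10-11: ack releases a permit
    return delivered
```

Because every ack restores a permit, even a capacity of 1 eventually drains the cache; with a capacity of 0 the loop never pushes, which mirrors how a full consumer's messages stay in the cache queue instead of blocking the thread pool.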
In this embodiment, a unified message push thread service is used in combination with semaphores and cache queues, so that a reasonable number of messages can be pushed based on awareness of the number of pushable and consumable messages at the server side, improving message push performance. The push requirements of massive numbers of messages can thus be met with a small amount of thread resources, avoiding wasted CPU and memory resources, increasing single-machine message push TPS, and saving machine resources.
It should be noted that, for the message processing method applied to the middleware server provided in the embodiment of the present application, the execution body may be a message processing apparatus applied to the middleware server. In the embodiment of the present application, the message processing apparatus applied to the middleware server is taken as an example of executing the message processing method applied to the middleware server, to describe the message processing apparatus applied to the middleware server provided by the embodiment of the present application.
Fig. 3 is a schematic structural diagram of a message processing apparatus applied to a first electronic device according to an embodiment of the present application. The message processing apparatus 300 applied to the first electronic device may include:
a first determining module 301, configured to determine, according to a target queue preconfigured by the middleware server, the number of messages receivable by the target consuming end, where the target queue is a queue of the target consuming end;
the first transmission module 302 is configured to, if the number of receivable messages of the target consuming end is greater than zero and at least one message is cached in a cache queue preconfigured by the middleware server, transmit the target message in the cache queue to a target thread pool of the middleware server, where the cache queue corresponds to the target queue and is configured to cache the messages pulled by the middleware server from the target producing end; the target message is the message located at the head of the cache queue;
a first pushing module 303, configured to push the target message in the target thread pool to the target consumer.
According to the embodiment of the application, the number of the messages which can be received by the target consumption terminal can be determined according to the target queue which is preconfigured by the middleware server; if the number of receivable messages of the target consuming end is greater than zero, if at least one message is cached in a cache queue pre-configured by the middleware server, transmitting the target message in the cache queue to a target thread pool of the middleware server; and pushing the target message in the target thread pool to the target consumer. Based on the number of receivable messages of the target consumption end, the target thread pool is utilized to push the messages in the cache queue associated with the target queue to the target consumption end, so that the situation that the messages occupy the resources of the thread pool to cause blocking can be avoided under the condition that the target consumption end cannot receive the messages, the message pushing efficiency is improved, and the computing resources are saved.
In one embodiment, the number of elements in the target queue matches the number of messages that the target consumer can receive;
the message processing apparatus 300 further includes:
and the first updating module is used for reducing one element in the target queue after transmitting the target message in the cache queue to the target thread pool of the middleware server so as to update the target queue.
In one embodiment, the first update module includes:
the extraction unit is used for extracting a target element from the target queue, wherein the target element is any element in the target queue, and the target element is used for indicating a transmission channel between the middleware server and the target consumption end;
the first pushing module 303 includes:
and the pushing unit is used for pushing the target message in the target thread pool to the target consumption end through the transmission channel.
In one embodiment, the message processing apparatus 300 further includes:
the second receiving module is used for receiving a feedback message sent by the target consumption end, wherein the feedback message is used for indicating that the target consumption end completes consumption based on the target message;
and the second updating module is used for adding an element in the target queue to update the target queue in response to the feedback message.
In one embodiment, N first queues and N second queues are preconfigured in the middleware server, the N first queues are queues of N consuming ends, the N first queues correspond to the N second queues, and N is a positive integer;
the first determining module 301 includes:
a first confirmation unit, configured to determine, according to a kth first queue preconfigured by the middleware server, the number of receivable messages at the kth consumer, in the case that the number of receivable messages at the kth-1 consumer is zero,
the target queue is the Kth first queue, the target consuming end is the Kth consuming end, the cache queue is the queue corresponding to the Kth first queue in the N second queues, and K is an integer greater than 1 and less than or equal to N.
In one embodiment, the first transmission module 302 includes:
the first transmission unit is configured to, if the number of receivable messages of the Kth consuming end is greater than zero and at least one message is cached in a cache queue preconfigured by the middleware server, transmit the target message in the cache queue to a common thread pool of the middleware server, where the common thread pool is the target thread pool and is configured to receive the messages transmitted by the N second queues.
In one embodiment, N first queues and N second queues are preconfigured in the middleware server, the N first queues are queues of N consuming ends, the N first queues correspond to the N second queues, and N is a positive integer;
the first determining module 301 includes:
a second determining unit, configured to determine, according to a kth first queue preconfigured by the middleware server, the number of messages receivable by the kth consumer in a case that the number of push messages of the kth-1 consumer is not less than a preset threshold,
the number of pushed messages is the number of messages pushed by the target thread pool to the (K-1)th consuming end, and while the target thread pool pushes that number of messages to the (K-1)th consuming end, the target thread pool does not push messages to the first consuming end, the first consuming end being different from the target consuming end.
In one embodiment, the message processing apparatus 300 further includes:
the first creating module is used for creating a target queue, wherein the target queue comprises a preset number of elements, and the elements are used for indicating a transmission channel between the middleware server and the target consumption end;
a second creation module for creating a semaphore based on the number of elements in the target queue, the semaphore indicating the number of messages receivable by the target consumer.
In one embodiment, the message processing apparatus 300 further includes:
a third creation module, configured to create a cache queue corresponding to the target queue;
the acquisition module is used for acquiring the production message from the target production end;
and the caching module is used for caching the production message to the cache queue.
The consuming end and the producing end in the embodiment of the present application may be terminals or devices other than terminals. The first electronic device may be a mobile phone, a tablet computer, a notebook computer, a palm computer, a vehicle-mounted electronic device, a mobile internet device (Mobile Internet Device, MID), an augmented reality (augmented reality, AR)/virtual reality (Virtual Reality, VR) device, a robot, a wearable device, an ultra-mobile personal computer (ultra-mobile personal computer, UMPC), a netbook, or a personal digital assistant (personal digital assistant, PDA), and may also be a network attached storage (Network Attached Storage, NAS), a personal computer (personal computer, PC), a television (Television, TV), a teller machine, a self-service machine, or the like, which is not limited in the embodiment of the present application.
The first electronic device in the embodiment of the application may be an electronic device having an operating system. The operating system may be an Android operating system, an iOS operating system, or other possible operating systems, and the embodiment of the present application is not limited specifically.
The message processing apparatus applied to the middleware server provided in the embodiment of the present application can implement each process in the embodiments of the message processing method applied to the middleware server shown in figs. 1 to 2; to avoid repetition, details are not described here again.
Optionally, as shown in fig. 4, the embodiment of the present application further provides an electronic device 400, including a processor 401 and a memory 402, where the memory 402 stores a program or an instruction that can be executed on the processor 401, and the program or the instruction implements each step of the above-mentioned message processing method embodiment applied to the middleware server when executed by the processor 401, and the steps can achieve the same technical effect, so that repetition is avoided and no redundant description is provided herein.
In one embodiment, processor 401 may include a Central Processing Unit (CPU), or an application specific integrated circuit (Application Specific Integrated Circuit, ASIC), or may be configured as one or more integrated circuits implementing embodiments of the present application.
In one embodiment, memory 402 may include Read-Only Memory (ROM), Random Access Memory (RAM), magnetic disk storage media devices, optical storage media devices, flash memory devices, or electrical, optical, or other physical/tangible memory storage devices. Thus, in general, memory 402 comprises one or more tangible (non-transitory) computer-readable storage media (e.g., memory devices) encoded with software comprising computer-executable instructions, and when the software is executed (e.g., by one or more processors) it is operable to perform the operations described with reference to the message processing method applied to a middleware server in accordance with an embodiment of the application.
Fig. 5 is a schematic diagram of a hardware structure of an electronic device implementing an embodiment of the present application.
The electronic device 500 includes, but is not limited to: radio frequency unit 501, network module 502, audio output unit 503, input unit 504, sensor 505, display unit 506, user input unit 507, interface unit 508, memory 509, and processor 510.
Those skilled in the art will appreciate that the electronic device 500 may further include a power source (e.g., a battery) for powering the various components, and that the power source may be logically coupled to the processor 510 via a power management system to perform functions such as managing charging, discharging, and power consumption via the power management system. The electronic device structure shown in fig. 5 does not constitute a limitation of the electronic device, and the electronic device may include more or less components than shown, or may combine certain components, or may be arranged in different components, which are not described in detail herein.
If the electronic device 500 is a middleware server, wherein,
the processor 510 is configured to: determining the number of messages receivable by a target consuming end according to a target queue pre-configured by a middleware server, wherein the target queue is a queue of the target consuming end;
if the number of receivable messages of the target consuming end is greater than zero and at least one message is cached in a cache queue preconfigured by the middleware server, transmitting the target message in the cache queue to a target thread pool of the middleware server, wherein the cache queue corresponds to the target queue and is used for caching the messages pulled by the middleware server from the target producing end; the target message is the message located at the head of the cache queue;
Pushing the target message in the target thread pool to the target consuming end.
According to the embodiment of the application, the number of the messages which can be received by the target consumption terminal can be determined according to the target queue which is preconfigured by the middleware server; if the number of receivable messages of the target consuming end is greater than zero, if at least one message is cached in a cache queue pre-configured by the middleware server, transmitting the target message in the cache queue to a target thread pool of the middleware server; and pushing the target message in the target thread pool to the target consumer. Based on the number of receivable messages of the target consumption end, the target thread pool is utilized to push the messages in the cache queue associated with the target queue to the target consumption end, so that the situation that the messages occupy the resources of the thread pool to cause blocking can be avoided under the condition that the target consumption end cannot receive the messages, the message pushing efficiency is improved, and the computing resources are saved.
In one embodiment, the number of elements in the target queue matches the number of messages that the target consumer can receive;
the processor 510 is further configured to: remove one element from the target queue to update the target queue.
In one embodiment, the processor 510 is further configured to: extract a target element from the target queue, wherein the target element is any element in the target queue and is used for indicating a transmission channel between the middleware server and the target consuming end;
and push the target message in the target thread pool to the target consuming end through the transmission channel.
In one embodiment, the processor 510 is further configured to: receive a feedback message sent by the target consuming end, wherein the feedback message indicates that the target consuming end has finished consuming the target message;
and add an element to the target queue in response to the feedback message to update the target queue.
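The element bookkeeping in these embodiments resembles a credit scheme; a minimal illustrative sketch (the class and method names are assumed, not from the patent): each queue element is one in-flight slot, removed before a push and returned when the consuming end's feedback message arrives.

```python
import queue

class CreditGate:
    """Target queue as a pool of per-consumer transmission-channel credits."""

    def __init__(self, channels):
        self.target_queue = queue.Queue()
        for ch in channels:              # one element per in-flight slot
            self.target_queue.put(ch)

    def acquire_channel(self):
        """Remove one element before pushing; None means no capacity left."""
        try:
            return self.target_queue.get_nowait()
        except queue.Empty:
            return None

    def on_feedback(self, channel):
        """Feedback received: the consumer finished, return the element."""
        self.target_queue.put(channel)
```

With this shape, the queue size at any moment equals the number of messages the consuming end can still receive, matching the "number of elements matches receivable messages" condition.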
In one embodiment, N first queues and N second queues are preconfigured in the middleware server, the N first queues are the queues of N consuming ends, the N first queues correspond to the N second queues, and N is a positive integer;
the processor 510 is further configured to:
in a case where the number of receivable messages of the (K-1)th consuming end is zero, determine the number of receivable messages of the Kth consuming end according to a Kth first queue preconfigured by the middleware server,
wherein the target queue is the Kth first queue, the target consuming end is the Kth consuming end, the cache queue is the queue corresponding to the Kth first queue among the N second queues, and K is an integer greater than 1 and less than or equal to N.
In one embodiment, the processor 510 is further configured to:
if the number of receivable messages of the Kth consuming end is greater than zero and at least one message is cached in the cache queue preconfigured by the middleware server, transmit the target message in the cache queue to a public thread pool of the middleware server, wherein the public thread pool is the target thread pool and is used for receiving the messages transmitted by the N second queues.
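A hedged sketch of this multi-consumer scan (all names are assumptions): consuming ends whose receivable-message count is zero are skipped, and every cache queue feeds the same shared pool, playing the role of the public thread pool.

```python
import queue
from concurrent.futures import ThreadPoolExecutor

def scan_consumers(first_queues, second_queues, pool, push):
    """Visit consumers in order, skipping any whose capacity is zero."""
    dispatched = []
    # K runs from 1 to N, mirroring the patent's Kth consuming end
    for k, (fq, sq) in enumerate(zip(first_queues, second_queues), start=1):
        if fq.qsize() == 0:              # receivable count is zero: skip
            continue
        try:
            msg = sq.get_nowait()        # head of the Kth cache queue
        except queue.Empty:
            continue
        ch = fq.get_nowait()             # take one capacity element
        pool.submit(push, ch, msg)       # shared pool serves all N queues
        dispatched.append(k)
    return dispatched
```

Because the pool is shared, one saturated consuming end cannot hold threads while others have capacity and pending messages.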
In one embodiment, N first queues and N second queues are preconfigured in the middleware server, the N first queues are the queues of N consuming ends, the N first queues correspond to the N second queues, and N is a positive integer;
the processor 510 is further configured to: in a case where the number of push messages of the (K-1)th consuming end is not less than a preset threshold, determine the number of receivable messages of the Kth consuming end according to a Kth first queue preconfigured by the middleware server,
wherein the number of push messages is the number of messages pushed by the target thread pool to the (K-1)th consuming end, and while the target thread pool pushes that number of messages to the (K-1)th consuming end, the target thread pool does not push messages to a first consuming end, the first consuming end being different from the target consuming end.
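The threshold behavior can be illustrated as follows (the function name and parameters are assumptions, not the patent's interface): at most a preset number of messages go to one consuming end in a row, after which dispatch moves on to the next consuming end so a busy consumer cannot starve the others.

```python
import queue

def drain_with_threshold(cache_queue: queue.Queue, capacity: int,
                         threshold: int):
    """Push up to `threshold` messages to one consumer, then yield."""
    pushed = []
    while len(pushed) < threshold and capacity > 0:
        try:
            pushed.append(cache_queue.get_nowait())
        except queue.Empty:
            break                        # cache queue drained early
        capacity -= 1                    # one receivable slot consumed
    return pushed  # caller moves on to the Kth consumer afterwards
```

Stopping at `threshold` rather than draining the whole cache queue is what lets the scheduler turn to the Kth consuming end once the (K-1)th has received its quota.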
In one embodiment, the processor 510 is further configured to:
create a target queue, wherein the target queue includes a preset number of elements, and each element is used for indicating a transmission channel between the middleware server and the target consuming end;
and create a semaphore based on the number of elements in the target queue, the semaphore indicating the number of messages receivable by the target consuming end.
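A minimal initialisation sketch under the same assumptions, using Python's `threading.Semaphore` as a stand-in for the patent's semaphore: the target queue holds one channel element per permitted in-flight message, and the semaphore is sized from the element count.

```python
import queue
import threading

def create_target_queue(channel, preset_count: int):
    """Build the per-consumer target queue and its matching semaphore."""
    target_queue = queue.Queue()
    for _ in range(preset_count):
        target_queue.put(channel)        # each element denotes the channel
    # semaphore value mirrors the number of receivable messages
    semaphore = threading.Semaphore(target_queue.qsize())
    return target_queue, semaphore
```

After `preset_count` acquisitions the semaphore blocks (or fails a non-blocking acquire), which is exactly the "receivable messages is zero" condition of the earlier steps.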
In one embodiment, the processor 510 is further configured to:
create a cache queue corresponding to the target queue;
obtain a production message from the target producing end;
and cache the production message to the cache queue.
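An illustrative sketch of this caching step (the producer interface is an assumption): production messages pulled from the producing end are buffered FIFO in the cache queue, so the head of the queue is the next target message to dispatch.

```python
import queue

def pull_into_cache(producer_messages, cache_queue: queue.Queue) -> int:
    """Cache pulled production messages; return the cached count."""
    for msg in producer_messages:        # messages obtained from the producer
        cache_queue.put(msg)             # FIFO: head is pushed first
    return cache_queue.qsize()
```

The cache queue decouples the producing end's rate from the consuming end's capacity: the producer never waits on the consumer, and dispatch later drains the queue at the consumer's pace.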
It should be appreciated that in embodiments of the present application, the input unit 504 may include a graphics processor (Graphics Processing Unit, GPU) 5041 and a microphone 5042; the graphics processor 5041 processes image data of still pictures or video obtained by an image capturing device (e.g., a camera) in a video capturing mode or an image capturing mode. The display unit 506 may include a display panel 5061, and the display panel 5061 may be configured in the form of a liquid crystal display, an organic light emitting diode, or the like. The user input unit 507 includes at least one of a touch panel 5071 and other input devices 5072. The touch panel 5071, also referred to as a touch screen, may include two parts: a touch detection device and a touch controller. Other input devices 5072 may include, but are not limited to, a physical keyboard, function keys (e.g., volume control keys, switch keys, etc.), a trackball, a mouse, and a joystick, which are not described in detail herein.
The memory 509 may be used to store software programs as well as various data. The memory 509 may mainly include a first storage area storing programs or instructions and a second storage area storing data, wherein the first storage area may store an operating system, and application programs or instructions required for at least one function (such as a sound playing function, an image playing function, etc.). Further, the memory 509 may include volatile memory or nonvolatile memory, or both. The nonvolatile memory may be a Read-Only Memory (ROM), a Programmable ROM (PROM), an Erasable PROM (EPROM), an Electrically Erasable PROM (EEPROM), or a flash memory. The volatile memory may be Random Access Memory (RAM), Static RAM (SRAM), Dynamic RAM (DRAM), Synchronous DRAM (SDRAM), Double Data Rate SDRAM (DDR SDRAM), Enhanced SDRAM (ESDRAM), Synchlink DRAM (SLDRAM), or Direct Rambus RAM (DRRAM). The memory 509 in embodiments of the present application includes, but is not limited to, these and any other suitable types of memory.
Processor 510 may include one or more processing units; optionally, the processor 510 integrates an application processor that primarily processes operations involving an operating system, user interface, application programs, etc., and a modem processor that primarily processes wireless communication signals, such as a baseband processor. It will be appreciated that the modem processor described above may not be integrated into the processor 510.
The embodiment of the application also provides a readable storage medium storing a program or an instruction; when executed by a processor, the program or instruction implements the processes of the foregoing message processing method embodiment applied to the middleware server and can achieve the same technical effects. To avoid repetition, details are not repeated here.
The processor is a processor in the electronic device in the above embodiment. The readable storage medium includes a computer readable storage medium, and examples of the computer readable storage medium include a non-transitory computer readable storage medium such as ROM, RAM, magnetic disk, or optical disk.
The embodiment of the application also provides a chip, which includes a processor and a communication interface coupled to the processor; the processor is configured to run programs or instructions to implement the processes of the foregoing message processing method embodiment applied to the middleware server and can achieve the same technical effects. To avoid repetition, details are not repeated here.
It should be understood that the chip referred to in the embodiments of the present application may also be referred to as a system-level chip, a chip system, or a system-on-a-chip, etc.
Embodiments of the present application provide a computer program product stored in a storage medium, where the program product is executed by at least one processor to implement the respective processes of the message processing method embodiment applied to a middleware server as described above, and achieve the same technical effects, and for avoiding repetition, a detailed description is omitted herein.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element. Furthermore, it should be noted that the scope of the methods and apparatus in the embodiments of the present application is not limited to performing the functions in the order shown or discussed; depending on the functions involved, the functions may be performed in a substantially simultaneous manner or in a reverse order. For example, the described methods may be performed in an order different from that described, and various steps may be added, omitted, or combined. Additionally, features described with reference to certain examples may be combined in other examples.
From the above description of the embodiments, it will be clear to those skilled in the art that the methods of the above embodiments may be implemented by software plus a necessary general-purpose hardware platform, and may of course also be implemented by hardware, though in many cases the former is the preferred implementation. Based on such understanding, the technical solution of the present application, in essence or in the part contributing to the related art, may be embodied in the form of a computer software product stored in a storage medium (such as ROM/RAM, a magnetic disk, or an optical disk), including several instructions for causing a terminal (which may be a mobile phone, a computer, a server, a network device, etc.) to perform the methods according to the embodiments of the present application.
The embodiments of the present application have been described above with reference to the accompanying drawings, but the present application is not limited to the above-described embodiments, which are merely illustrative and not restrictive, and many forms may be made by those having ordinary skill in the art without departing from the spirit of the present application and the scope of the claims, which are to be protected by the present application.

Claims (11)

1. A message processing method, applied to a middleware server, the method comprising:
determining the number of messages which can be received by a target consumption end according to a target queue pre-configured by the middleware server, wherein the target queue is a queue of the target consumption end;
if the number of receivable messages of the target consuming end is greater than zero and at least one message is cached in a cache queue pre-configured by the middleware server, transmitting the target message in the cache queue to a target thread pool of the middleware server, wherein the cache queue corresponds to the target queue and is used for caching messages pulled by the middleware server from a target producing end, and the target message is the message at the head of the cache queue;
pushing the target message in the target thread pool to the target consuming end.
2. The method of claim 1, wherein the number of elements in the target queue matches the number of messages receivable by the target consumer;
after the target message in the cache queue is transmitted to the target thread pool of the middleware server, the method further comprises:
removing one element from the target queue to update the target queue.
3. The method of claim 2, wherein said reducing an element in said target queue comprises:
extracting a target element from the target queue, wherein the target element is any element in the target queue and is used for indicating a transmission channel between the middleware server and the target consumption end;
pushing the target message in the target thread pool to the target consumer, including:
pushing the target message in the target thread pool to the target consuming end through the transmission channel.
4. The method of claim 2, wherein after pushing the target message in the target thread pool to the target consuming end, the method further comprises:
receiving a feedback message sent by the target consumption end, wherein the feedback message is used for indicating that the target consumption end finishes consumption based on the target message;
an element is added to the target queue in response to the feedback message to update the target queue.
5. The method of claim 1, wherein N first queues and N second queues are preconfigured in the middleware server, the N first queues are queues respectively corresponding to N consuming ends, the N first queues correspond to the N second queues, and N is a positive integer;
The determining the number of the messages receivable by the target consumer according to the target queue preconfigured by the middleware server comprises the following steps:
in a case where the number of receivable messages of the (K-1)th consuming end is zero, determining the number of receivable messages of the Kth consuming end according to a Kth first queue preconfigured by the middleware server,
wherein the target queue is the Kth first queue, the target consuming end is the Kth consuming end, the cache queue is the queue corresponding to the Kth first queue among the N second queues, and K is an integer greater than 1 and less than or equal to N.
6. The method according to claim 5, wherein, in the case that the number of receivable messages of the target consuming end is greater than zero, if at least one message is cached in the cache queue pre-configured by the middleware server, transmitting the target message in the cache queue to the target thread pool of the middleware server comprises:
in a case where the number of receivable messages of the Kth consuming end is greater than zero and at least one message is cached in the cache queue preconfigured by the middleware server, transmitting the target message in the cache queue to a public thread pool of the middleware server, wherein the public thread pool is the target thread pool, and the public thread pool is used for receiving the messages transmitted by the N second queues.
7. The method of claim 1, wherein N first queues and N second queues are preconfigured in the middleware server, the N first queues are queues respectively corresponding to N consuming ends, the N first queues correspond to the N second queues, and N is a positive integer;
the determining the number of the messages receivable by the target consumer according to the target queue preconfigured by the middleware server comprises the following steps:
in a case where the number of push messages of the (K-1)th consuming end is not less than a preset threshold, determining the number of receivable messages of the Kth consuming end according to a Kth first queue preconfigured by the middleware server,
wherein the number of push messages is the number of messages pushed by the target thread pool to the (K-1)th consuming end, and while the target thread pool pushes the messages corresponding to the number of push messages to the (K-1)th consuming end, the target thread pool does not push messages to a first consuming end, the first consuming end being different from the (K-1)th consuming end.
8. The method of claim 1, wherein before determining the number of messages receivable by the target consumer based on the target queue preconfigured by the middleware server, further comprises:
creating a target queue, wherein the target queue includes a preset number of elements, and each element is used for indicating a transmission channel between the middleware server and the target consuming end;
a semaphore is created based on the number of elements in the target queue, the semaphore indicating the number of messages that can be received by the target consumer.
9. The method of claim 8, wherein before determining the number of messages receivable by the target consumer based on the target queue preconfigured by the middleware server, further comprises:
creating a cache queue corresponding to the target queue;
obtaining a production message from the target production end;
and caching the production message to the cache queue.
10. A message processing apparatus for use with a middleware server, the apparatus comprising:
the first determining module is used for determining the number of messages which can be received by a target consuming end according to a target queue which is preconfigured by the middleware server, wherein the target queue is a queue of the target consuming end;
the first transmission module is used for, in a case where the number of receivable messages of the target consuming end is greater than zero and at least one message is cached in a cache queue pre-configured by the middleware server, transmitting the target message in the cache queue to a target thread pool of the middleware server, wherein the cache queue corresponds to the target queue and is used for caching messages pulled by the middleware server from a target producing end, and the target message is the message at the head of the cache queue;
and the first pushing module is used for pushing the target message in the target thread pool to the target consuming end.
11. An electronic device, the electronic device comprising: a processor and a memory storing a program or instructions executable on the processor, which when executed by the processor, implement the steps of the message processing method of any of claims 1 to 9.
CN202310512131.7A 2023-05-08 2023-05-08 Message processing method, device and equipment Pending CN117170891A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310512131.7A CN117170891A (en) 2023-05-08 2023-05-08 Message processing method, device and equipment


Publications (1)

Publication Number Publication Date
CN117170891A true CN117170891A (en) 2023-12-05

Family

ID=88934222




Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination