CN115766610A - Message queue based on publish-subscribe - Google Patents

Message queue based on publish-subscribe

Info

Publication number: CN115766610A
Application number: CN202211320387.XA
Authority: CN (China)
Prior art keywords: message, queue, data, topic, message queue
Legal status: Pending
Other languages: Chinese (zh)
Inventors: 吕广喆, 任晓瑞, 邸海涛, 甄超, 李康, 齐舸
Current assignee: Xian Aeronautics Computing Technique Research Institute of AVIC
Original assignee: Xian Aeronautics Computing Technique Research Institute of AVIC
Application filed by: Xian Aeronautics Computing Technique Research Institute of AVIC
Priority date / Filing date: 2022-10-26
Publication date: 2023-03-07
Priority to: CN202211320387.XA
Publication of: CN115766610A

Abstract

The invention belongs to the technical field of computer system software, and particularly relates to a message queue based on publish-subscribe. The invention establishes message routing relations through topics, designs and implements queue buffering to persist messages, ensures that all receiving ends see consistent messages, supports highly concurrent access with large data volumes, decouples the sending end from the receiving end, and improves the performance of a distributed system. The invention implements a message queue with message persistence, multi-node concurrency and asynchronous communication capabilities, meeting the communication requirements of a distributed system.

Description

Message queue based on publish-subscribe
Technical Field
The invention belongs to the technical field of computer system software, and particularly relates to a message queue based on publish-subscribe.
Background
In a distributed system environment there are a variety of communication needs. For example: communication nodes come online in a particular order, so message persistence must be supported to keep the states of the nodes consistent; in a many-to-one communication scenario, multiple sending ends send messages to a single receiving end at the same time and messages may be lost because the receiving end's throughput is insufficient, so highly concurrent access must be supported; and after the sending end sends a message, the receiving end needs to perform a series of processing steps before returning a result to the sending end, so asynchronous communication must be supported to avoid blocking the sending end for a long time and lowering program efficiency.
Disclosure of Invention
In view of this, the present invention provides a message queue based on publish-subscribe, which establishes message routing relations through topics, designs and implements queue buffering to persist messages, ensures consistency of the messages seen by all receiving ends, supports highly concurrent access with large data volumes, decouples the sending end from the receiving end, and enhances the performance of a distributed system. The invention implements a message queue with message persistence, multi-node concurrency and asynchronous communication capabilities, meeting the communication requirements of a distributed system.
In order to achieve the technical purpose, the invention adopts the following specific technical scheme:
a message queue based on publish-subscribe, the publish-subscribe objects including participants, publishers, subscribers, data readers, and data writers; the message queue is used for forwarding the subject message from the sending end to the receiving end;
the sending end comprises a participant, a publisher and a data writer;
the subject message is created by a data writer, the data writer is used for generating a type of subject message; one said publisher manages a plurality of data writers; one participant manages a plurality of publishers; the participant is used for initializing the node of the sending end and establishing network connection with the message queue;
the message queue is used for receiving the subject message, establishing different queues according to the type of the subject message, storing the subject message according to the queues and distributing the subject message to each data reader according to the requirement of the data reader.
Further, the message queue stores topic messages in the following manner:
topic messages are classified and stored by topic type, with a ring queue used to store the data; alternatively, the ring queue is divided into a plurality of partitions, and topic messages of one type are stored across the partitions according to their key attribute, so that topic messages of the same type but different key values can be processed in parallel;
wherein:
the message queue maintains, for each data reader, an index of the data to be read, and updates the index value after forwarding the data; a topic message in the message queue supports single reading or repeated reading; the size of the ring queue is configurable;
when a ring queue is full, the ring queue performs either transmission overwrite or transmission blocking; transmission blocking means that the message queue returns a transmission-failure return value to the sending end; transmission overwrite means that the oldest topic message in the ring queue is overwritten with the new topic message;
the receiving end comprises a participant, a subscriber and a data reader;
the topic message is forwarded by the message queue; each data reader receives one type of topic message; one subscriber manages a plurality of data readers; one participant manages a plurality of subscribers; the participant is used for initializing the receiving-end node and establishing a network connection with the message queue.
Further, the storing comprises caching;
when the message queue caches a topic message, it reads the content part of the topic message and caches that content part.
Further, the storing further comprises persistent storage; the persistent storage writes the content parts to disk in batches, by topic;
the persistently stored elements also include, for each receiving-end data reader, the index in the ring queue of the message to be read.
Further, in the persistent storage, batches are executed as follows: one batch is stored whenever the number of topic messages for a topic reaches 1000.
Further, when a data reader is newly added at the receiving end, the message queue pushes the historical topic messages of the topic corresponding to the newly added data reader in a single operation.
Further, the transmission method of the message queue comprises:
step 1, determining the configuration data, the network type used, and the network parameter information of the sending end, the message queue and the receiving end; the configuration data comprises the topic message type and quality-of-service information;
step 2, creating and starting the message queue, producer and consumer objects at different nodes;
step 3, establishing connection routing relations among the message queue, the producer and the consumer through message handshakes;
step 4, the producer sends a topic message;
step 5, a data reader in the message queue receives the topic message from the producer and stores it in memory or on disk in the appropriate data sub-queue;
step 6, a data writer in the message queue forwards the topic message stored in memory or on disk to the consumer;
step 7, the consumer receives the topic message.
Further, step 5 specifically includes: a data reader in the message queue receives the topic message from the sending end and judges whether the queue to which the topic message belongs has space; if so, the message is stored in the message queue; if not, the message is handled according to the queue-full handling mode configured for the message queue: in transmission-blocking mode, the received message is discarded and a transmission-failure return value is returned to the sending end; in transmission-overwrite mode, the oldest topic message in the queue is overwritten with the new topic message.
Further, step 6 specifically includes: the message queue judges whether the number of messages in the queue that received the new message has reached the threshold and whether persistent storage is configured; if both conditions hold, the messages are stored on disk, otherwise no action is taken.
Further, step 7 specifically includes: a data writer in the message queue forwards the messages stored in memory or on disk to the consumer and handles them according to the read mode configured for the message queue; for single reading, the data writer forwards the message to the first data reader that established a connection and then deletes the topic message from the queue; for repeated reading, the data writer deletes the topic message from the queue only after forwarding it to all receiving ends.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present disclosure, the drawings needed to be used in the embodiments will be briefly described below, and it is apparent that the drawings in the following description are only some embodiments of the present disclosure, and it is obvious for those skilled in the art that other drawings can be obtained according to the drawings without creative efforts.
FIG. 1 is a flow chart of message queue message transmission in an embodiment of the present invention;
FIG. 2 is a schematic diagram of message persistence in an embodiment of the present invention;
fig. 3 is a schematic diagram of high concurrency in an embodiment of the present invention.
Detailed Description
The embodiments of the present disclosure are described in detail below with reference to the accompanying drawings.
The embodiments of the present disclosure are described below with specific examples, and other advantages and effects of the present disclosure will be readily apparent to those skilled in the art from the disclosure of the present disclosure. It is to be understood that the described embodiments are merely illustrative of some, and not restrictive, of the embodiments of the disclosure. The disclosure may be embodied or carried out in various other specific embodiments, and various modifications and changes may be made in the details within the description without departing from the spirit of the disclosure. It is to be noted that the features in the following embodiments and examples may be combined with each other without conflict. All other embodiments, which can be derived by a person skilled in the art from the embodiments disclosed herein without inventive step, are intended to be within the scope of the present disclosure.
It is noted that various aspects of the embodiments are described below within the scope of the appended claims. It should be apparent that the aspects described herein may be embodied in a wide variety of forms and that any specific structure and/or function described herein is merely illustrative. Based on the disclosure, one skilled in the art should appreciate that one aspect described herein may be implemented independently of any other aspects and that two or more of these aspects may be combined in various ways. For example, an apparatus may be implemented and/or a method practiced using any number of the aspects set forth herein. Additionally, such an apparatus may be implemented and/or such a method may be practiced using other structure and/or functionality in addition to one or more of the aspects set forth herein.
It should be further noted that the drawings provided in the following embodiments are only for illustrating the basic idea of the present disclosure, and the drawings only show the components related to the present disclosure rather than being drawn according to the number, shape and size of the components in actual implementation, and the type, number and proportion of the components in actual implementation may be changed arbitrarily, and the layout of the components may be more complicated.
In addition, in the following description, specific details are provided to provide a thorough understanding of the examples. However, it will be understood by those skilled in the art that the aspects may be practiced without these specific details.
In one embodiment of the present invention, a message queue based on publish-subscribe is provided, where the publish-subscribe objects include participants, publishers, subscribers, data readers and data writers; the message queue is used for forwarding topic messages from the sending end to the receiving end;
the sending end comprises a participant, a publisher and a data writer;
the topic message is created by a data writer, and each data writer generates one type of topic message; one publisher manages a plurality of data writers; one participant manages a plurality of publishers; the participant is used for initializing the sending-end node and establishing a network connection with the message queue;
the message queue is used for receiving topic messages, establishing different queues according to topic message type, storing the topic messages by queue, and distributing the topic messages to each data reader according to the data reader's requirements.
In this embodiment, the message queue stores topic messages in the following manner:
topic messages are classified and stored by topic type, with a ring queue used to store the data; alternatively, the ring queue is divided into a plurality of partitions, and topic messages of one type are stored across the partitions according to their key attribute, so that topic messages of the same type but different key values can be processed in parallel;
wherein:
the message queue maintains, for each data reader, an index of the data to be read, and updates the index value after it forwards the data; a topic message in the message queue supports single reading or repeated reading; the size of the ring queue is configurable;
when a ring queue is full, the ring queue performs either transmission overwrite or transmission blocking; transmission blocking means that the message queue returns a transmission-failure return value to the sending end; transmission overwrite means that the oldest topic message in the ring queue is overwritten with the new topic message;
the receiving end comprises a participant, a subscriber and a data reader;
the topic message is forwarded by the message queue; each data reader receives one type of topic message; one subscriber manages a plurality of data readers; one participant manages a plurality of subscribers; the participant is used for initializing the receiving-end node and establishing a network connection with the message queue.
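For illustration, the topic-classified ring-queue storage described above can be sketched as follows. This is a minimal sketch rather than the patent's implementation; the class names (Message, RingQueue, TopicStore), the hash-based partition rule and the overwrite flag are assumptions introduced for the example.

```python
# Illustrative sketch of topic-classified ring-queue storage with optional
# key-based partitions; names and the partitioning rule are assumptions.
from dataclasses import dataclass

@dataclass
class Message:
    topic: str      # topic type used to select the queue group
    key: str        # key attribute used to select the partition
    payload: bytes

class RingQueue:
    """Fixed-capacity queue; the size is configurable, as stated above."""
    def __init__(self, capacity: int, overwrite_when_full: bool = False):
        self.capacity = capacity
        self.overwrite_when_full = overwrite_when_full
        self.items: list[Message] = []

    def push(self, msg: Message) -> bool:
        if len(self.items) < self.capacity:
            self.items.append(msg)
            return True
        if self.overwrite_when_full:      # "transmission overwrite"
            self.items.pop(0)             # drop the oldest topic message
            self.items.append(msg)
            return True
        return False                      # "transmission blocking": report failure

class TopicStore:
    """One group of ring queues per topic, partitioned by message key so that
    messages of the same topic but different keys can be handled in parallel."""
    def __init__(self, capacity: int = 1024, partitions: int = 4,
                 overwrite_when_full: bool = False):
        self.capacity = capacity
        self.partitions = partitions
        self.overwrite_when_full = overwrite_when_full
        self.queues: dict[tuple[str, int], RingQueue] = {}
        # per (topic, reader) index of the next message to read
        self.read_index: dict[tuple[str, str], int] = {}

    def store(self, msg: Message) -> bool:
        part = hash(msg.key) % self.partitions    # same key -> same partition
        queue = self.queues.setdefault(
            (msg.topic, part), RingQueue(self.capacity, self.overwrite_when_full))
        return queue.push(msg)
```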
In this embodiment, the storing includes caching;
when the message queue caches a topic message, it reads the content part of the topic message and caches that content part.
In this embodiment, the storage further comprises persistent storage; the persistent storage writes the content parts to disk in batches, by topic;
the persistently stored elements also include, for each receiving-end data reader, the index in the ring queue of the message to be read.
In this embodiment, in the persistent storage, batches are executed as follows: one batch is stored whenever a topic has accumulated 1000 topic messages.
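A batched persistence step of this kind might look like the sketch below; the 1000-message threshold comes from the text, while the JSON-lines file layout, the directory handling and all names are assumptions made for this example.

```python
# Sketch of batched per-topic persistence; the batch threshold of 1000 comes
# from the description, the on-disk format is an assumption.
import json
import os

BATCH_THRESHOLD = 1000   # persist one batch once a topic accumulates this many messages

class TopicPersister:
    def __init__(self, directory: str):
        self.directory = directory
        self.pending: dict[str, list] = {}   # topic -> buffered message contents
        os.makedirs(directory, exist_ok=True)

    def append(self, topic: str, content: dict, reader_indexes: dict) -> None:
        batch = self.pending.setdefault(topic, [])
        batch.append(content)
        if len(batch) >= BATCH_THRESHOLD:
            self._flush(topic, batch, reader_indexes)
            self.pending[topic] = []

    def _flush(self, topic: str, batch: list, reader_indexes: dict) -> None:
        # As described above, the persisted elements include the message
        # contents and each data reader's read index in the ring queue.
        record = {"topic": topic, "messages": batch,
                  "reader_indexes": reader_indexes}
        with open(os.path.join(self.directory, f"{topic}.log"),
                  "a", encoding="utf-8") as f:
            f.write(json.dumps(record) + "\n")
```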
In this embodiment, when a data reader is newly added at the receiving end, the message queue pushes the historical topic messages of the topic corresponding to the newly added data reader in a single operation.
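The one-time history push to a newly added data reader could be sketched as follows; the helper callables `history_of` and `send_to` and the read-index bookkeeping are placeholders assumed for the example.

```python
# Sketch of the one-time push of historical topic messages to a newly added
# data reader; the helper callables are assumptions for illustration.
def on_reader_added(topic: str, reader_id: str, history_of, send_to,
                    read_index: dict) -> int:
    """Push every stored historical message of `topic` to the new reader once,
    then record how far it has read so normal forwarding can continue."""
    history = history_of(topic)           # historical topic messages kept by the queue
    for msg in history:
        send_to(reader_id, msg)
    read_index[(topic, reader_id)] = len(history)
    return len(history)                   # number of history messages pushed
```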
In this embodiment, as shown in fig. 1, the transmission method of the message queue includes:
step 1, determining the configuration data, the network type used, and the network parameter information of the sending end, the message queue and the receiving end; the configuration data comprises the topic message type and quality-of-service information;
step 2, creating and starting the message queue, producer and consumer objects at different nodes;
step 3, establishing connection routing relations among the message queue, the producer and the consumer through message handshakes;
step 4, the producer sends a topic message;
step 5, a data reader in the message queue receives the topic message from the producer and stores it in memory or on disk in the appropriate data sub-queue;
step 6, a data writer in the message queue forwards the topic message stored in memory or on disk to the consumer;
step 7, the consumer receives the topic message.
In this embodiment, step 5 specifically includes: a data reader in the message queue receives the topic message from the sending end and judges whether the queue to which the topic message belongs has space; if so, the message is stored in the message queue; if not, the message is handled according to the queue-full handling mode configured for the message queue: in transmission-blocking mode, the received message is discarded and a transmission-failure return value is returned to the sending end; in transmission-overwrite mode, the oldest topic message in the queue is overwritten with the new topic message.
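Step 5 can be sketched as below, assuming a ring-queue object that exposes `has_space`, `push` and `pop_oldest`; the numeric return codes are also assumptions for illustration.

```python
# Sketch of step 5: the queue-side data reader stores an incoming topic
# message or applies the configured queue-full policy. Return codes and the
# ring-queue interface (has_space/push/pop_oldest) are assumptions.
SEND_OK, SEND_FAIL = 0, -1

def on_message_from_sender(msg, ring_queue, full_policy: str) -> int:
    """full_policy is 'block' (transmission blocking) or 'overwrite'
    (transmission overwrite)."""
    if ring_queue.has_space():
        ring_queue.push(msg)
        return SEND_OK
    if full_policy == "block":
        # discard the received message and report transmission failure
        return SEND_FAIL
    # 'overwrite': replace the oldest topic message with the new one
    ring_queue.pop_oldest()
    ring_queue.push(msg)
    return SEND_OK
```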
In this embodiment, step 6 specifically includes: the message queue judges whether the number of messages in the queue that received the new message has reached the threshold and whether persistent storage is configured; if both conditions hold, the messages are stored on disk, otherwise no action is taken.
In this embodiment, step 7 specifically includes: a data writer in the message queue forwards the messages stored in memory or on disk to the consumer and handles them according to the read mode configured for the message queue; for single reading, the data writer forwards the message to the first data reader that established a connection and then deletes the topic message from the queue; for repeated reading, the data writer deletes the topic message from the queue only after forwarding it to all receiving ends.
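The two read modes in step 7 can be sketched as follows; the `send_to` callable and the boolean "may delete" result are assumptions made for this example.

```python
# Sketch of step 7: forwarding a stored topic message according to the
# configured read mode. `send_to` is an assumed network helper; the function
# returns True when the message may be deleted from the queue.
def forward_topic_message(msg, connected_readers: list, read_mode: str,
                          send_to) -> bool:
    if read_mode == "single":
        # single reading: only the first data reader that established a
        # connection receives the message, which is then deleted
        if connected_readers:
            send_to(connected_readers[0], msg)
            return True
        return False
    # repeated reading: the message is kept until it has been forwarded to
    # all receiving ends, then deleted
    for reader in connected_readers:
        send_to(reader, msg)
    return bool(connected_readers)
```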
This embodiment is built on the participants, publishers, subscribers, data readers and data writers of the publish-subscribe model.
The message queue manages messages in two ways: 1. a queue is established per topic message type; 2. when the number of topic messages of one type is large, the topic messages are divided according to the topic's key values and several queues are established. Each queue corresponds to one data reader and one data writer: the data reader receives data from the producer, and the data writer sends data to the consumer.
The sending end is realized with a participant, a publisher and a data writer. The receiving end is realized with a participant, a subscriber and a data reader. Message routing relations are established through handshake messages exchanged between participants.
The sending end, the receiving end and the message queue all contain configuration data. The configuration data of the sending end and the receiving end comprises the network type used and the network parameter information; the configuration data of the message queue comprises the topic message types to be forwarded, the quality of service, the network type used and the network parameter information.
The quality of service specifically includes the message read mode (single reading or repeated reading), the message storage mode (caching or persistent storage), the queue size, and the handling mode when the queue is full (transmission blocking or transmission overwrite).
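The quality-of-service items enumerated above can be grouped into a configuration structure such as the following sketch; the field names and default values are assumptions for illustration.

```python
# Sketch of a quality-of-service configuration covering the four items listed
# above; field names and defaults are assumptions for illustration.
from dataclasses import dataclass
from enum import Enum

class ReadMode(Enum):
    SINGLE = "single"          # single reading
    REPEATED = "repeated"      # repeated reading

class StorageMode(Enum):
    CACHE = "cache"            # in-memory caching
    PERSISTENT = "persistent"  # batched storage on disk

class FullPolicy(Enum):
    BLOCK = "block"            # transmission blocking when the queue is full
    OVERWRITE = "overwrite"    # transmission overwrite when the queue is full

@dataclass
class QueueQoS:
    read_mode: ReadMode = ReadMode.REPEATED
    storage_mode: StorageMode = StorageMode.CACHE
    queue_size: int = 1024
    full_policy: FullPolicy = FullPolicy.BLOCK
```

A sending end, receiving end or queue node could then be configured by passing one such QueueQoS instance alongside the network parameters mentioned above.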
Single reading means that the data writer corresponding to a topic message queue forwards the topic message to the first data reader that established a connection and then deletes the message from the buffer, so only one consumer can read the data; repeated reading means that after the data writer corresponding to a topic message queue forwards a message, the message is still retained in the buffer so that multiple consumers can read it, and the data is deleted only after the message has been forwarded to all data readers.
Caching means keeping topic messages in memory; persistent storage means storing topic messages on disk. Persistent storage works in batches: when the number of messages reaches the configured threshold, one batch is persisted. A schematic diagram of message persistence is shown in fig. 2.
Transmission blocking means that when a topic message queue inside the message queue is full and the sending end sends a message, the message queue returns a transmission-failure return value to the sending end; transmission overwrite means that when a topic message queue inside the message queue is full and the sending end sends a message, the message queue overwrites the oldest topic message in the queue with the new topic message.
In this embodiment, consider a one-to-many communication model: the sending end communicates with receiving end 1, receiving end 2 and receiving end 3. Receiving end 1 and receiving end 3 come online first and establish connections with the sending end. The sending end sends messages msg1, msg2 and msg3, which receiving end 1 and receiving end 3 receive. Receiving end 2 then comes online, and the persistently stored history messages msg1, msg2 and msg3 are pushed to receiving end 2.
In this embodiment, consider a many-to-one communication model, as shown in fig. 3: the receiving end communicates with sending end 1, sending end 2 and sending end 3. Each sending end sends 2 messages per millisecond, while the receiving end can receive only 3 messages per millisecond, so messages would be lost with direct communication; introducing the message queue effectively relieves the throughput pressure caused by this high concurrency.
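The throughput pressure in this scenario follows from a simple rate comparison (figures taken from the example above):

```latex
\lambda_{\text{in}} = 3 \times 2\ \text{msg/ms} = 6\ \text{msg/ms}, \qquad
\mu_{\text{out}} = 3\ \text{msg/ms}, \qquad
\lambda_{\text{in}} - \mu_{\text{out}} = 3\ \text{msg/ms}.
```

Without the message queue, the excess 3 messages per millisecond would be dropped at the receiving end; with the message queue, they are absorbed by the ring queue (and, if configured, persisted to disk) instead of being lost on the wire.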
The advantages of this embodiment are as follows:
This embodiment effectively supports message distribution within a distributed system, has message persistence capability, effectively guarantees the consistency of the messages received by the receiving ends, and improves the reliability of the system.
This embodiment effectively supports asynchronous communication, so that producer and consumer applications are effectively decoupled, improving the flexibility of the distributed system.
This embodiment effectively supports high-concurrency communication scenarios, effectively reduces the throughput pressure on the message receiving end, and improves the stability of the system.
The message queue buffer management mechanism of this embodiment includes:
1. Topic messages are classified and stored by topic type, and a ring queue is used to store the data in first-in, first-out order;
2. Queue partitions can be established for a topic, and topic data of one type is stored across multiple partitions according to the keyword attribute (key), so that data of the same type but with different key values is processed in parallel;
3. The message queue maintains, for each consumer, an index of the data to be read, and updates the index value after forwarding the data;
4. Data messages support single reading or repeated reading: single reading means the data in the buffer is deleted after being read, so only one consumer can read it; repeated reading means the data is retained in the buffer after being read, multiple consumers can read the message, and the data is deleted after the message has been forwarded to all data readers;
5. The size of the buffer is specified through configuration; when a message queue is full, a new message cannot be inserted and a failure is returned to the sending end;
6. When data is stored on disk, the data itself and the read index of each data reader are recorded on disk at the same time.
The following describes a publish-subscribe message queue implementation over a UDP network, comprising the following steps (a sketch of the heartbeat handshake in steps 3) and 4) follows the list):
1) Determine that the network used by the sending end, the message queue and the receiving end is UDP, that the communication topic is A, and that the history-data depth of the message queue is N;
2) Create and start the message queue, the sending-end object and the receiving-end object;
3) The sending-end participant and the message-queue participant object handshake through heartbeat messages; if the topic sent by the data writer is consistent with the topic subscribed by the data reader and the quality of service matches, a routing relation is established, otherwise connection establishment fails;
4) The receiving-end participant and the message-queue participant object handshake through heartbeat messages; if the topic subscribed by the receiving-end data reader is consistent with the topic sent by the message queue's data writer and the quality of service matches, a routing relation is established, otherwise connection establishment fails;
5) If the connections in steps 3) and 4) are established successfully, messages can be sent and received; otherwise, wait until the connections are established successfully;
6) The sending end sends a topic message through its data writer;
7) A data reader in the message queue receives the topic message from the sending end and judges whether the queue to which the topic message belongs has space; if so, the message is stored in the message queue; if not, it is handled according to the configured queue-full handling mode: in transmission-blocking mode, the received message is discarded and a transmission-failure return value is returned to the sending end; in transmission-overwrite mode, the oldest topic message in the queue is overwritten with the new topic message;
8) The message queue judges whether the number of messages in the queue that received the new message has reached the threshold and whether persistent storage is configured; if both conditions hold, the messages are stored on disk, otherwise no action is taken;
9) A data writer in the message queue forwards the messages stored in the queue to the receiving end according to the read mode configured for the message queue: for single reading, the data writer forwards the message to the first data reader that established a connection and then deletes the topic message from the queue; for repeated reading, the topic message is deleted from the queue only after the data writer has forwarded it to all receiving ends;
10) The receiving end receives messages through its data reader.
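As referenced above, a sketch of the heartbeat handshake and topic/QoS matching used in steps 3) and 4) is given below. The JSON heartbeat payload, the port number and the helper names are assumptions introduced for this example, not the patent's wire format.

```python
# Sketch of the heartbeat handshake used to establish routing relations over
# UDP (steps 3 and 4). Payload layout, port and helper names are assumptions.
import json
import socket

def heartbeat(role: str, topic: str, qos: dict) -> bytes:
    """Handshake message announcing a participant's role, topic and QoS."""
    return json.dumps({"role": role, "topic": topic, "qos": qos}).encode()

def matches(local_topic: str, local_qos: dict, remote: dict) -> bool:
    """A routing relation is established only if the topic names are identical
    and the quality-of-service settings are compatible."""
    return remote.get("topic") == local_topic and remote.get("qos") == local_qos

def queue_participant(listen_addr=("0.0.0.0", 15000), topic="A", qos=None):
    """Message-queue side: answer sender/receiver heartbeats, confirm routes."""
    qos = qos or {"read_mode": "repeated", "queue_size": 1024}
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(listen_addr)
    routes = {"producers": set(), "consumers": set()}
    while True:
        data, addr = sock.recvfrom(4096)
        remote = json.loads(data.decode())
        if matches(topic, qos, remote):
            side = "producers" if remote.get("role") == "sender" else "consumers"
            routes[side].add(addr)
            sock.sendto(heartbeat("queue", topic, qos), addr)   # acknowledge
        # otherwise the topics/QoS do not match and connection establishment fails
```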
the above description is only for the specific embodiments of the present disclosure, but the scope of the present disclosure is not limited thereto, and any changes or substitutions that can be easily conceived by those skilled in the art within the technical scope of the present disclosure should be covered within the scope of the present disclosure. Therefore, the protection scope of the present disclosure shall be subject to the protection scope of the claims.

Claims (10)

1. A message queue based on publish-subscribe, characterized in that the publish-subscribe objects include participants, publishers, subscribers, data readers and data writers; the message queue is used for forwarding topic messages from the sending end to the receiving end;
the sending end comprises a participant, a publisher and a data writer;
the topic message is created by a data writer, and each data writer generates one type of topic message; one publisher manages a plurality of data writers; one participant manages a plurality of publishers; the participant is used for initializing the sending-end node and establishing a network connection with the message queue;
the message queue is used for receiving topic messages, establishing different queues according to topic message type, storing the topic messages by queue, and distributing the topic messages to data readers according to the data readers' requirements.
2. The publish-subscribe based message queue of claim 1, wherein the message queue stores topic messages in the following manner:
topic messages are classified and stored by topic type, with a ring queue used to store the data; alternatively, the ring queue is divided into a plurality of partitions, and topic messages of one type are stored across the partitions according to their key attribute, so that topic messages of the same type but different key values can be processed in parallel;
wherein:
the message queue maintains, for each data reader, an index of the data to be read, and updates the index value after it forwards the data; a topic message in the message queue supports single reading or repeated reading; the size of the ring queue is configurable;
when a ring queue is full, the ring queue performs either transmission overwrite or transmission blocking; transmission blocking means that the message queue returns a transmission-failure return value to the sending end; transmission overwrite means that the oldest topic message in the ring queue is overwritten with the new topic message;
the receiving end comprises a participant, a subscriber and a data reader;
the topic message is forwarded by the message queue; each data reader receives one type of topic message; one subscriber manages a plurality of data readers; one participant manages a plurality of subscribers; the participant is used for initializing the receiving-end node and establishing a network connection with the message queue.
3. The publish-subscribe based message queue of claim 2, wherein the storing comprises caching;
when the message queue caches a topic message, it reads the content part of the topic message and caches that content part.
4. The publish-subscribe based message queue of claim 3, wherein the storing further comprises persistent storage; the persistent storage writes the content parts to disk in batches, by topic;
the persistently stored elements also include, for each receiving-end data reader, the index in the ring queue of the message to be read.
5. The publish-subscribe based message queue of claim 4, wherein the persistent storage executes batches as follows: one batch is stored whenever the number of topic messages for a topic reaches 1000.
6. The publish-subscribe based message queue of claim 5, wherein when a data reader is newly added at the receiving end, the message queue pushes the historical topic messages of the topic corresponding to the newly added data reader in a single operation.
7. The publish-subscribe based message queue of claim 6, wherein the transmission method of the message queue comprises:
step 1, determining the configuration data, the network type used, and the network parameter information of the sending end, the message queue and the receiving end; the configuration data comprises the topic message type and quality-of-service information;
step 2, creating and starting the message queue, producer and consumer objects at different nodes;
step 3, establishing connection routing relations among the message queue, the producer and the consumer through message handshakes;
step 4, the producer sends a topic message;
step 5, a data reader in the message queue receives the topic message from the producer and stores it in memory or on disk in the appropriate data sub-queue;
step 6, a data writer in the message queue forwards the topic message stored in memory or on disk to the consumer;
step 7, the consumer receives the topic message.
8. The publish-subscribe based message queue according to claim 7, wherein step 5 specifically is: a data reader in the message queue receives the topic message from the sending end and judges whether the queue to which the topic message belongs has space; if so, the message is stored in the message queue; if not, the message is handled according to the queue-full handling mode configured for the message queue: in transmission-blocking mode, the received message is discarded and a transmission-failure return value is returned to the sending end; in transmission-overwrite mode, the oldest topic message in the queue is overwritten with the new topic message.
9. The publish-subscribe based message queue according to claim 8, wherein step 6 specifically is: the message queue judges whether the number of messages in the queue that received the new message has reached the threshold and whether persistent storage is configured; if both conditions hold, the messages are stored on disk, otherwise no action is taken.
10. The publish-subscribe based message queue according to claim 9, wherein step 7 specifically is: a data writer in the message queue forwards the messages stored in memory or on disk to the consumer and handles them according to the read mode configured for the message queue; for single reading, the data writer forwards the message to the first data reader that established a connection and then deletes the topic message from the queue; for repeated reading, the data writer deletes the topic message from the queue only after forwarding it to all receiving ends.
CN202211320387.XA | Priority date: 2022-10-26 | Filing date: 2022-10-26 | Message queue based on publish-subscribe | Pending | CN115766610A (en)

Priority Applications (1)

Application Number: CN202211320387.XA | Priority Date: 2022-10-26 | Filing Date: 2022-10-26 | Title: Message queue based on publish-subscribe

Applications Claiming Priority (1)

Application Number: CN202211320387.XA | Priority Date: 2022-10-26 | Filing Date: 2022-10-26 | Title: Message queue based on publish-subscribe

Publications (1)

Publication Number: CN115766610A | Publication Date: 2023-03-07

Family

ID=85353411

Family Applications (1)

Application Number: CN202211320387.XA | Status: Pending (CN115766610A, en) | Priority Date: 2022-10-26 | Filing Date: 2022-10-26 | Title: Message queue based on publish-subscribe

Country Status (1)

Country: CN | Link: CN115766610A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN116755637A (en) * | 2023-08-17 | 2023-09-15 | 深圳华锐分布式技术股份有限公司 | Transaction data storage method, device, equipment and medium
CN116755637B (en) * | 2023-08-17 | 2024-02-09 | 深圳华锐分布式技术股份有限公司 | Transaction data storage method, device, equipment and medium


Legal Events

PB01 | Publication
SE01 | Entry into force of request for substantive examination