CN114363269A - Message transmission method, system, equipment and medium - Google Patents

Message transmission method, system, equipment and medium

Info

Publication number: CN114363269A
Application number: CN202111065324.XA
Authority: CN (China)
Prior art keywords: message, message transmission, queue, module, protocol packet
Legal status: Granted; Active
Other languages: Chinese (zh)
Other versions: CN114363269B (en)
Inventors: 张珠玉, 张璐
Current Assignee: Suzhou Inspur Intelligent Technology Co Ltd
Original Assignee: Suzhou Inspur Intelligent Technology Co Ltd
Application filed by Suzhou Inspur Intelligent Technology Co Ltd; priority to CN202111065324.XA, granted and published as CN114363269B

Classification: Data Exchanges In Wide-Area Networks

Abstract

The invention discloses a message transmission method comprising the following steps: in response to receiving a message transmission request, placing the message to be transmitted into the message transmission queue corresponding to the module that generated the request; acquiring port resources and a channel; dequeuing a plurality of messages to be transmitted from the message transmission queue in response to the acquired port resources and channel meeting a dequeue condition; packing the messages to be transmitted, together with the sequence numbers corresponding to the message transmission queue, into a protocol packet; and transmitting the protocol packet to a destination end through an interface. The invention also discloses a system, a computer device and a readable storage medium. The invention improves the layered message transmission mechanism by extending it to multi-queue message transmission: it breaks through the limitation of processing all messages through a single transceiving queue, extends to sending and receiving messages in parallel across multiple per-module queues, and, in combination with per-queue message sequence numbers, meets the multi-IO high-concurrency performance requirements of a storage system without compromising its high availability.

Description

Message transmission method, system, equipment and medium
Technical Field
The present invention relates to the field of information transmission, and in particular, to a method, a system, a device, and a storage medium for transmitting a message.
Background
In a storage system, in order to achieve high reliability, a group of mutually independent storage controllers is combined to form a cluster, which serves as a single system; each storage controller in the cluster is a node. Nodes in one cluster may communicate not only with other nodes in the cluster but also with nodes in other clusters on the local area network. With the development of internet services and the growth of customer traffic, the volume of message transmission between nodes has increased dramatically, and the original single-queue communication mechanism based on the SCSI protocol has gradually become a major bottleneck restricting the improvement of IO performance.
In the current window layer (WL), each node has only one pair of message transceiving queues; no matter how many physical links the communication layer (CL) has, all transmitted messages are buffered in this single pair of queues, and one group of global sequence numbers (sqn) is maintained to keep message transmission stable and reliable. Under earlier hardware conditions, and with the support of the worker-thread scheduling matrix, a single message transceiving queue completed the message transmission task adequately and was not a system performance bottleneck. However, as hardware conditions have improved and new technologies have emerged, communication traffic among the cluster nodes of a storage system has grown rapidly, and the in-order processing mechanism of the single transceiving queue has gradually come to constrain system performance.
Disclosure of Invention
In view of the above, in order to overcome at least one aspect of the above problems, the present invention provides a message transmission method, including the steps of:
in response to receiving a message transmission request, placing a message to be transmitted in a message transmission queue corresponding to a module generating the request;
acquiring port resources and a channel;
dequeuing a plurality of messages to be transmitted from the message transmission queue in response to the acquired port resources and channel meeting a dequeue condition;
packing the messages to be transmitted and the sequence numbers corresponding to the message transmission queues into a protocol packet;
and transmitting the protocol packet to a destination end through an interface.
In some embodiments, further comprising:
a separate CPU core is allocated for each module.
In some embodiments, further comprising:
and allocating different priorities to the message transmission queues corresponding to different modules so as to determine the sequence of acquiring the port resources and the channels according to the priorities.
In some embodiments, further comprising:
in response to the destination receiving the protocol packet, parsing the protocol packet and placing the messages in it into the corresponding receive queues according to the sequence numbers carried in the packet.
In some embodiments, further comprising:
using a separate CPU core to parse the protocol packet.
Based on the same inventive concept, according to another aspect of the present invention, an embodiment of the present invention further provides a message transmission system, including:
the receiving module is configured to respond to the received message transmission request and put the message to be transmitted into a message transmission queue corresponding to the module generating the request;
an acquisition module configured to acquire port resources and channels;
the dequeuing module is configured to dequeue a plurality of messages to be transmitted from the message transmission queue in response to the acquired port resources and channel meeting a dequeue condition;
the packing module is configured to pack the messages to be transmitted, together with the sequence numbers corresponding to the message transmission queues, into a protocol packet;
and the transmission module is configured to transmit the protocol packet to a destination end through an interface.
In some embodiments, further comprising a first assignment module configured to:
a separate CPU core is allocated for each module.
In some embodiments, further comprising a second assignment module configured to:
and allocating different priorities to the message transmission queues corresponding to different modules so as to determine the sequence of acquiring the port resources and the channels according to the priorities.
Based on the same inventive concept, according to another aspect of the present invention, an embodiment of the present invention further provides a computer apparatus, including:
at least one processor; and
a memory storing a computer program operable on the processor, wherein the processor executes the program to perform the steps of:
in response to receiving a message transmission request, placing a message to be transmitted in a message transmission queue corresponding to a module generating the request;
acquiring port resources and a channel;
dequeuing a plurality of messages to be transmitted from the message transmission queue in response to the acquired port resources and channel meeting a dequeue condition;
packing the messages to be transmitted and the sequence numbers corresponding to the message transmission queues into a protocol packet;
and transmitting the protocol packet to a destination end through an interface.
In some embodiments, further comprising:
a separate CPU core is allocated for each module.
In some embodiments, further comprising:
and allocating different priorities to the message transmission queues corresponding to different modules so as to determine the sequence of acquiring the port resources and the channels according to the priorities.
In some embodiments, further comprising:
in response to the destination receiving the protocol packet, parsing the protocol packet and placing the messages in it into the corresponding receive queues according to the sequence numbers carried in the packet.
In some embodiments, further comprising:
using a separate CPU core to parse the protocol packet.
Based on the same inventive concept, according to another aspect of the present invention, an embodiment of the present invention further provides a computer-readable storage medium storing a computer program which, when executed by a processor, performs the steps of:
in response to receiving a message transmission request, placing a message to be transmitted in a message transmission queue corresponding to a module generating the request;
acquiring port resources and a channel;
dequeuing a plurality of messages to be transmitted from the message transmission queue in response to the acquired port resources and channel meeting a dequeue condition;
packing the messages to be transmitted and the sequence numbers corresponding to the message transmission queues into a protocol packet;
and transmitting the protocol packet to a destination end through an interface.
In some embodiments, further comprising:
a separate CPU core is allocated for each module.
In some embodiments, further comprising:
and allocating different priorities to the message transmission queues corresponding to different modules so as to determine the sequence of acquiring the port resources and the channels according to the priorities.
In some embodiments, further comprising:
in response to the destination receiving the protocol packet, parsing the protocol packet and placing the messages in it into the corresponding receive queues according to the sequence numbers carried in the packet.
In some embodiments, further comprising:
using a separate CPU core to parse the protocol packet.
The invention has one of the following beneficial technical effects: it improves the layered message transmission mechanism by extending it to multi-queue message transmission. It breaks through the limitation of processing all messages through a single transceiving queue, extends to sending and receiving messages in parallel across multiple per-module queues, and, in combination with per-queue message sequence numbers, meets the multi-IO high-concurrency performance requirements of the storage system without compromising its high reliability and high availability.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art are briefly introduced below. It is apparent that the drawings in the following description show only some embodiments of the present invention, and that those skilled in the art can obtain other embodiments from these drawings without creative effort.
Fig. 1 is a schematic flow chart of a message transmission method according to an embodiment of the present invention;
FIG. 2 is a flow chart of sending a message according to an embodiment of the present invention;
FIG. 3 is a flow chart of receiving a message according to an embodiment of the present invention;
fig. 4 is a schematic structural diagram of a message transmission system according to an embodiment of the present invention;
FIG. 5 is a schematic structural diagram of a computer device provided in an embodiment of the present invention;
fig. 6 is a schematic structural diagram of a computer-readable storage medium according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the following embodiments of the present invention are described in further detail with reference to the accompanying drawings.
It should be noted that all expressions using "first" and "second" in the embodiments of the present invention are used to distinguish two entities or parameters that share the same name; "first" and "second" are merely for convenience of description and should not be construed as limiting the embodiments of the present invention, and this will not be repeated in the following embodiments.
In the embodiments of the present invention: IO (Input/Output) refers to input/output; WL (Window Layer) refers to the window layer; CL (Communication Layer) refers to the communication layer; CIP (Common Interface Platform) refers to the common interface platform; RDMA (Remote Direct Memory Access) refers to remote direct memory access; NVMe (Non-Volatile Memory Express) refers to the non-volatile memory protocol standard; SCSI (Small Computer System Interface) refers to the small computer system interface protocol standard; EM (Event Manager) refers to the event manager; CA (Cache) refers to cache management; RD (RAID) refers to disk array management.
According to an aspect of the present invention, an embodiment of the present invention provides a message transmission method, as shown in fig. 1, which may include the steps of:
s1, in response to receiving the message transmission request, putting the message to be transmitted into the message transmission queue corresponding to the module generating the request;
s2, acquiring port resources and channels;
s3, in response to the obtained port resources and channels meeting the dequeuing conditions, dequeuing a plurality of messages to be transmitted in the information transmission queue;
s4, packing the messages to be transmitted and the sequence numbers corresponding to the message transmission queue into a protocol packet;
and S5, transmitting the protocol packet to a destination end through an interface.
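Steps S1 to S5 can be sketched in code as follows. This is an illustrative simplification, not the patented implementation: the class and function names (ModuleQueue, enqueue, dequeue_batch, pack) are hypothetical, and the batch size and packet layout are assumptions.

```python
from collections import deque

class ModuleQueue:
    """Per-module send queue (omq) with its own sequence number (S1/S4)."""
    def __init__(self, name):
        self.name = name
        self.sqn = 0             # per-queue send sequence number (omq->sqn)
        self.messages = deque()

# S1: enqueue the message into the queue of the module that generated the request
def enqueue(queues, module, message):
    queues[module].messages.append(message)

# S2/S3: if port resources and a channel are available, dequeue a batch
def dequeue_batch(queue, port_free, channel_free, batch=4):
    if not (port_free and channel_free):
        return []                # dequeue condition not met
    out = []
    while queue.messages and len(out) < batch:
        out.append(queue.messages.popleft())
    return out

# S4: pack the batch together with the queue's sequence number into a protocol packet
def pack(queue, batch):
    packet = {"module": queue.name, "sqn": queue.sqn, "messages": batch}
    queue.sqn += 1               # auto-increment for the next packet
    return packet

# S5 would hand `packet` to the interface (e.g. an RDMA channel) for transmission
queues = {"EM": ModuleQueue("EM")}
enqueue(queues, "EM", "msg-a")
enqueue(queues, "EM", "msg-b")
batch = dequeue_batch(queues["EM"], port_free=True, channel_free=True)
packet = pack(queues["EM"], batch)
```

The key design point the sketch illustrates is that the sequence number travels with the packet, so the receive side can restore ordering per module queue.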
The invention improves the layered message transmission mechanism by extending it to multi-queue message transmission. It breaks through the limitation of processing all messages through a single transceiving queue, extends to sending and receiving messages in parallel across multiple per-module queues, and, in combination with per-queue message sequence numbers, meets the multi-IO high-concurrency performance requirements of the storage system without compromising its high reliability and high availability.
In some embodiments, the upper-layer application modules with relatively large traffic volumes (EM, CA and RD) each create their own message transmission queue. When sending a message, the message is enqueued into the module-specific queue to wait for worker-thread scheduling; the send message queues (omq) are bound to different cores, improving concurrency and making full use of the CPU resources made available by core expansion. Each send message queue maintains its own send sequence number (omq->sqn) corresponding to the matching receive queue, guaranteeing stable and reliable message transceiving.
On the receiving side, the packed messages arrive in a channel; after response and acknowledgement processing, they are dispatched into different receive message queues according to their originating modules. When the worker thread on the CPU core bound to a receive message queue (imq) is scheduled, it compares the carried sequence numbers, parses out the individual messages in order, and notifies the application module's handler function to extract them. After the messages have been parsed and retrieved, the channel resources are released through cyclic scheduling of the worker thread.
Multi-queue message transmission among cluster nodes keeps the hierarchical structure unchanged, still using the division into application layer, window layer, communication layer, common interface platform and driver protocol layer, but with several significant changes. CPU resources are markedly increased, from the original four-core or eight-core CPUs to 64 shared cores, so more CPU cores can be allocated to message transmission. The SCSI transport protocol is replaced by the NVMe protocol, so that the favourable characteristics of NVMe (such as its queue depth) are fully exploited and IO communication performance is improved. The underlying communication link is changed from FC/NTB to RDMA, using RoCE, so that reliable inter-node communication latency is lower.
With the improvement of hardware resources, key details of the message transmission mechanism are adjusted accordingly. The logic core of the CL layer is no longer bound to the driver-layer port and can be bound to an idle core independently. The upper limit on the number of port resources is raised, opening up more channel resources for node communication and reducing resource contention in the message dequeue stage. In the message response and acknowledgement stage, the send channel and the receive channel run on different cores, reducing coupling. Message extraction by the application module no longer uses the synchronous handler mode but an asynchronous, cross-core worker-thread scheduling mode, reducing the latency of the message parsing and forwarding stage.
In some embodiments, dedicated message transmission queues are opened up for the EM, CA and RD modules, while other messages are still buffered in the original queue. Each newly opened message queue carries its own group of sequence numbers to ensure reliable, in-order transceiving. Meanwhile, the newly opened queues are bound to other CPU cores, so they run separately from, and concurrently with, the original queue. The transceiving queues combine with the worker-thread matrix to enqueue messages one at a time but process multiple messages per scheduling cycle, i.e. single-message input and multi-message output, reducing thread-scheduling overhead. The multi-queue, concurrent design further saves scheduling overhead of the thread matrix.
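The "single-message input, multi-message output" behaviour described above can be sketched as follows; the names and the batch size are hypothetical, and one call to `on_schedule` stands in for one scheduling cycle of the worker-thread matrix.

```python
from collections import deque

queue = deque()

def enqueue(msg):
    """Messages are enqueued one at a time (single-message input)."""
    queue.append(msg)

def on_schedule(max_batch=8):
    """One worker-thread scheduling cycle drains up to max_batch messages
    (multi-message output), amortising scheduling cost over many messages."""
    processed = []
    while queue and len(processed) < max_batch:
        processed.append(queue.popleft())
    return processed

for i in range(5):
    enqueue(f"msg-{i}")
batch = on_schedule()   # one cycle handles all five queued messages
```

Processing several messages per wake-up is what reduces the thread-scheduling resource consumption the text refers to.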
To ensure that inter-node message transmission is stable and reliable, with no loss or reordering, and convenient to assemble and recombine, a group of global sequence numbers is maintained in each send queue and receive queue: omq->sqn at the sending end and imq->sqn at the receiving end. Each transmitted message carries the auto-incremented omq->sqn as its message sequence number msg->sqn. The receiving end parses msg->sqn and compares it with imq->sqn; if they match, the message body is parsed and forwarded, otherwise the message is discarded or kept waiting.
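A minimal sketch of this sequence-number handshake (class and return-value names are hypothetical): the sender stamps each message with the auto-incrementing omq->sqn, and the receiver forwards a message only when msg->sqn matches its expected imq->sqn, discarding stale duplicates and waiting on gaps.

```python
class Sender:
    def __init__(self):
        self.omq_sqn = 0
    def send(self, body):
        msg = {"sqn": self.omq_sqn, "body": body}   # msg->sqn = omq->sqn
        self.omq_sqn += 1                           # auto-increment
        return msg

class Receiver:
    def __init__(self):
        self.imq_sqn = 0
        self.delivered = []
    def receive(self, msg):
        if msg["sqn"] == self.imq_sqn:      # match: parse and forward
            self.delivered.append(msg["body"])
            self.imq_sqn += 1
            return "forwarded"
        elif msg["sqn"] < self.imq_sqn:     # stale duplicate: discard
            return "discarded"
        else:                               # gap ahead: continue to wait
            return "waiting"

tx, rx = Sender(), Receiver()
m0, m1 = tx.send("a"), tx.send("b")
out_of_order = rx.receive(m1)   # arrives early -> "waiting"
rx.receive(m0)
rx.receive(m1)
```

This is what guarantees in-order, no-loss delivery per queue even when packets arrive out of order on the channel.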
The multi-queue design places a group of global sequence numbers in each message queue, serving the same purpose as the original global sequence numbers. To prevent messages from entering the wrong queue and corrupting the sequence numbering, module labels are attached when messages are enqueued and dequeued, so that transceiving traffic is split correctly. Within each pair of transceiving queues, messages are organised and numbered, and after reception they are parsed and forwarded in order.
In some embodiments, further comprising:
a separate CPU core is allocated for each module.
Specifically, in the existing WL and CL hierarchy, the cores are divided as follows: logic->fb, node->fb, imq->fb and omq->fb. logic->fb is carried over to the existing port->fb of the common interface platform layer (CIP) and coincides with it, i.e. it is scheduled by the worker-thread matrix on the same core. For node->fb, one CPU core is allocated each time a link to a node is established, and message response and retransmission are executed when it is scheduled. imq->fb and omq->fb are derived from node->fb, as node->fb+1 and node->fb+3 respectively. Most of the work of the window layer and the communication layer is therefore handled on these cores. In the embodiment of the present invention, this core-division policy is adjusted. logic->fb is allocated a CPU core of its own and is no longer piggy-backed on port->fb, so the two no longer share one core and exhaust its CPU resources. The CPU cores of omq_em and imq_em are allocated as node->fb+n and node->fb+n+1; the cores of omq_ca and imq_ca as node->fb+n+2 and node->fb+n+3; and the cores of omq_rd and imq_rd as node->fb+n+4 and node->fb+n+5. The application module's message-extraction step, which originally ran under imq->fb, is processed asynchronously and scheduled on the node->fb core.
In this way, the newly added transceiving queues are bound to their own cores, separate from the original public message queue, and the logic-bound core is separated from the driver layer's port-bound core; the core-division policies of the window layer and the communication layer are thus adjusted, making full use of the CPU resources made available by core expansion.
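The adjusted per-queue core assignments above can be expressed as a simple mapping. This is only a sketch of the offset scheme (node->fb+n through node->fb+n+5); the function name and the example values of node_fb and n are hypothetical.

```python
def core_map(node_fb, n):
    """Return the CPU core assigned to each per-module queue under the
    adjusted core-division policy. node_fb is the core allocated when the
    node link was established; n is the offset where module queues start."""
    return {
        "omq_em": node_fb + n,     "imq_em": node_fb + n + 1,
        "omq_ca": node_fb + n + 2, "imq_ca": node_fb + n + 3,
        "omq_rd": node_fb + n + 4, "imq_rd": node_fb + n + 5,
    }

cores = core_map(node_fb=4, n=6)
# each module's send and receive queues land on distinct cores,
# so the three modules can transceive concurrently
distinct = len(set(cores.values()))
```

The point of the mapping is that no two queues share a core, which is what separates the new queues from the original public queue.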
In some embodiments, the original inter-node cluster communication organised message bodies in the SCSI protocol format; this is now changed to the NVMe protocol format.
NVMe stands for Non-Volatile Memory Express, a non-volatile memory standard. NVMe is a newer storage protocol designed for performance, and it significantly increases data access speed. NVMe redefines the software-layer processing commands and no longer follows the SCSI command specification. NVMe uses the computer's PCIe high-speed bus, reducing CPU overhead, simplifying operations, lowering latency, and improving IOPS and throughput.
Queue depth (QD) is another advantage of NVMe. SAS and AHCI support only a single queue, and the depth of that queue is relatively low. Under the message transmission mechanism designed here, the number of NVMe queues is set to 16, which is important for improving the server's ability to handle concurrent requests.
NVMe over Fabrics (NVMe-oF) uses NVMe, instead of the former FC and iSCSI, as the channel connecting storage controllers. NVMe-oF offers three transport types: NVMe-oF over Fibre Channel, NVMe-oF over TCP, and NVMe-oF over RDMA. The message transmission mechanism of the present invention uses NVMe-oF over RDMA. This specification employs Remote Direct Memory Access (RDMA) to transfer data and memory across a network between storage devices. RDMA exchanges information between the main memories of two computers in a network without involving the processor, cache or operating system of either computer. Because it bypasses the operating system, RDMA is generally the fastest, lowest-overhead mechanism for transferring data across a network.
In some embodiments, further comprising:
and allocating different priorities to the message transmission queues corresponding to different modules so as to determine the sequence of acquiring the port resources and the channels according to the priorities.
Specifically, message transmission requires port and channel resources, and different priorities may be set for each module; for example, the EM module may be given the highest priority, so that messages in the message queue corresponding to the EM module are transmitted first.
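A sketch of priority-ordered acquisition of port and channel resources (the priority values and function name are hypothetical; the text only states that EM may have the highest priority):

```python
import heapq

# lower number = higher priority; EM is given the highest priority
PRIORITY = {"EM": 0, "CA": 1, "RD": 2}

def grant_order(waiting_modules):
    """Return the order in which the waiting module queues are granted
    port/channel resources, highest-priority module first."""
    heap = [(PRIORITY[m], m) for m in waiting_modules]
    heapq.heapify(heap)
    return [heapq.heappop(heap)[1] for _ in range(len(heap))]

order = grant_order(["RD", "EM", "CA"])   # EM wins the resources first
```

Under this scheme, when several module queues contend for the same port and channel, the dequeue condition is satisfied for the highest-priority queue first.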
In some embodiments, further comprising:
in response to the destination receiving the protocol packet, parsing the protocol packet and placing the messages in it into the corresponding receive queues according to the sequence numbers carried in the packet.
In some embodiments, further comprising:
using a separate CPU core to parse the protocol packet.
Referring to fig. 2 and fig. 3, the multi-queue hierarchical message transmission flow proposed by the embodiment of the present invention is described in detail for the case where node1 sends a message to node2.
The messages of the EM, CA and RD modules enter their specific send message queues directly; when the thread matrix on the designated CPU core (omq->fb) is scheduled, it matches port resources and an idle channel. If the conditions are met, the messages are dequeued at the WL level, assigned the current omq->sqn, and packed at the CL level (small messages are merged until a specified length is reached); the NVMe protocol packet is assembled via thread-matrix scheduling on the logic->fb core. The organised NVMe protocol packet is then passed through the CIP interface and transmitted to the other node over the RDMA channel.
Node2 receives the message over RDMA and passes it up through the CIP to the communication layer CL, where the NVMe protocol packet is parsed and assigned to a channel. During node->fb scheduling, the WL layer is notified to perform message response and retransmission, and the channel (with the messages it contains) is enqueued. During imq->fb scheduling, the imq channels are unpacked in order and the individual messages are parsed out. Message content extraction is completed by the application module through asynchronous node->fb scheduling. The communication layer CL is then notified to release the channel resources.
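The receive path on node2 can be sketched as follows (the data structures and function name are hypothetical): a protocol packet is routed to the receive queue of its originating module, accepted only if its carried sequence number matches that queue's imq->sqn, and its messages are then left for the application module to extract asynchronously.

```python
def unpack(packet, imq):
    """Unpack a protocol packet on the receive side: check the carried
    sequence number against the module's receive queue (imq->sqn), then
    stage each contained message for the application module's handler."""
    q = imq[packet["module"]]          # route by module label
    if packet["sqn"] != q["sqn"]:
        return []                      # mismatch: discard or keep waiting
    q["sqn"] += 1                      # advance the expected sequence number
    delivered = list(packet["messages"])
    q["inbox"].extend(delivered)       # extracted later, asynchronously
    return delivered

imq = {"CA": {"sqn": 0, "inbox": []}}
pkt = {"module": "CA", "sqn": 0, "messages": ["read-done", "write-done"]}
accepted = unpack(pkt, imq)
```

After `unpack` returns, the channel that carried the packet can be released, matching the final step of the flow above.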
The scheme provided by the embodiment of the present invention adds per-module transceiving queues at the window layer. Each transceiving queue maintains its own group of global sqn sequence numbers, so stable and reliable message transmission is still guaranteed. Along with the added queues, the core-division policies of the window layer and the communication layer are adjusted, improving concurrency and making full use of the CPU resources made available by core expansion. The transport protocol is changed from SCSI to NVMe, exploiting NVMe's low latency and high concurrency, and RDMA is used to transfer data and memory between storage devices across the network. The technique makes full use of hardware resources, reduces inter-node communication latency, improves IOPS and throughput, and preserves system stability and reliability.
Based on the same inventive concept, according to another aspect of the present invention, an embodiment of the present invention further provides a message transmission system 400, as shown in fig. 4, including:
a receiving module 401 configured to, in response to receiving a message transmission request, place a message to be transmitted in a message transmission queue corresponding to a module that generates the request;
an obtaining module 402 configured to obtain port resources and channels;
a dequeue module 403, configured to dequeue a plurality of messages to be transmitted from the message transmission queue in response to the acquired port resources and channel meeting a dequeue condition;
a packing module 404 configured to pack the messages to be transmitted and the sequence numbers corresponding to the message transmission queues into a protocol packet together;
a transmission module 405 configured to transmit the protocol packet to a destination through an interface.
In particular, in some embodiments, several upper layer application modules EM, CA, RD, which have a relatively large traffic volume, create the messaging queue separately. When sending the message, enqueuing the message into a specific queue to wait for the scheduling of a working thread; omq (sending message queues) are bound differently, concurrency is improved, and CPU resources after core expansion are fully utilized; each sending message queue comprises different sending message sequence numbers omq- > sqn, and the sending message sequence numbers correspond to the receiving message queue, so that stable and reliable message receiving and sending are guaranteed.
When receiving message, the packed message is in channel, after response and confirmation processing, according to different modules, it enters into different message receiving queues. When imq (receiving message queue) corresponding CPU core work thread is scheduled, comparing the carried sequence number, resolving out concrete message in sequence, and informing application module handler function to extract message. And after the message is analyzed and acquired, releasing the channel resources in a mode of circularly scheduling the working thread.
The multi-queue message transmission among cluster nodes keeps the hierarchical structure unchanged, and still uses the hierarchical division of an application layer, a window layer, an alternating current layer, a common interface platform and a drive protocol layer, but has a plurality of obvious changes. The CPU resource is remarkably improved, the CPU resource is improved from the original four-core and eight-core CPU to 64-core sharing, and more CPU cores can be allocated for message transmission and use. The SCSI transmission protocol is replaced by the NVMe protocol, so that the excellent characteristics (such as queue depth) of the NVMe protocol are fully exerted, and the IO communication performance is improved. The bottom communication link is changed from FC/NTB to RDMA, and RoCE technology is adopted, so that the reliable communication time delay between nodes is lower.
With the improvement of various hardware resources, the important details of a message transmission mechanism are adjusted in a matching way. The logic core of the CL layer is not bound to the port of the driving layer, and can be bound to the idle core independently. The upper limit of the number of port resource definitions is expanded, more channel resources are opened up for node communication, and the resource limit in the message dequeuing link is reduced. In the message response and confirmation link, different cores are respectively operated by a message sending channel and a message receiving channel, so that the coupling is reduced. The application module extracts the message, a handler synchronous mode is not adopted any more, an asynchronous mode of cross-thread work thread scheduling is adopted instead, and time delay of a message analysis and forwarding link is reduced.
In some embodiments, dedicated message transmission queues are opened for the EM, CA and RD modules, while other messages are still buffered by the original queue. Each newly opened message queue maintains its own group of sequence numbers to guarantee reliable, in-order delivery. The new queues are also bound to other CPU cores, so that they run concurrently with, and separately from, the original queue. The send/receive queues are combined with the worker-thread matrix so that messages are enqueued one at a time but processed in batches when the thread is scheduled — single message in, multiple messages out — which reduces thread-scheduling overhead. The multi-queue, concurrent design further reduces the scheduling cost of the thread matrix.
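The per-module queue layout described above can be sketched as follows. The module names (EM, CA, RD) come from the text; the class, its field names, and the batch size are illustrative assumptions, not the patented data structures.

```python
import collections

MODULES = ("EM", "CA", "RD")

class OutgoingQueues:
    def __init__(self):
        # One dedicated queue per module, plus the original shared queue.
        self.queues = {m: collections.deque() for m in MODULES}
        self.queues["DEFAULT"] = collections.deque()
        # Each queue keeps its own self-incrementing sequence number.
        self.sqn = {name: 0 for name in self.queues}

    def enqueue(self, module, payload):
        name = module if module in self.queues else "DEFAULT"
        self.sqn[name] += 1
        # Tag each message with its module so it cannot stray into
        # another queue and corrupt that queue's numbering.
        self.queues[name].append({"module": name,
                                  "sqn": self.sqn[name],
                                  "payload": payload})

    def drain(self, module, batch=8):
        # "Single message in, multiple messages out": the worker thread
        # is woken once and then loops over up to `batch` messages.
        q = self.queues[module]
        out = []
        while q and len(out) < batch:
            out.append(q.popleft())
        return out
```

Because each module owns its own counter, draining one module's queue never disturbs the numbering of another's.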
To ensure that inter-node message transmission is stable and reliable — no loss, no reordering, easy assembly and recombination — a group of global sequence numbers is maintained in each message sending queue and message receiving queue: omq->sqn at the sending end and imq->sqn at the receiving end. Each transmitted message carries the self-incrementing omq->sqn as its message sequence number msg->sqn. The receiving end parses msg->sqn and compares it with imq->sqn; if they match, it parses the message body and forwards it, otherwise it discards the message or continues to wait.
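The receive-side check just described can be sketched as follows. This is a minimal illustrative sketch of the match/wait/discard decision; the dict fields (`sqn`, `pending`) and return values are assumptions, not the patent's actual structures.

```python
def receive(imq, msg):
    """Compare the carried msg->sqn against the queue's imq->sqn.

    Returns 'forward' when the sequence numbers match (the body is parsed
    and forwarded), 'wait' when the message is ahead of sequence, and
    'discard' for a stale duplicate."""
    expected = imq["sqn"] + 1
    if msg["sqn"] == expected:
        imq["sqn"] = expected        # matched: advance and forward the body
        return "forward"
    if msg["sqn"] > expected:
        imq["pending"].append(msg)   # ahead of sequence: continue to wait
        return "wait"
    return "discard"                 # already seen: drop it
```

A fuller implementation would also re-examine `pending` after each advance, so buffered out-of-order messages are drained once the gap closes.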
The multi-queue design places a group of global sequence numbers in each group of message queues, serving the same function as the original sequence numbers. To prevent messages from entering the wrong queue and corrupting the numbering, module tags are added on enqueue and dequeue, so that sent and received messages are correctly demultiplexed. Within each group of send/receive queues, messages are organized and numbered, and after reception are parsed and forwarded in order.
In some embodiments, further comprising a first assignment module configured to:
a separate CPU core is allocated for each module.
Specifically, in the existing WL and CL hierarchy, the cores are divided as follows: login->fb, node->fb, imq->fb and omq->fb. login->fb is carried over to the existing port->fb of the common interface platform layer (CPI) and kept consistent with it, i.e. scheduled by the worker-thread matrix on the same core. For node->fb, one CPU core is allocated each time a link to a node is established, and message response and retransmission are executed there when it is scheduled. imq->fb and omq->fb are derived from node->fb, as node->fb+1 and node->fb+3 respectively. Most of the work of the window layer and the communication layer is thus handled on the above cores. The embodiment of the present invention adjusts this core-division policy. A CPU core is allocated to login->fb alone, which no longer follows port->fb, preventing the two from sharing one core and exhausting its CPU resources. The CPU cores of omq_em and imq_em are allocated as node->fb+n and node->fb+n+1; those of omq_ca and imq_ca as node->fb+n+2 and node->fb+n+3; and those of omq_rd and imq_rd as node->fb+n+4 and node->fb+n+5. The application module's message-extraction path, which originally ran under imq->fb, is processed asynchronously and is scheduled to execute on the node->fb core.
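The adjusted core-division policy above amounts to a fixed offset table relative to node->fb. The sketch below makes the arithmetic explicit; the function, its parameters, and the returned key names are illustrative assumptions.

```python
def core_map(node_fb, n):
    """Compute the per-queue CPU core assignments relative to node->fb,
    where n is the offset at which the new per-module queues begin."""
    return {
        "omq_em": node_fb + n,     "imq_em": node_fb + n + 1,
        "omq_ca": node_fb + n + 2, "imq_ca": node_fb + n + 3,
        "omq_rd": node_fb + n + 4, "imq_rd": node_fb + n + 5,
        # The application module's asynchronous message-extraction
        # path executes on the node core itself.
        "extract": node_fb,
    }
```

Because the six queue cores occupy a contiguous run of offsets, no two per-module queues ever share a core, which is the point of the adjustment.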
In some embodiments, further comprising a second assignment module configured to:
allocating different priorities to the message transmission queues corresponding to different modules, so that the order in which port resources and channels are acquired is determined by priority.
In some embodiments, further comprising:
in response to the destination receiving the protocol packet, parsing the protocol packet and placing the messages it carries into the corresponding receiving queues according to the sequence numbers carried in the packet.
In some embodiments, further comprising:
parsing the protocol packet using a single CPU core.
Based on the same inventive concept, according to another aspect of the present invention, as shown in fig. 5, an embodiment of the present invention further provides a computer apparatus 501, comprising:
at least one processor 520; and
a memory 510, the memory 510 storing a computer program 511 executable on the processor, the processor 520 executing the program to perform the steps of:
S1, in response to receiving a message transmission request, placing the message to be transmitted into the message transmission queue corresponding to the module that generated the request;
S2, acquiring port resources and a channel;
S3, in response to the acquired port resources and channel meeting the dequeue condition, dequeuing a plurality of messages to be transmitted from the message transmission queue;
S4, packing the plurality of messages to be transmitted, together with the sequence numbers corresponding to the message transmission queue, into a protocol packet;
and S5, transmitting the protocol packet to a destination end through an interface.
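Steps S1–S5 can be sketched end to end as follows. This is a minimal illustrative sketch: the `send` function, JSON packing, and the boolean resource flags are all assumptions standing in for the real port/channel machinery.

```python
import json

def send(omq, module, messages, ports_free, channel_free, transmit):
    """S1: enqueue per-module; S2/S3: check port and channel, then dequeue
    a batch; S4: pack the batch with its sequence numbers into one protocol
    packet; S5: hand the packet to the interface via `transmit`."""
    for payload in messages:                       # S1: enqueue with sqn
        omq["sqn"] += 1
        omq["queue"].append({"sqn": omq["sqn"], "payload": payload})
    if not (ports_free and channel_free):          # S2/S3: dequeue condition
        return None                                # resources not ready: keep queued
    batch, omq["queue"] = omq["queue"], []         # S3: dequeue several messages
    packet = json.dumps({"module": module, "msgs": batch})  # S4: pack
    transmit(packet)                               # S5: transmit via interface
    return packet
```

When the dequeue condition fails, the messages simply stay in the queue with their sequence numbers intact and are packed on a later pass.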
In some embodiments, further comprising:
a separate CPU core is allocated for each module.
In some embodiments, further comprising:
allocating different priorities to the message transmission queues corresponding to different modules, so that the order in which port resources and channels are acquired is determined by priority.
In some embodiments, further comprising:
in response to the destination receiving the protocol packet, parsing the protocol packet and placing the messages it carries into the corresponding receiving queues according to the sequence numbers carried in the packet.
In some embodiments, further comprising:
parsing the protocol packet using a single CPU core.
Based on the same inventive concept, according to another aspect of the present invention, as shown in fig. 6, an embodiment of the present invention further provides a computer-readable storage medium 601, where the computer-readable storage medium 601 stores computer program instructions 610, and the computer program instructions 610, when executed by a processor, perform the following steps:
S1, in response to receiving a message transmission request, placing the message to be transmitted into the message transmission queue corresponding to the module that generated the request;
S2, acquiring port resources and a channel;
S3, in response to the acquired port resources and channel meeting the dequeue condition, dequeuing a plurality of messages to be transmitted from the message transmission queue;
S4, packing the plurality of messages to be transmitted, together with the sequence numbers corresponding to the message transmission queue, into a protocol packet;
and S5, transmitting the protocol packet to a destination end through an interface.
In some embodiments, further comprising:
a separate CPU core is allocated for each module.
In some embodiments, further comprising:
allocating different priorities to the message transmission queues corresponding to different modules, so that the order in which port resources and channels are acquired is determined by priority.
In some embodiments, further comprising:
in response to the destination receiving the protocol packet, parsing the protocol packet and placing the messages it carries into the corresponding receiving queues according to the sequence numbers carried in the packet.
In some embodiments, further comprising:
parsing the protocol packet using a single CPU core.
Finally, it should be noted that, as will be understood by those skilled in the art, all or part of the processes of the methods of the above embodiments may be implemented by a computer program, which may be stored in a computer-readable storage medium, and when executed, may include the processes of the embodiments of the methods described above.
Further, it should be appreciated that the computer-readable storage media (e.g., memory) herein can be either volatile memory or nonvolatile memory, or can include both volatile and nonvolatile memory.
Those of skill would further appreciate that the various illustrative logical blocks, modules, circuits, and algorithm steps described in connection with the disclosure herein may be implemented as electronic hardware, computer software, or combinations of both. To clearly illustrate this interchangeability of hardware and software, various illustrative components, blocks, modules, circuits, and steps have been described above generally in terms of their functionality. Whether such functionality is implemented as software or hardware depends upon the particular application and design constraints imposed on the overall system. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the disclosed embodiments of the present invention.
The foregoing is an exemplary embodiment of the present disclosure, but it should be noted that various changes and modifications could be made herein without departing from the scope of the present disclosure as defined by the appended claims. The functions, steps and/or actions of the method claims in accordance with the disclosed embodiments described herein need not be performed in any particular order. Furthermore, although elements of the disclosed embodiments of the invention may be described or claimed in the singular, the plural is contemplated unless limitation to the singular is explicitly stated.
It should be understood that, as used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless the context clearly supports the exception. It should also be understood that "and/or" as used herein is meant to include any and all possible combinations of one or more of the associated listed items.
The numbers of the embodiments disclosed in the embodiments of the present invention are merely for description, and do not represent the merits of the embodiments.
It will be understood by those skilled in the art that all or part of the steps for implementing the above embodiments may be implemented by hardware, or may be implemented by a program instructing relevant hardware, and the program may be stored in a computer-readable storage medium, and the above-mentioned storage medium may be a read-only memory, a magnetic disk or an optical disk, etc.
Those of ordinary skill in the art will understand that: the discussion of any embodiment above is meant to be exemplary only, and is not intended to intimate that the scope of the disclosure, including the claims, of embodiments of the invention is limited to these examples; within the idea of an embodiment of the invention, also technical features in the above embodiment or in different embodiments may be combined and there are many other variations of the different aspects of the embodiments of the invention as described above, which are not provided in detail for the sake of brevity. Therefore, any omissions, modifications, substitutions, improvements, and the like that may be made without departing from the spirit and principles of the embodiments of the present invention are intended to be included within the scope of the embodiments of the present invention.

Claims (10)

1. A method for message transmission, comprising the steps of:
in response to receiving a message transmission request, placing a message to be transmitted in a message transmission queue corresponding to a module generating the request;
acquiring port resources and a channel;
dequeuing a plurality of messages to be transmitted from the message transmission queue in response to the acquired port resources and the acquired channel meeting a dequeue condition;
packing the messages to be transmitted and the sequence numbers corresponding to the message transmission queues into a protocol packet;
and transmitting the protocol packet to a destination end through an interface.
2. The method of claim 1, further comprising:
a separate CPU core is allocated for each module.
3. The method of claim 1, further comprising:
allocating different priorities to the message transmission queues corresponding to different modules, so that the order in which port resources and channels are acquired is determined by priority.
4. The method of claim 1, further comprising:
in response to the destination receiving the protocol packet, parsing the protocol packet and placing the messages it carries into the corresponding receiving queues according to the sequence numbers carried in the packet.
5. The method of claim 4, further comprising:
parsing the protocol packet using a single CPU core.
6. A message transmission system, comprising:
a receiving module configured to, in response to receiving a message transmission request, place a message to be transmitted into a message transmission queue corresponding to the module generating the request;
an acquisition module configured to acquire port resources and a channel;
a dequeue module configured to dequeue a plurality of messages to be transmitted from the message transmission queue in response to the acquired port resources and the acquired channel meeting a dequeue condition;
a packing module configured to pack the plurality of messages to be transmitted, together with the sequence numbers corresponding to the message transmission queues, into a protocol packet; and
a transmission module configured to transmit the protocol packet to a destination end through an interface.
7. The system of claim 6, further comprising a first assignment module configured to:
a separate CPU core is allocated for each module.
8. The system of claim 6, further comprising a second assignment module configured to:
and allocating different priorities to the message transmission queues corresponding to different modules so as to determine the sequence of acquiring the port resources and the channels according to the priorities.
9. A computer device, comprising:
at least one processor; and
memory storing a computer program operable on the processor, characterized in that the processor executes the program to perform the steps of the method according to any of claims 1-5.
10. A computer-readable storage medium, in which a computer program is stored which, when being executed by a processor, is adapted to carry out the steps of the method according to any one of claims 1-5.
CN202111065324.XA 2021-09-12 2021-09-12 Message transmission method, system, equipment and medium Active CN114363269B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111065324.XA CN114363269B (en) 2021-09-12 2021-09-12 Message transmission method, system, equipment and medium

Publications (2)

Publication Number Publication Date
CN114363269A true CN114363269A (en) 2022-04-15
CN114363269B CN114363269B (en) 2023-07-25

Family

ID=81095517

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111065324.XA Active CN114363269B (en) 2021-09-12 2021-09-12 Message transmission method, system, equipment and medium

Country Status (1)

Country Link
CN (1) CN114363269B (en)


Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2019128535A1 (en) * 2017-12-25 2019-07-04 腾讯科技(深圳)有限公司 Message management method and device, and storage medium
CN111756811A (en) * 2020-05-29 2020-10-09 苏州浪潮智能科技有限公司 Method, system, device and medium for actively pushing distributed system
CN112187655A (en) * 2020-09-29 2021-01-05 苏州浪潮智能科技有限公司 Controller cluster of storage system, message transmission method of controller cluster and readable storage medium


Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115314473A (en) * 2022-06-21 2022-11-08 中化学交通建设集团有限公司 Communication method based on LoRa gateway, related equipment and monitoring system
CN116760510A (en) * 2023-08-15 2023-09-15 苏州浪潮智能科技有限公司 Message sending method, message receiving method, device and equipment
CN116760510B (en) * 2023-08-15 2023-11-03 苏州浪潮智能科技有限公司 Message sending method, message receiving method, device and equipment

Also Published As

Publication number Publication date
CN114363269B (en) 2023-07-25


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant