CN110830386A - Method, device and system for preserving order of messages - Google Patents

Method, device and system for preserving order of messages

Info

Publication number
CN110830386A
CN110830386A (application CN201911110264.1A)
Authority
CN
China
Prior art keywords
data frame
message
kernel
buffer
interface
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201911110264.1A
Other languages
Chinese (zh)
Other versions
CN110830386B (en)
Inventor
叶飞
许林
郑波
黄波
杜振业
李超然
梁玉涛
王庆年
张昕
黄灿
付建强
淳增辉
徐鹏飞
陈昊
汤灵
邓松
郭浩
范颖
周愚
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Wuhan Institute of Ship Communication (China Shipbuilding Industry Corp No. 722 Institute)
Original Assignee
Wuhan Institute of Ship Communication (China Shipbuilding Industry Corp No. 722 Institute)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Wuhan Institute of Ship Communication (China Shipbuilding Industry Corp No. 722 Institute)
Priority to CN201911110264.1A
Publication of CN110830386A
Application granted
Publication of CN110830386B
Legal status: Active

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 47/00: Traffic control in data switching networks
    • H04L 47/50: Queue scheduling
    • H04L 47/62: Queue scheduling characterised by scheduling criteria
    • H04L 47/622: Queue service order
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04B: TRANSMISSION
    • H04B 10/00: Transmission systems employing electromagnetic waves other than radio-waves, e.g. infrared, visible or ultraviolet light, or employing corpuscular radiation, e.g. quantum communication
    • H04B 10/25: Arrangements specific to fibre transmission
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 47/00: Traffic control in data switching networks
    • H04L 47/10: Flow control; Congestion control
    • H04L 47/31: Flow control; Congestion control by tagging of packets, e.g. using discard eligibility [DE] bits
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 69/00: Network arrangements, protocols or services independent of the application payload and not provided for in the other groups of this subclass
    • H04L 69/22: Parsing or analysis of headers

Abstract

The invention discloses a method, a device and a system for preserving the order of messages, and belongs to the field of unidirectional data transmission. The method comprises the following steps: at a message sending end, a network interface receives a plurality of messages and distributes them to the cores of a multi-core processor; each core processes the messages distributed to it, encapsulates each message together with the identifier of that core into a data frame, and stores the data frame in the cache queue corresponding to that core; the host sends the data frames in each cache queue to a unidirectional transmission card; the unidirectional transmission card converts the data frames into transmission signals and sends them onto an optical fiber. At a message receiving end, the unidirectional transmission card receives the transmission signals from the optical fiber and converts them back into data frames; according to the core identifier in each data frame, the host stores the message in the data frame in the cache queue corresponding to the core with that identifier; each core reads the messages in its cache queue and processes them; the network interface then sends the messages out. The method, device and system can preserve the order of the messages.

Description

Method, device and system for preserving order of messages
Technical Field
The present disclosure relates to the field of unidirectional data transmission, and in particular to a method, an apparatus, and a system for preserving the order of messages.
Background
A message is a unit of data exchanged and transmitted in a network, i.e., a block of data that a station sends at one time. A message contains the complete data to be sent, and its length is variable rather than fixed. Before transmission, control information is added to the message header, and the message is packaged into groups, packets, and frames for transmission.
Usually, a sending end sends messages in chronological order, and the messages reach a receiving end after being forwarded by several intermediate nodes. However, the order in which the receiving end receives the messages may differ from the order in which the sending end sent them, so the messages arrive out of order.
In a common bidirectional transmission network, network devices generally rely on sequence numbers, windows, retransmission mechanisms, and the like to ensure that the data received at the receiving end is correct and complete.
In the course of implementing the present disclosure, the inventors found that the prior art has at least the following problems:
on a unidirectional transmission link based on a unidirectional transmission card, there is no return path, so the Transmission Control Protocol (TCP), which relies on a retransmission mechanism, cannot work normally; if messages become out of order during transmission, the receiving end cannot recover the data correctly. In addition, because the receive buffer of the receiving end is limited in size, messages that arrive out of order may fall outside the receive window of the message receiving end, which affects correct reception of the data.
Disclosure of Invention
The embodiments of the present disclosure provide a method, a device and a system for preserving the order of messages, which can preserve message order and ensure reliable data transmission on a unidirectional transmission link. The technical solution is as follows:
In a first aspect, an embodiment of the present disclosure provides a method for preserving the order of messages, applied to a first host at a message sending end, where the first host includes a first network interface, a multi-core processor composed of a plurality of first cores, a plurality of first cache queues in one-to-one correspondence with the plurality of first cores, and a first unidirectional transmission card; each first core is connected with the first network interface, each first cache queue is connected with its corresponding first core, and the first unidirectional transmission card is connected with the plurality of first cache queues; the method includes:
the first network interface receives a plurality of messages and distributes the messages to the first cores;
each first core processes the messages distributed to it, encapsulates each processed message and the identifier of the first core into a data frame, and stores the data frame in the first cache queue corresponding to the first core;
the first host sends the data frames in each first cache queue to the first unidirectional transmission card;
and the first unidirectional transmission card converts the data frames into transmission signals and sends the transmission signals onto an optical fiber.
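The steps above can be illustrated with a minimal Python sketch. It is not part of the patent: the frame layout, the 1-byte core-identifier field, and all names used here are assumptions made for the example. Each core's result is packed together with that core's identifier into a data frame and appended to the FIFO cache queue belonging to that core.

    import struct
    from collections import deque

    # Hypothetical frame layout: 1-byte core identifier, 2-byte payload length, payload.
    FRAME_HEADER = struct.Struct("!BH")

    def encapsulate(core_id: int, message: bytes) -> bytes:
        """Pack a processed message together with the identifier of the core that handled it."""
        return FRAME_HEADER.pack(core_id, len(message)) + message

    class SendingHost:
        def __init__(self, num_cores: int) -> None:
            # One FIFO cache queue per first core; FIFO order keeps per-core message order.
            self.cache_queues = [deque() for _ in range(num_cores)]

        def on_core_done(self, core_id: int, message: bytes) -> None:
            """Called after core `core_id` has finished processing `message`."""
            self.cache_queues[core_id].append(encapsulate(core_id, message))

    # Example: two cores each finish one message; each frame carries its core's identifier.
    host = SendingHost(num_cores=2)
    host.on_core_done(0, b"message handled by core 0")
    host.on_core_done(1, b"message handled by core 1")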
Optionally, the first unidirectional transport card includes a first bus interface, a plurality of third buffer queues, a plurality of first processing circuits in one-to-one correspondence with the plurality of third buffer queues, a merging circuit, and a sending interface, where the first bus interface is connected to the plurality of first buffer queues, the plurality of third buffer queues are connected to the first bus interface, each of the first processing circuits is connected to a third buffer queue corresponding to the first processing circuit, the merging circuit is connected to the plurality of first processing circuits, and the sending interface is connected to the merging circuit;
the first unidirectional transmission card converts the data frame into a transmission signal and sends the transmission signal to an optical fiber, and the method comprises the following steps:
the first bus interface receives a plurality of data frames in sequence, sets a label for each data frame according to the sequence of receiving the plurality of data frames, and encapsulates the label set for the data frame in the data frame;
the first unidirectional transmission card stores a plurality of data frames in the plurality of third buffer queues;
each first processing circuit reads a data frame in a third buffer queue corresponding to the first processing circuit and converts the data frame into a transmission signal;
the merging circuit merges the transmission signals converted by the plurality of first processing circuits into a queue;
and the sending interface sends out the combined transmission signals in sequence.
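The following Python sketch illustrates the sender-card behaviour described in the optional steps above. It is only an illustration: the 4-byte label width, the round-robin distribution over the processing circuits, and the identity "conversion" are assumptions rather than details taken from the patent.

    from collections import deque
    from itertools import cycle

    def convert(labelled_frame: bytes) -> bytes:
        """Stand-in for a first processing circuit (e.g. encoding or encryption)."""
        return labelled_frame  # identity here; a real card would transform the frame

    class SenderCard:
        def __init__(self, num_circuits: int) -> None:
            self.next_label = 0
            # One "third buffer queue" per first processing circuit.
            self.circuit_queues = [deque() for _ in range(num_circuits)]
            self._dispatch = cycle(range(num_circuits))  # assumed round-robin distribution

        def bus_receive(self, frame: bytes) -> None:
            # The bus interface labels frames in the order they arrive, then buffers them.
            self.circuit_queues[next(self._dispatch)].append(
                self.next_label.to_bytes(4, "big") + frame)
            self.next_label += 1

        def merge_and_send(self) -> list:
            # Each circuit converts the frames in its own queue; the merging circuit then
            # combines all converted signals into one queue for the sending interface.
            # Completion order across circuits is not guaranteed, which is exactly why
            # the label is needed at the receiving end.
            merged = []
            for queue in self.circuit_queues:
                while queue:
                    merged.append(convert(queue.popleft()))
            return merged

Because the label travels with the frame, the receiving card can restore the arrival order no matter how the parallel circuits interleave their outputs.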
In a second aspect, an embodiment of the present disclosure provides a method for preserving the order of messages, applied to a second host at a message receiving end, where the second host includes a second network interface, a multi-core processor composed of a plurality of second cores, a plurality of second cache queues in one-to-one correspondence with the plurality of second cores, and a second unidirectional transmission card; each second core is connected with the second network interface, each second cache queue is connected with its corresponding second core, and the second unidirectional transmission card is connected with the plurality of second cache queues; the method includes:
the second unidirectional transmission card receives a transmission signal from an optical fiber and converts the transmission signal into a data frame; the data frame comprises a message and the identifier of a core;
the second host stores, according to the identifier of the core in the data frame, the message in the data frame in the second cache queue corresponding to the second core with that identifier;
the second core reads the message in the second cache queue corresponding to the second core for processing;
and the second network interface sends out the processed message.
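A minimal sketch of the receiving-end method above, using the same illustrative frame layout as the sending-end sketch (a 1-byte core identifier followed by a 2-byte payload length; these field sizes are assumptions, not taken from the patent):

    import struct
    from collections import deque

    FRAME_HEADER = struct.Struct("!BH")  # assumed layout: core identifier, payload length

    class ReceivingHost:
        def __init__(self, num_cores: int) -> None:
            # One "second cache queue" per second core.
            self.cache_queues = [deque() for _ in range(num_cores)]

        def on_frame(self, frame: bytes) -> None:
            # Read the core identifier from the data frame and enqueue the message for
            # the second core with the same identifier.
            core_id, length = FRAME_HEADER.unpack_from(frame)
            self.cache_queues[core_id].append(
                frame[FRAME_HEADER.size:FRAME_HEADER.size + length])

        def core_read(self, core_id: int):
            # Each second core drains only its own queue, so per-core order is preserved.
            queue = self.cache_queues[core_id]
            return queue.popleft() if queue else None

    host = ReceivingHost(num_cores=2)
    host.on_frame(struct.pack("!BH", 1, 5) + b"hello")
    assert host.core_read(1) == b"hello"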
Optionally, the second unidirectional transmission card includes a second bus interface, a plurality of fourth buffer queues, a plurality of second processing circuits corresponding to the plurality of fourth buffer queues one by one, an allocating circuit, and a receiving interface, where the second bus interface is connected to the plurality of second buffer queues respectively, the plurality of fourth buffer queues are connected to the second bus interface respectively, each of the second processing circuits is connected to a fourth buffer queue corresponding to the second processing circuit, the allocating circuit is connected to the plurality of second processing circuits respectively, and the receiving interface is connected to the allocating circuit;
the second unidirectional transmission card receives a transmission signal from an optical fiber, converts the transmission signal into a data frame, and comprises:
the receiving interface receives the transmission signal;
the distribution circuit distributes the transmission signal to the plurality of second processing circuits;
each second processing circuit converts the transmission signals distributed to it into data frames and stores the data frames in the fourth buffer queue corresponding to the second processing circuit; the data frame comprises a message, the identifier of a core, and a label;
and the second bus interface sends out the data frames in the plurality of fourth buffer queues in ascending order of the labels in the data frames.
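The optional receiver-card steps above amount to a k-way merge by label across the fourth buffer queues. The sketch below is illustrative only; the 4-byte label prefix matches the assumption used in the sender-card sketch and is not specified by the patent.

    from collections import deque

    def label_of(frame: bytes) -> int:
        return int.from_bytes(frame[:4], "big")  # assumed 4-byte label prefix

    def drain_in_label_order(fourth_queues: list) -> list:
        """Repeatedly take, among the heads of the non-empty queues, the frame with the
        smallest label, so frames leave the card in the order they entered the sender card."""
        ordered = []
        while any(fourth_queues):
            head = min((q for q in fourth_queues if q), key=lambda q: label_of(q[0]))
            ordered.append(head.popleft())
        return ordered

    # Frames labelled 0..3 were spread over two processing circuits and buffered out of order
    # across queues, but each queue is internally ascending because labels follow arrival order.
    q0 = deque([(1).to_bytes(4, "big") + b"B", (2).to_bytes(4, "big") + b"C"])
    q1 = deque([(0).to_bytes(4, "big") + b"A", (3).to_bytes(4, "big") + b"D"])
    assert [f[4:] for f in drain_in_label_order([q0, q1])] == [b"A", b"B", b"C", b"D"]

Because the bus interface assigns labels in arrival order, each fourth buffer queue is internally ascending, so comparing only the queue heads is enough to restore the global order.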
In a third aspect, an embodiment of the present disclosure provides a device for preserving an order of a packet, where the device is a first host of a packet sending end, and the first host includes a first network interface, a multi-core processor composed of a plurality of first cores, a plurality of first cache queues in one-to-one correspondence with the plurality of first cores, and a first unidirectional transmission card; each first core is connected with the first network interface, each first cache queue is connected with the first core corresponding to the first cache queue, and the first unidirectional transmission card is connected with the plurality of first cache queues;
the first network interface is configured to receive a plurality of messages and distribute the messages to the first cores;
each first core is configured to process the messages distributed to it, encapsulate each processed message and the identifier of the first core into a data frame, and store the data frame in the first cache queue corresponding to the first core;
the first host is used for sending the data frames in each first cache queue to the first unidirectional transmission card;
and the first one-way transmission card is used for converting the data frame into a transmission signal and sending the transmission signal to an optical fiber.
Optionally, the first unidirectional transport card includes a first bus interface, a plurality of third buffer queues, a plurality of first processing circuits in one-to-one correspondence with the plurality of third buffer queues, a merging circuit, and a sending interface, where the first bus interface is connected to the plurality of first buffer queues, the plurality of third buffer queues are connected to the first bus interface, each of the first processing circuits is connected to a third buffer queue corresponding to the first processing circuit, the merging circuit is connected to the plurality of first processing circuits, and the sending interface is connected to the merging circuit;
the first bus interface is used for receiving a plurality of data frames in sequence, setting labels for the data frames according to the sequence of receiving the data frames, and encapsulating the labels set for the data frames in the data frames;
the first unidirectional transmission card is used for storing a plurality of data frames in the plurality of third buffer queues;
each first processing circuit is configured to read a data frame in a third buffer queue corresponding to the first processing circuit, and convert the data frame into a transmission signal;
the merging circuit is used for merging the transmission signals converted by the plurality of first processing circuits into a queue;
and the sending interface is used for sending out the combined transmission signals in sequence.
In a fourth aspect, an embodiment of the present disclosure provides a device for preserving an order of a packet, where the device is a second host of a packet receiving end, and the second host includes a second network interface, a multi-core processor composed of a plurality of second cores, a plurality of second cache queues in one-to-one correspondence with the plurality of second cores, and a second unidirectional transmission card; each second core is connected with the second network interface, each second cache queue is connected with the second core corresponding to the second cache queue, and the second unidirectional transmission card is connected with the plurality of second cache queues;
the second unidirectional transmission card is used for receiving a transmission signal from an optical fiber and converting the transmission signal into a data frame; the data frame comprises a message and the identifier of a core;
the second host is used for storing, according to the identifier of the core in the data frame, the message in the data frame in the second cache queue corresponding to the second core with that identifier;
the second core is used for reading the message in a second cache queue corresponding to the second core for processing;
and the second network interface is used for sending out the processed message.
Optionally, the second unidirectional transmission card includes a second bus interface, a plurality of fourth buffer queues, a plurality of second processing circuits corresponding to the plurality of fourth buffer queues one by one, an allocating circuit, and a receiving interface, where the second bus interface is connected to the plurality of second buffer queues respectively, the plurality of fourth buffer queues are connected to the second bus interface respectively, each of the second processing circuits is connected to a fourth buffer queue corresponding to the second processing circuit, the allocating circuit is connected to the plurality of second processing circuits respectively, and the receiving interface is connected to the allocating circuit;
the receiving interface is used for receiving the transmission signal;
the distribution circuit is used for distributing the transmission signals to the plurality of second processing circuits;
each second processing circuit is configured to convert the transmission signals distributed to it into data frames, and store the data frames in the fourth buffer queue corresponding to the second processing circuit; the data frame comprises a message, the identifier of a core, and a label;
and the second bus interface is used for sequentially sending out the data frames in the plurality of fourth buffer queues in ascending order of the labels in the data frames.
In a fifth aspect, an embodiment of the present disclosure provides a system for preserving an order of a message, where the system includes a first host at a message sending end and a second host at a message receiving end; the first host comprises a first network interface, a multi-core processor consisting of a plurality of first cores, a plurality of first cache queues in one-to-one correspondence with the plurality of first cores, and a first unidirectional transmission card; each first core is connected with the first network interface, each first cache queue is connected with the first core corresponding to the first cache queue, and the first unidirectional transmission card is connected with the plurality of first cache queues; the second host comprises a second network interface, a multi-core processor consisting of a plurality of second cores, a plurality of second cache queues in one-to-one correspondence with the plurality of second cores, and a second unidirectional transmission card; each second core is connected with the second network interface, each second cache queue is connected with the second core corresponding to the second cache queue, and the second unidirectional transmission card is connected with the plurality of second cache queues;
the first network interface is configured to receive a plurality of messages and distribute the messages to the first cores;
each first core is configured to process the messages distributed to it, encapsulate each processed message and the identifier of the first core into a data frame, and store the data frame in the first cache queue corresponding to the first core;
the first host is used for sending the data frames in each first buffer queue to the first unidirectional transmission card;
the first unidirectional transmission card is used for converting the data frame into a transmission signal and sending the transmission signal to an optical fiber;
the second unidirectional transmission card is used for receiving a transmission signal from the optical fiber and converting the transmission signal into a data frame; the data frame comprises a message and the identifier of a core;
the second host is used for storing, according to the identifier of the core in the data frame, the message in the data frame in the second cache queue corresponding to the second core with that identifier;
the second core is used for reading the message in a second cache queue corresponding to the second core for processing;
and the second network interface is used for sending out the processed message.
Optionally, the first unidirectional transport card includes a first bus interface, a plurality of third buffer queues, a plurality of first processing circuits in one-to-one correspondence with the plurality of third buffer queues, a merging circuit, and a sending interface, where the first bus interface is connected to the plurality of first buffer queues, the plurality of third buffer queues are connected to the first bus interface, each of the first processing circuits is connected to a third buffer queue corresponding to the first processing circuit, the merging circuit is connected to the plurality of first processing circuits, and the sending interface is connected to the merging circuit; the second unidirectional transmission card comprises a second bus interface, a plurality of fourth buffer queues, a plurality of second processing circuits in one-to-one correspondence with the fourth buffer queues, a distribution circuit and a receiving interface, wherein the second bus interface is respectively connected with the second buffer queues, the fourth buffer queues are respectively connected with the second bus interface, each second processing circuit is connected with the fourth buffer queue corresponding to the second processing circuit, the distribution circuit is respectively connected with the second processing circuits, and the receiving interface is connected with the distribution circuit;
the first bus interface is used for receiving a plurality of data frames in sequence, setting labels for the data frames according to the sequence of receiving the data frames, and encapsulating the labels set for the data frames in the data frames;
the first unidirectional transmission card is used for storing a plurality of data frames in the plurality of third buffer queues;
each first processing circuit is configured to read a data frame in a third buffer queue corresponding to the first processing circuit, and convert the data frame into a transmission signal;
the merging circuit is used for merging the transmission signals converted by the plurality of first processing circuits into a queue;
the transmission interface is used for sequentially transmitting the combined transmission signals;
the receiving interface is used for receiving the transmission signal;
the distribution circuit is used for distributing the transmission signals to the plurality of second processing circuits;
each second processing circuit is configured to convert the transmission signals distributed to it into data frames, and store the data frames in the fourth buffer queue corresponding to the second processing circuit; the data frame comprises a message, the identifier of a core, and a label;
and the second bus interface is used for sequentially sending out the data frames in the plurality of fourth buffer queues in ascending order of the labels in the data frames.
The technical scheme provided by the embodiment of the disclosure has the following beneficial effects:
according to the core identifier in each data frame, the message receiving end can distribute the message to the core with that identifier for processing, so that the same message is handled by cores with the same identifier at the message sending end and the message receiving end. This ensures that the order of the messages in the host serving as the message receiving end is consistent with their order in the host serving as the message sending end, that is, message order is preserved, which helps to improve network transmission performance and is particularly suitable for data transmission over a unidirectional transmission link.
Drawings
To describe the technical solutions in the embodiments of the present disclosure more clearly, the following briefly introduces the drawings required for describing the embodiments. Apparently, the drawings in the following description show merely some embodiments of the present disclosure, and a person of ordinary skill in the art may derive other drawings from these drawings without creative efforts.
Fig. 1 is an application scenario diagram of a method for preserving an order of a packet according to an embodiment of the present disclosure;
fig. 2 is a flowchart of a method for preserving an order of a packet according to an embodiment of the present disclosure;
fig. 3 is a flowchart of a method for preserving an order of a packet according to an embodiment of the present disclosure;
fig. 4 is a schematic diagram illustrating an implementation of message order preservation under a unidirectional transport card structure according to an embodiment of the present disclosure;
fig. 5 is a schematic diagram illustrating an implementation of message order preservation under another unidirectional transport card structure according to an embodiment of the present disclosure.
Detailed Description
To make the objects, technical solutions and advantages of the present disclosure more apparent, embodiments of the present disclosure will be described in detail with reference to the accompanying drawings.
First, the application scenario of the method for preserving the order of messages provided by the embodiments of the present disclosure is briefly introduced. At present, processor performance grows far more slowly than the demands that growing data volumes place on it, and a processor alone cannot meet the performance requirements of high-performance computing application software. A common solution is heterogeneous computing with a dedicated coprocessor: operations related to data transmission are handed to a unidirectional transmission card, while the processor focuses on processing the information carried by the data. In addition, a multi-core processor, i.e., a processor integrating at least two computing engines (cores), can effectively increase processing speed and performance while avoiding the excessive heat and the greatly increased cost that would result from raising the speed of a single-core processor. Moreover, a unidirectional transmission card improves network security.
Fig. 1 is a diagram of an application scenario of a method for preserving the order of messages according to an embodiment of the present disclosure. Referring to fig. 1, a first host 10 serving as the message sending end includes a first network interface 11, a multi-core processor composed of a plurality of first cores 12, a plurality of first buffer queues 13 in one-to-one correspondence with the plurality of first cores 12, and a first unidirectional transmission card 14. The first network interface 11 provides an interface for receiving messages from local users; the plurality of first cores 12 are respectively connected with the first network interface 11 and each processes different messages; each first buffer queue 13 is connected with its corresponding first core 12 and stores the messages processed by that first core 12; the first unidirectional transmission card 14 is connected with the plurality of first buffer queues 13 and, as a network device of the host, takes over data-transmission-related operations such as encryption and encoding from the multi-core processor, so as to transmit messages between the two hosts.
A second host 20 serving as the message receiving end includes a second network interface 21, a multi-core processor composed of a plurality of second cores 22, a plurality of second buffer queues 23 in one-to-one correspondence with the plurality of second cores 22, and a second unidirectional transmission card 24. The second network interface 21 provides an interface for sending messages to local users; the plurality of second cores 22 are respectively connected with the second network interface 21 and each processes different messages; each second buffer queue 23 is connected with its corresponding second core 22 and stores the messages processed by that second core 22; the second unidirectional transmission card 24 is connected with the plurality of second buffer queues 23 and, as a network device of the host, takes over data-transmission-related operations such as decryption and decoding from the multi-core processor, so as to transmit messages between the two hosts.
In practical applications, because the multi-core processor is composed of a plurality of cores and each core takes a different amount of time to process messages, the order of the messages in the unidirectional transmission card may differ from their order at the network interface of the same host. As a result, the order of the messages in the second host 20 serving as the message receiving end differs from their order in the first host 10 serving as the message sending end, that is, the messages become out of order. For example, in the first host 10 as the message sending end, although the first network interface 11 receives the first message before the second message, the first core 12 to which the first message is assigned takes longer to process it than the first core 12 to which the second message is assigned, so the first unidirectional transmission card 14 sends the second message before the first message. In the second host 20 as the message receiving end, the second message is therefore received before the first message, so the order of the messages in the second host 20 as the message receiving end differs from their order in the first host 10 as the message sending end, and the messages are out of order.
In order to solve the above problem, an embodiment of the present disclosure provides a method for preserving an order of a message, where the method is applicable to a first host of a message sending end. The first host comprises a first network interface, a multi-core processor consisting of a plurality of first cores, a plurality of first cache queues in one-to-one correspondence with the plurality of first cores, and a first unidirectional transmission card; each first core is connected with a first network interface, each first cache queue is connected with the first core corresponding to the first cache queue, and the first unidirectional transmission card is connected with the plurality of first cache queues. Fig. 2 is a flowchart of a method for preserving an order of a packet according to an embodiment of the present disclosure. Referring to fig. 2, the method includes:
step 101: the first network interface receives a plurality of messages and distributes the plurality of messages to each first inner core.
Step 102: each first core processes the messages distributed to it, encapsulates each processed message and the identifier of the first core into a data frame, and stores the data frame in the first cache queue corresponding to the first core.
Step 103: and the first host sends the data frames in each first buffer queue to the first unidirectional transmission card.
Step 104: the first unidirectional transmission card converts the data frame into a transmission signal and transmits the transmission signal onto the optical fiber.
At the message sending end, the identifier of the core that processes a message is encapsulated into the data frame together with the message. At the message receiving end, the message can therefore be distributed, according to the core identifier in the data frame, to the core with that identifier for processing, so that the same message is handled by cores with the same identifier at both ends. This ensures that the order of the messages in the host at the message receiving end is consistent with their order in the host at the message sending end, that is, message order is preserved, which helps to improve network transmission performance and is particularly suitable for data transmission over a unidirectional transmission link.
The cache corresponding to each core is implemented as a cache queue, and the queue is read and written in first-in-first-out order, so the order of the data frames processed by the same core remains unchanged. This also preserves message order, helps to improve network transmission performance, and is particularly suitable for data transmission over a unidirectional transmission link.
Correspondingly, the embodiment of the disclosure provides a message order-preserving method, which is applicable to a second host of a message receiving end. The second host comprises a second network interface, a multi-core processor consisting of a plurality of second cores, a plurality of second cache queues in one-to-one correspondence with the plurality of second cores, and a second unidirectional transmission card; each second core is connected with a second network interface, each second cache queue is connected with the core corresponding to the second cache queue, and the second one-way transmission card is connected with the plurality of second cache queues. Fig. 3 is a flowchart of a method for preserving an order of a packet according to an embodiment of the present disclosure. Referring to fig. 3, the method includes:
step 201: the second unidirectional transmission card receives the transmission signal from the optical fiber, converts the transmission signal into a data frame, and sends the data frame to the host.
In this embodiment, the data frame includes the message and the identifier of a core.
Step 202: according to the identifier of the core in the data frame, the second host stores the message in the data frame in the second cache queue corresponding to the second core with that identifier.
Step 203: the second core reads the message in the second cache queue corresponding to the second core for processing.
Step 204: and the second network interface sends out the processed message.
The received transmission signal is converted into a data frame that includes the message and the identifier of a core, and the message in the data frame is distributed, according to that identifier, to the core with the same identifier for processing. The same message is thus handled by cores with the same identifier at the message sending end and the message receiving end, which ensures that the order of the messages in the host at the receiving end is consistent with their order in the host at the sending end. Message order is thereby preserved, which helps to improve network transmission performance and is particularly suitable for data transmission over a unidirectional transmission link.
Fig. 4 is a schematic diagram illustrating an implementation of message order preservation under a unidirectional transport card structure according to an embodiment of the present disclosure. Referring to fig. 4, at a message sending end, a first host 10 includes a first network interface 11, a multi-core processor composed of a plurality of first cores 12, a plurality of first buffer queues 13 corresponding to the plurality of first cores 12 one to one, and a first unidirectional transmission card 14; the first unidirectional transport card 14 includes a first bus interface 51, a third buffer queue 52, a first processing circuit 53, and a transmission interface 54, which are connected in sequence, and the first bus interface 51 is connected to the plurality of first buffer queues 13 in the first host 10, respectively.
At the message receiving end, the second host 20 includes a second network interface 21, a multi-core processor composed of a plurality of second cores 22, a plurality of second buffer queues 23 corresponding to the plurality of second cores 22 one by one, and a second unidirectional transmission card 24; the second unidirectional transmission card 24 includes a second bus interface 61, a fourth buffer queue 62, a second processing circuit 63, and a receiving interface 64, which are connected in sequence, and the second bus interface 61 is connected to the plurality of second buffer queues 23 in the second host 20, respectively.
When the unidirectional transmission card with the structure is adopted, the first host 10 and the second host 20 realize message order preservation by adopting the following steps:
step 301: the first network interface receives a plurality of messages and distributes the plurality of messages to each first inner core.
Step 302: each first core processes the messages distributed to it, encapsulates each processed message and the identifier of the first core into a data frame, and stores the data frame in the first cache queue corresponding to the first core.
In practical applications, each first core processes the messages according to the actual processing requirements.
For example, if the first cores 12 in the first host 10 are numbered 0, 1, 2, and so on, a message processed by the first core 12 numbered 0 is encapsulated together with the number 0 into a data frame, a message processed by the first core 12 numbered 1 is encapsulated together with the number 1 into a data frame, a message processed by the first core 12 numbered 2 is encapsulated together with the number 2 into a data frame, and so on.
Step 303: and the first host sends the data frames in each first buffer queue to the first unidirectional transmission card.
Optionally, this step 303 may include:
the first host reads the data frames in each first buffer queue in a polling manner and sends them to the first unidirectional transmission card in the order in which they are read.
By reading the data frames in the buffer queues in turn, data frames processed at the same time by the cores corresponding to the respective buffer queues are sent to the unidirectional transmission card one after another, which avoids the disorder that would result from mixing data frames that the cores processed at different times.
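A sketch of the polling read in step 303, in Python and illustrative only; that the host takes at most one data frame per queue on each pass is an assumption about how the polling is interleaved, not a detail taken from the patent.

    from collections import deque

    def poll_cache_queues(cache_queues):
        """Round-robin over the per-core buffer queues, yielding at most one data frame
        from each queue per pass, in queue order."""
        while any(cache_queues):
            for queue in cache_queues:
                if queue:
                    yield queue.popleft()

    # Core 0 produced frames a0 and a1; core 1 produced b0. The first pass yields a0, b0,
    # the second pass yields a1, so frames from the same round stay together.
    queues = [deque([b"a0", b"a1"]), deque([b"b0"])]
    assert list(poll_cache_queues(queues)) == [b"a0", b"b0", b"a1"]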
Step 3041: the first bus interface receives a data frame.
In practical applications, the bus interface may be a PCIe (Peripheral Component Interconnect Express) interface.
Step 3042: the first unidirectional transmission card stores the data frame in a third buffer queue.
Step 3043: the first processing circuit reads the data frame in the third buffer queue and converts the data frame into a transmission signal.
In practical applications, the first processing circuit may implement encoding, encryption, and the like.
Step 3044: the transmission interface sends out the transmission signal.
Step 3051: the receiving interface receives a transmission signal.
Step 3052: the second processing circuit converts the transmission signal into a data frame and stores the data frame in a fourth buffer queue.
In this embodiment, the data frame includes the message and the identifier of a core.
In practical applications, the second processing circuit may implement decoding, decryption, and the like.
Step 3053: and the second bus interface sends out the data frame in the fourth buffer queue.
Step 306: according to the identifier of the core in the data frame, the second host stores the message in the data frame in the second cache queue corresponding to the second core with that identifier.
For example, if the first cores 12 in the first host 10 are numbered 0, 1, 2, and so on, and the second cores 22 in the second host 20 are numbered 0, 1, 2, and so on, then the message in a data frame carrying the number 0 is stored in the second cache queue corresponding to the second core 22 numbered 0, the message in a data frame carrying the number 1 is stored in the second cache queue corresponding to the second core 22 numbered 1, the message in a data frame carrying the number 2 is stored in the second cache queue corresponding to the second core 22 numbered 2, and so on.
Step 307: the second core reads the messages in the second cache queue corresponding to the second core for processing.
In practical applications, each second core processes the messages according to the actual processing requirements.
Step 308: and the second network interface sends out the processed message.
Fig. 5 is a schematic diagram illustrating an implementation of message order preservation under another unidirectional transport card structure according to an embodiment of the present disclosure. Referring to fig. 5, at a message sending end, a first host 10 includes a first network interface 11, a multi-core processor composed of a plurality of first cores 12, a plurality of first buffer queues 13 corresponding to the plurality of first cores 12 one to one, and a first unidirectional transmission card 14; the first unidirectional transport card 14 includes a first bus interface 51, a plurality of third buffer queues 52, a plurality of processing circuits 53 corresponding to the plurality of third buffer queues 52 one by one, a merging circuit 55, and a transmission interface 54, the first bus interface 51 is connected to the plurality of first buffer queues 13 in the first host 10, the plurality of third buffer queues 52 are connected to the first bus interface 51, the respective first processing circuits 53 are connected to the third buffer queues 52 corresponding to the first processing circuits 53, the merging circuit 55 is connected to the plurality of first processing circuits 53, and the transmission interface 54 is connected to the merging circuit 55.
At the message receiving end, the second host 20 includes a second network interface 21, a multi-core processor composed of a plurality of second cores 22, a plurality of second buffer queues 23 in one-to-one correspondence with the plurality of second cores 22, and a second unidirectional transmission card 24; the second unidirectional transmission card 24 includes a second bus interface 61, a plurality of fourth buffer queues 62, a plurality of second processing circuits 63 in one-to-one correspondence with the plurality of fourth buffer queues 62, an allocating circuit 65, and a receiving interface 64; the second bus interface 61 is connected with the plurality of second buffer queues 23 in the second host 20, the plurality of fourth buffer queues 62 are connected with the second bus interface 61, each second processing circuit 63 is connected with its corresponding fourth buffer queue 62, the allocating circuit 65 is connected with the plurality of second processing circuits 63, and the receiving interface 64 is connected with the allocating circuit 65.
In this embodiment of the present disclosure, the unidirectional transmission card is implemented with a plurality of processing circuits, which effectively increases the processing speed and performance of the card without causing the problems of excessive heat and excessive cost.
Meanwhile, because the processing circuits take different amounts of time to process the messages, the order in which the messages enter the unidirectional transmission card may differ from the order in which they leave it, so that the order of the messages in the second host 20 serving as the message receiving end differs from their order in the first host 10 serving as the message sending end, that is, the messages become out of order.
When the unidirectional transmission card with the structure is adopted, the first host 10 and the second host 20 realize message order preservation by adopting the following steps:
step 401: the first network interface receives a plurality of messages and distributes the plurality of messages to each first inner core.
Step 402: each first core processes the messages distributed to it, encapsulates each processed message and the identifier of the first core into a data frame, and stores the data frame in the first cache queue corresponding to the first core.
Step 403: and the first host sends the data frames in each first buffer queue to the first unidirectional transmission card.
Step 4041: the first bus interface receives a plurality of data frames in sequence, sets labels for the data frames according to the sequence of receiving the data frames, and encapsulates the labels set for the data frames in the data frames.
Step 4042: the first unidirectional transmission card stores a plurality of data frames in a plurality of third buffer queues.
Step 4043: and each first processing circuit reads the data frame in the third buffer queue corresponding to the first processing circuit and converts the data frame into a transmission signal.
Step 4044: the combining circuit combines the transmission signals converted by the plurality of first processing circuits into one queue.
Step 4045: and the sending interface sends out the combined transmission signals in sequence.
Step 4051: the receiving interface receives a transmission signal.
Step 4052: the distribution circuit distributes the transmission signal to the plurality of second processing circuits.
Step 4053: each second processing circuit converts the transmission signal distributed to the second processing circuit into a data frame, and stores the data frame in a fourth buffer queue corresponding to the second processing circuit.
In this embodiment, the data frame includes the message, the identifier of a core, and the label.
Step 4054: the second bus interface sends out the data frames in the plurality of fourth buffer queues in ascending order of the labels in the data frames.
Optionally, step 4054 may include:
the second bus interface reads, in a polling manner, the data frame with the smallest label among the fourth buffer queues and sends it out.
By repeatedly reading and sending out the frame with the smallest label among the buffer queues in a polling manner, the order in which the messages leave the unidirectional transmission card at the message receiving end is kept consistent with the order in which they entered the unidirectional transmission card at the message sending end, that is, the order of the messages is preserved.
Step 406: according to the identifier of the core in the data frame, the second host stores the message in the data frame in the second cache queue corresponding to the second core with that identifier.
Step 407: the second core reads the messages in the second cache queue corresponding to the second core for processing.
Step 408: and the second network interface sends out the processed message.
At the message sending end, a label is set for each data frame according to the order in which the unidirectional transmission card receives the data frames, and the label is encapsulated in the data frame. At the message receiving end, the data frames can therefore be sent out of the unidirectional transmission card in ascending order of their labels, so that the order in which the messages leave the unidirectional transmission card at the receiving end is consistent with the order in which they entered the unidirectional transmission card at the sending end. Message order is thereby preserved, which helps to improve network transmission performance and is particularly suitable for data transmission over a unidirectional transmission link.
The embodiment of the disclosure provides a device for preserving the order of messages. Referring to fig. 1, the apparatus is a first host 10 of a message sending end, where the first host 10 includes a first network interface 11, a multi-core processor composed of a plurality of first cores 12, a plurality of first buffers 13 corresponding to the plurality of first cores 12 one to one, and a first unidirectional transmission card 14; the plurality of first cores 12 are respectively connected with the first network interface 11, each first buffer 13 is respectively connected with the corresponding first core 12, and the first unidirectional transmission card 14 is respectively connected with the plurality of first buffers 13.
A first network interface 11, configured to receive multiple packets and distribute the multiple packets to each first core;
each first core 12 is configured to process the messages distributed to it, encapsulate each processed message and the identifier of the first core into a data frame, and store the data frame in the first cache queue corresponding to the first core;
a first host 10, configured to send the data frames in each first buffer queue to a first unidirectional transmission card;
and a first unidirectional transmission card 14 for converting the data frame into a transmission signal and transmitting the transmission signal onto an optical fiber.
In an implementation manner of the embodiment of the present disclosure, the first unidirectional transmission card may include a first bus interface, a third buffer queue, a first processing circuit, and a sending interface, which are connected in sequence, where the first bus interface is connected to the plurality of first cache queues respectively;
a first bus interface for receiving a data frame;
the first unidirectional transmission card is used for storing the data frame in the third buffer queue;
the first processing circuit is used for reading the data frame in the third buffer queue and converting the data frame into a transmission signal;
and the sending interface is used for sending out the transmission signal.
In another implementation manner of the embodiment of the present disclosure, the unidirectional transport card may include a first bus interface, a plurality of third cache queues, a plurality of first processing circuits corresponding to the plurality of third cache queues one to one, a merging circuit, and a transmission interface, where the first bus interface is connected to the plurality of first cache queues respectively, the plurality of third cache queues are connected to the first bus interface respectively, each first processing circuit is connected to a third cache queue corresponding to the first processing circuit, the merging circuit is connected to the plurality of first processing circuits respectively, and the transmission interface is connected to the merging circuit;
the first bus interface is used for receiving a plurality of data frames in sequence, setting labels for the data frames according to the sequence of receiving the data frames and packaging the labels set for the data frames in the data frames;
the first unidirectional transmission card is used for storing a plurality of data frames in a plurality of third buffer queues;
each first processing circuit is used for reading the data frame in the third buffer queue corresponding to the first processing circuit and converting the data frame into a transmission signal;
the merging circuit is used for merging the transmission signals converted by the plurality of first processing circuits into one queue;
and the sending interface is used for sending out the combined transmission signals in sequence.
The embodiment of the disclosure provides a device for preserving the order of messages. Referring to fig. 1, the apparatus is a second host 20 at a message receiving end, where the second host 20 includes a second network interface 21, a multi-core processor composed of a plurality of second cores 22, a plurality of second buffers 23 corresponding to the plurality of second cores 22 one to one, and a second unidirectional transport card 24. The plurality of second cores 22 are respectively connected to the second network interface 21, each second buffer 23 is respectively connected to the corresponding second core 22, and the second unidirectional transmission card 24 is respectively connected to the plurality of second buffers 23.
The second unidirectional transmission card 24 is configured to receive the transmission signal from the optical fiber, convert the transmission signal into a data frame, and send the data frame to the second host; the data frame comprises a message and the identifier of a core;
the second host 20 is configured to store, according to the identifier of the core in the data frame, the message in the data frame in the second cache queue corresponding to the second core with that identifier.
And the second core 22 is configured to read a packet in a second cache queue corresponding to the second core for processing.
And the second network interface 21 is configured to send out the processed packet.
In an implementation manner of the embodiment of the present disclosure, the unidirectional transmission card may include a second bus interface, a fourth buffer queue, a second processing circuit, and a receiving interface, which are connected in sequence, where the second bus interface is connected to the plurality of second buffer queues respectively;
a receiving interface for receiving a transmission signal;
the second processing circuit is used for converting the transmission signal into a data frame and storing the data frame in the fourth buffer queue; the data frame comprises a message and the identifier of a core;
and the second bus interface is used for sending out the data frames in the fourth buffer queue.
In another implementation manner of the embodiment of the present disclosure, the unidirectional transmission card may include a second bus interface, a plurality of fourth buffer queues, a plurality of second processing circuits corresponding to the plurality of fourth buffer queues one by one, an allocating circuit, and a receiving interface, where the second bus interface is connected to the plurality of second buffer queues respectively, the plurality of fourth buffer queues are connected to the second bus interface respectively, each second processing circuit is connected to a fourth buffer queue corresponding to the second processing circuit, the allocating circuit is connected to the plurality of second processing circuits respectively, and the receiving interface is connected to the allocating circuit;
a receiving interface for receiving a transmission signal;
a distribution circuit for distributing the transmission signal to the plurality of second processing circuits;
each second processing circuit is used for converting the transmission signals distributed to the second processing circuit into data frames and storing the data frames in the fourth buffer queue corresponding to the second processing circuit; the data frame comprises a message, the identifier of the core, and a label;
and the second bus interface is used for sequentially sending out the data frames in the plurality of fourth buffer queues in ascending order of the labels in the data frames.
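The label-based reassembly performed by the second bus interface can be pictured with the short sketch below: frames decoded in parallel by several second processing circuits land in different fourth buffer queues and may be interleaved arbitrarily, and the bus interface hands them to the host in ascending label order. The batch sort is a simplification; a streaming implementation would repeatedly pick the queue whose head frame carries the smallest label. Names and types are illustrative assumptions.

package main

import (
	"fmt"
	"sort"
)

// Frame is the decoded data frame; Label is the arrival-order number the
// first bus interface stamped on it at the sender (field names assumed).
type Frame struct {
	Label   uint64
	CoreID  int
	Payload []byte
}

// emitInLabelOrder models the second bus interface: it drains the fourth
// buffer queues and emits the frames in ascending label order, restoring
// the order in which the sender's bus interface first saw them.
func emitInLabelOrder(fourthQueues [][]Frame) []Frame {
	var all []Frame
	for _, q := range fourthQueues {
		all = append(all, q...)
	}
	sort.Slice(all, func(i, j int) bool { return all[i].Label < all[j].Label })
	return all
}

func main() {
	// Two fourth buffer queues filled by two second processing circuits;
	// the labels are out of order because the circuits ran in parallel.
	fourthQueues := [][]Frame{
		{{Label: 1, CoreID: 0, Payload: []byte("m1")}, {Label: 3, CoreID: 1, Payload: []byte("m3")}},
		{{Label: 0, CoreID: 0, Payload: []byte("m0")}, {Label: 2, CoreID: 1, Payload: []byte("m2")}},
	}
	for _, f := range emitInLabelOrder(fourthQueues) {
		fmt.Printf("label=%d core=%d payload=%s\n", f.Label, f.CoreID, f.Payload)
	}
}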
The embodiment of the disclosure provides a system for preserving the order of messages. Referring to fig. 1, the system includes a first host 10 at a message transmitting end and a second host 20 at a message receiving end. The first host 10 and the second host 20 may be the same as the message order-preserving devices described in the foregoing embodiments, and details are not repeated here.
It should be noted that the division of functional modules in the message order-preserving apparatus provided in the foregoing embodiments is merely illustrative; in practical applications, the functions may be allocated to different functional modules as needed, that is, the internal structure of the apparatus may be divided into different functional modules to complete all or part of the functions described above. In addition, the message order-preserving apparatus and the message order-preserving method provided in the foregoing embodiments belong to the same concept; the specific implementation process of the apparatus is described in detail in the method embodiments and is not repeated here.
The above-mentioned serial numbers of the embodiments of the present disclosure are merely for description and do not represent the merits of the embodiments.
It will be understood by those skilled in the art that all or part of the steps for implementing the above embodiments may be implemented by hardware, or may be implemented by a program instructing relevant hardware, where the program may be stored in a computer-readable storage medium, and the above-mentioned storage medium may be a read-only memory, a magnetic disk or an optical disk, etc.
The above description is only exemplary of the present disclosure and is not intended to limit the present disclosure, so that any modification, equivalent replacement, or improvement made within the spirit and principle of the present disclosure should be included in the scope of the present disclosure.

Claims (10)

1. A method for preserving the order of messages, applicable to a first host of a message sending end, wherein the first host comprises a first network interface, a multi-core processor consisting of a plurality of first cores, a plurality of first cache queues in one-to-one correspondence with the plurality of first cores, and a first unidirectional transmission card; each first core is connected with the first network interface, each first cache queue is connected with the first core corresponding to the first cache queue, and the first unidirectional transmission card is connected with the plurality of first cache queues; characterized in that the method comprises:
the first network interface receives a plurality of messages and distributes the messages to the first cores;
each first core processes the message distributed to the first core, encapsulates the processed message and the identifier of the core into a data frame, and stores the data frame in the first cache queue corresponding to the first core;
the first host sends the data frames in each first cache queue to the first unidirectional transmission card;
and the first one-way transmission card converts the data frame into a transmission signal and sends the transmission signal to an optical fiber.
2. The method according to claim 1, wherein the first unidirectional transmission card comprises a first bus interface, a plurality of third buffer queues, a plurality of first processing circuits in one-to-one correspondence with the plurality of third buffer queues, a merging circuit, and a sending interface, wherein the first bus interface is connected to the plurality of first cache queues respectively, the plurality of third buffer queues are connected to the first bus interface respectively, each first processing circuit is connected to the third buffer queue corresponding to the first processing circuit, the merging circuit is connected to the plurality of first processing circuits respectively, and the sending interface is connected to the merging circuit;
the first unidirectional transmission card converts the data frame into a transmission signal and sends the transmission signal to an optical fiber, and the method comprises the following steps:
the first bus interface receives a plurality of data frames in sequence, sets a label for each data frame according to the order in which the plurality of data frames are received, and encapsulates the label in the corresponding data frame;
the first unidirectional transmission card stores a plurality of data frames in the plurality of third buffer queues;
each first processing circuit reads a data frame in a third buffer queue corresponding to the first processing circuit and converts the data frame into a transmission signal;
the merging circuit merges the transmission signals converted by the plurality of first processing circuits into a queue;
and the sending interface sends out the combined transmission signals in sequence.
3. A message order-preserving method, applicable to a second host of a message receiving end, wherein the second host comprises a second network interface, a multi-core processor consisting of a plurality of second cores, a plurality of second cache queues in one-to-one correspondence with the plurality of second cores, and a second unidirectional transmission card; each second core is connected with the second network interface, each second cache queue is connected with the second core corresponding to the second cache queue, and the second unidirectional transmission card is connected with the plurality of second cache queues; characterized in that the method comprises:
the second unidirectional transmission card receives a transmission signal from an optical fiber and converts the transmission signal into a data frame; wherein the data frame comprises a message and an identifier of a core;
the second host stores, according to the identifier of the core in the data frame, the message in the data frame in the second cache queue corresponding to the second core identified by the identifier of the core;
the second core reads the message in the second cache queue corresponding to the second core for processing;
and the second network interface sends out the processed message.
4. The method according to claim 3, wherein the second unidirectional transmission card comprises a second bus interface, a plurality of fourth buffer queues, a plurality of second processing circuits in one-to-one correspondence with the plurality of fourth buffer queues, a distribution circuit, and a receiving interface, wherein the second bus interface is connected to the plurality of second cache queues respectively, the plurality of fourth buffer queues are connected to the second bus interface respectively, each second processing circuit is connected to the fourth buffer queue corresponding to the second processing circuit, the distribution circuit is connected to the plurality of second processing circuits respectively, and the receiving interface is connected to the distribution circuit;
the second unidirectional transmission card receives a transmission signal from an optical fiber and converts the transmission signal into a data frame, and the method comprises the following steps:
the receiving interface receives the transmission signal;
the distribution circuit distributes the transmission signal to the plurality of second processing circuits;
each second processing circuit converts the transmission signal distributed to the second processing circuit into a data frame and stores the data frame in the fourth buffer queue corresponding to the second processing circuit; wherein the data frame comprises a message, an identifier of a core, and a label;
and the second bus interface sequentially sends out the data frames in the plurality of fourth buffer queues in ascending order of the labels in the data frames.
5. A device for preserving the order of messages, the device being a first host of a message sending end, wherein the first host comprises a first network interface, a multi-core processor consisting of a plurality of first cores, a plurality of first cache queues in one-to-one correspondence with the plurality of first cores, and a first unidirectional transmission card; each first core is connected with the first network interface, each first cache queue is connected with the first core corresponding to the first cache queue, and the first unidirectional transmission card is connected with the plurality of first cache queues; characterized in that:
the first network interface is configured to receive a plurality of messages and distribute the plurality of messages to each of the first cores;
each first core is configured to process the message distributed to the first core, encapsulate the processed message and the identifier of the core into a data frame, and store the data frame in the first cache queue corresponding to the first core;
the first host is used for sending the data frames in each first cache queue to the first unidirectional transmission card;
and the first one-way transmission card is used for converting the data frame into a transmission signal and sending the transmission signal to an optical fiber.
6. The apparatus according to claim 5, wherein the first unidirectional transmission card comprises a first bus interface, a plurality of third buffer queues, a plurality of first processing circuits in one-to-one correspondence with the plurality of third buffer queues, a merging circuit, and a sending interface, wherein the first bus interface is connected to the plurality of first cache queues respectively, the plurality of third buffer queues are connected to the first bus interface respectively, each first processing circuit is connected to the third buffer queue corresponding to the first processing circuit, the merging circuit is connected to the plurality of first processing circuits respectively, and the sending interface is connected to the merging circuit;
the first bus interface is used for receiving a plurality of data frames in sequence, setting a label for each data frame according to the order in which the data frames are received, and encapsulating the label in the corresponding data frame;
the first unidirectional transmission card is used for storing a plurality of data frames in the plurality of third buffer queues;
each first processing circuit is configured to read a data frame in a third buffer queue corresponding to the first processing circuit, and convert the data frame into a transmission signal;
the merging circuit is used for merging the transmission signals converted by the plurality of first processing circuits into a queue;
and the sending interface is used for sending out the combined transmission signals in sequence.
7. A message order-preserving device, the device being a second host of a message receiving end, wherein the second host comprises a second network interface, a multi-core processor consisting of a plurality of second cores, a plurality of second cache queues in one-to-one correspondence with the plurality of second cores, and a second unidirectional transmission card; each second core is connected with the second network interface, each second cache queue is connected with the second core corresponding to the second cache queue, and the second unidirectional transmission card is connected with the plurality of second cache queues; characterized in that:
the second unidirectional transmission card is used for receiving a transmission signal from an optical fiber and converting the transmission signal into a data frame; wherein the data frame comprises a message and an identifier of a core;
the second host is used for storing, according to the identifier of the core in the data frame, the message in the data frame in the second cache queue corresponding to the second core identified by the identifier of the core;
the second core is used for reading the message in the second cache queue corresponding to the second core for processing;
and the second network interface is used for sending out the processed message.
8. The apparatus according to claim 7, wherein the second unidirectional transmission card comprises a second bus interface, a plurality of fourth buffer queues, a plurality of second processing circuits in one-to-one correspondence with the plurality of fourth buffer queues, a distribution circuit, and a receiving interface, wherein the second bus interface is connected to the plurality of second cache queues respectively, the plurality of fourth buffer queues are connected to the second bus interface respectively, each second processing circuit is connected to the fourth buffer queue corresponding to the second processing circuit, the distribution circuit is connected to the plurality of second processing circuits respectively, and the receiving interface is connected to the distribution circuit;
the receiving interface is used for receiving the transmission signal;
the distribution circuit is used for distributing the transmission signals to the plurality of second processing circuits;
each second processing circuit is configured to convert the transmission signal distributed to the second processing circuit into a data frame, and store the data frame in the fourth buffer queue corresponding to the second processing circuit; wherein the data frame comprises a message, an identifier of a core, and a label;
and the second bus interface is configured to sequentially send out the data frames in the plurality of fourth buffer queues in ascending order of the labels in the data frames.
9. A message order-preserving system, comprising a first host of a message sending end and a second host of a message receiving end; the first host comprises a first network interface, a multi-core processor consisting of a plurality of first cores, a plurality of first cache queues in one-to-one correspondence with the plurality of first cores, and a first unidirectional transmission card; each first core is connected with the first network interface, each first cache queue is connected with the first core corresponding to the first cache queue, and the first unidirectional transmission card is connected with the plurality of first cache queues; the second host comprises a second network interface, a multi-core processor consisting of a plurality of second cores, a plurality of second cache queues in one-to-one correspondence with the plurality of second cores, and a second unidirectional transmission card; each second core is connected with the second network interface, each second cache queue is connected with the second core corresponding to the second cache queue, and the second unidirectional transmission card is connected with the plurality of second cache queues; characterized in that:
the first network interface is configured to receive a plurality of messages and distribute the plurality of messages to each of the first cores;
each first core is configured to process the message distributed to the first core, encapsulate the processed message and the identifier of the core into a data frame, and store the data frame in the first cache queue corresponding to the first core;
the first host is used for sending the data frames in each first cache queue to the first unidirectional transmission card;
the first unidirectional transmission card is used for converting the data frame into a transmission signal and sending the transmission signal to an optical fiber;
the second unidirectional transmission card is used for receiving a transmission signal from an optical fiber and converting the transmission signal into a data frame; wherein the data frame comprises a message and an identifier of a core;
the second host is used for storing, according to the identifier of the core in the data frame, the message in the data frame in the second cache queue corresponding to the second core identified by the identifier of the core;
the second core is used for reading the message in a second cache queue corresponding to the second core for processing;
and the second network interface is used for sending out the processed message.
10. The system according to claim 9, wherein the first unidirectional transmission card comprises a first bus interface, a plurality of third buffer queues, a plurality of first processing circuits in one-to-one correspondence with the plurality of third buffer queues, a merging circuit, and a sending interface, wherein the first bus interface is connected to the plurality of first cache queues respectively, the plurality of third buffer queues are connected to the first bus interface respectively, each first processing circuit is connected to the third buffer queue corresponding to the first processing circuit, the merging circuit is connected to the plurality of first processing circuits respectively, and the sending interface is connected to the merging circuit; the second unidirectional transmission card comprises a second bus interface, a plurality of fourth buffer queues, a plurality of second processing circuits in one-to-one correspondence with the plurality of fourth buffer queues, a distribution circuit, and a receiving interface, wherein the second bus interface is connected to the plurality of second cache queues respectively, the plurality of fourth buffer queues are connected to the second bus interface respectively, each second processing circuit is connected to the fourth buffer queue corresponding to the second processing circuit, the distribution circuit is connected to the plurality of second processing circuits respectively, and the receiving interface is connected to the distribution circuit;
the first bus interface is used for receiving a plurality of data frames in sequence, setting a label for each data frame according to the order in which the data frames are received, and encapsulating the label in the corresponding data frame;
the first unidirectional transmission card is used for storing a plurality of data frames in the plurality of third buffer queues;
each first processing circuit is configured to read a data frame in a third buffer queue corresponding to the first processing circuit, and convert the data frame into a transmission signal;
the merging circuit is used for merging the transmission signals converted by the plurality of first processing circuits into a queue;
the sending interface is used for sequentially sending out the combined transmission signals;
the receiving interface is used for receiving the transmission signal;
the distribution circuit is used for distributing the transmission signals to the plurality of second processing circuits;
each second processing circuit is configured to convert the transmission signal distributed to the second processing circuit into a data frame, and store the data frame in the fourth buffer queue corresponding to the second processing circuit; wherein the data frame comprises a message, an identifier of a core, and a label;
and the second bus interface is configured to sequentially send out the data frames in the plurality of fourth buffer queues in ascending order of the labels in the data frames.
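Read together, claims 1 to 4 describe one end-to-end path; the following self-contained Go sketch walks a few messages through it (per-core framing, labelling at the sender's bus interface, an artificially shuffled delivery, label-ordered reassembly, and per-core dispatch at the receiver). It is a toy model under assumed names and data layouts, not the claimed hardware.

package main

import (
	"fmt"
	"sort"
)

// Frame carries a processed message plus the core identifier and the
// arrival-order label; all names are illustrative assumptions.
type Frame struct {
	Label   uint64
	CoreID  int
	Payload string
}

func main() {
	// Sending end: messages are spread over two first cores; each core
	// wraps its messages, in order, into frames carrying its identifier.
	firstQueues := [][]Frame{
		{{CoreID: 0, Payload: "a1"}, {CoreID: 0, Payload: "a2"}},
		{{CoreID: 1, Payload: "b1"}, {CoreID: 1, Payload: "b2"}},
	}

	// First bus interface: read the first cache queues and stamp frames
	// with labels in the order they are taken off the host.
	var wire []Frame
	var label uint64
	for i := 0; len(firstQueues[0]) > 0 || len(firstQueues[1]) > 0; i = (i + 1) % 2 {
		if len(firstQueues[i]) == 0 {
			continue
		}
		f := firstQueues[i][0]
		firstQueues[i] = firstQueues[i][1:]
		f.Label = label
		label++
		wire = append(wire, f)
	}

	// The link may deliver via parallel circuits; emulate a shuffle, then
	// the second bus interface restores label order.
	received := []Frame{wire[3], wire[0], wire[2], wire[1]}
	sort.Slice(received, func(i, j int) bool { return received[i].Label < received[j].Label })

	// Receiving end: dispatch each frame to the second cache queue of the
	// core named in the frame; each second core then drains its own queue.
	secondQueues := map[int][]Frame{}
	for _, f := range received {
		secondQueues[f.CoreID] = append(secondQueues[f.CoreID], f)
	}
	for core := 0; core < 2; core++ {
		for _, f := range secondQueues[core] {
			fmt.Printf("core %d forwards %s (label %d)\n", core, f.Payload, f.Label)
		}
	}
}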
CN201911110264.1A 2019-11-14 2019-11-14 Message order preserving method, device and system Active CN110830386B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911110264.1A CN110830386B (en) 2019-11-14 2019-11-14 Message order preserving method, device and system

Publications (2)

Publication Number Publication Date
CN110830386A 2020-02-21
CN110830386B (en) 2023-06-30

Family

ID=69554853

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911110264.1A Active CN110830386B (en) 2019-11-14 2019-11-14 Message order preserving method, device and system

Country Status (1)

Country Link
CN (1) CN110830386B (en)

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20070121659A1 (en) * 2005-11-30 2007-05-31 Broadcom Corporation Ring-based cache coherent bus
CN1996958A (en) * 2006-12-30 2007-07-11 华为技术有限公司 Method and device for guaranteeing message sequence
CN102055649A (en) * 2009-10-29 2011-05-11 成都市华为赛门铁克科技有限公司 Method, device and system for treating messages of multi-core system
JP2011237929A (en) * 2010-05-07 2011-11-24 Toyota Motor Corp Multi-core processor
CN103067304A (en) * 2012-12-27 2013-04-24 华为技术有限公司 Method and device of message order-preserving
CN104468401A (en) * 2014-11-20 2015-03-25 华为技术有限公司 Message processing method and device
CN109327405A (en) * 2017-07-31 2019-02-12 迈普通信技术股份有限公司 Message order-preserving method and the network equipment
CN109246021A (en) * 2018-09-18 2019-01-18 武汉海晟科讯科技有限公司 A kind of Point-to-Point Data reliable transmission system and method based on FPGA

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
李杰 (Li Jie): "TCP Data Transmission Offloading Based on a Multi-core NPU" *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2022199558A1 (en) * 2021-03-22 2022-09-29 华为技术有限公司 Data transmission method, and related apparatus and device
CN114363370A (en) * 2021-12-29 2022-04-15 中汽创智科技有限公司 Vehicle-mounted equipment communication method, device and system and vehicle
CN114363370B (en) * 2021-12-29 2023-12-26 中汽创智科技有限公司 Vehicle-mounted device communication method, device and system and vehicle

Also Published As

Publication number Publication date
CN110830386B (en) 2023-06-30

Similar Documents

Publication Publication Date Title
US5828835A (en) High throughput message passing process using latency and reliability classes
US10868767B2 (en) Data transmission method and apparatus in optoelectronic hybrid network
JP5828966B2 (en) Method, apparatus, system, and storage medium for realizing packet transmission in a PCIE switching network
CN113728596A (en) System and method for facilitating efficient management of idempotent operations in a Network Interface Controller (NIC)
US20150229568A1 (en) Stateless Fibre Channel Sequence Acceleration for Fibre Channel Traffic Over Ethernet
US7924708B2 (en) Method and apparatus for flow control initialization
CN101894092B (en) Multi-core CPU and inter-core communication method thereof
US11095626B2 (en) Secure in-line received network packet processing
US10225196B2 (en) Apparatus, system and method for controlling packet data flow
CN110830386B (en) Message order preserving method, device and system
CN109684269A (en) A kind of PCIE exchange chip kernel and working method
CN104378161A (en) FCoE protocol acceleration engine IP core based on AXI4 bus formwork
CN109951750A (en) Data processing method and system based on mono- layer of cross architecture of FlexE
US20120102149A1 (en) Inter-Board Communication Apparatus, Method for Transmitting and Receiving Message of Inter-Board Communication
CN109286564B (en) Message forwarding method and device
US11038856B2 (en) Secure in-line network packet transmittal
US20150199298A1 (en) Storage and network interface memory share
CN116471242A (en) RDMA-based transmitting end, RDMA-based receiving end, data transmission system and data transmission method
CN113051212B (en) Graphics processor, data transmission method, data transmission device, electronic equipment and storage medium
WO2019015487A1 (en) Data retransmission method, rlc entity and mac entity
CN110928828B (en) Inter-processor service processing system
CN108614786B (en) Channel management circuit based on message service type
CN103118023B (en) A kind of method and system of the data of transmission specification in a network
CN108880738B (en) Information processing method and device
Chiola et al. Porting MPICH ADI on GAMMA with flow control

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant