CN113691459A - Data transmission method and device based on identification message

Publication number: CN113691459A
Application number: CN202010427447.2A
Authority: CN (China)
Other languages: Chinese (zh)
Prior art keywords: data stream, data, unit, identification, units
Inventors: 路小刚, 高红亮, 李东锋, 涂伯颜
Assignee (original and current): Huawei Technologies Co Ltd
Priority: CN202010427447.2A
Legal status: Pending

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 47/00: Traffic control in data switching networks
    • H04L 47/10: Flow control; Congestion control
    • H04L 47/12: Avoiding congestion; Recovering from congestion
    • H04L 47/125: Avoiding congestion; Recovering from congestion by balancing the load, e.g. traffic engineering
    • H04L 47/31: Flow control; Congestion control by tagging of packets, e.g. using discard eligibility [DE] bits

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

The application discloses a data transmission method and device based on identification packets. The method includes: a sending end generates a plurality of data flow units according to sending data, where each data flow unit includes one or more identification packets, and the identification packets mark the packets in the data flow unit that could otherwise produce a transmission timing error at the receiving end; the sending end sends the plurality of data flow units by load sharing. The receiving end receives the plurality of data flow units and, according to the identification packets, acquires the sending end's data with the data flow units in the correct order. By means of the identification packets, the embodiments of the application resolve the packet-reordering problem that load sharing may introduce during data transmission and improve data transmission efficiency.

Description

Data transmission method and device based on identification message
Technical Field
The present application relates to the field of communications technologies, and in particular, to a data transmission method and apparatus based on an identification packet.
Background
When transmission of a data flow (flow) is governed by the Transmission Control Protocol (TCP), the sending algorithm leaves a certain time interval after each batch of packets is sent. This interval can be used to switch the load-sharing link without causing packets to arrive out of order. Load sharing means that the data flows in a network are distributed over multiple paths for transmission, so as to expand network bandwidth, increase throughput, and improve network flexibility and availability. A load-sharing link is a communication link between the sending end and the receiving end that carries traffic under load sharing. Load sharing based on the above principle is called flowlet load sharing, where a flowlet is a small burst of a data flow.
During flow transmission, the link delays of the load-sharing links may differ, so a link delay difference T may exist between the load-sharing links. Because of the TCP transmission mechanism, a time interval may exist between two consecutive packets of the same data flow. If that interval is greater than T, the flow is split at that point and multipath load sharing is performed at flowlet granularity, so strict ordering can be preserved at the receiving end. However, this passive flow-splitting method cannot control how much traffic is assigned to a given path, so load sharing becomes unbalanced. A method of dividing the flow into data flow units (flowcells) has therefore been proposed for load sharing, but it does not take the link delay difference into account, so packets may arrive at the receiving end out of order. This creates a reordering problem at the receiving end, the reordering pressure is high, and transmission efficiency is low.
Disclosure of Invention
The embodiments of the application provide a data transmission method and device based on identification packets, which solve the packet timing (out-of-order) errors that load sharing may cause during data transmission and improve data transmission efficiency.
In a first aspect, a data transmission method based on an identification packet is provided, where the method includes:
generating a plurality of data flow units according to sending data, where each data flow unit includes one or more identification packets, and the identification packets mark the packets in the data flow unit that could otherwise produce a transmission timing error at a receiving end;
and transmitting the plurality of data stream units through load sharing.
In one possible design, the start packet of the identification packets is the first packet transmitted in the data flow unit.
In one possible design, the plurality of data stream units are generated by dividing the transmission data according to a first data stream unit length.
In one possible design, the multiple data stream units are generated by dividing a first data streamlet according to a first data stream unit length, and the first data streamlet is a data streamlet with a length greater than or equal to a first preset length in the sending data.
In one possible design, the first data flow unit length includes a packet duration, a packet byte length, or a packet number of the first data flow unit.
In one possible design, the method further includes:
and acquiring time delay differences among a plurality of transmission links corresponding to the plurality of data stream units, and determining the length of the identification message according to the time delay differences, wherein the transmission links are transmission links corresponding to the data stream units which are transmitted by load sharing.
In a possible design, the determining the length of the identification packet according to the delay difference includes:
and determining the length of the identification message according to the maximum time delay difference among the plurality of transmission links.
In a possible design, the determining the length of the identification packet according to the delay difference includes:
determining a first transmission link and a second transmission link corresponding to adjacent data stream units in the plurality of data stream units, wherein the adjacent data stream units include a first data stream unit and a second data stream unit, and the second data stream unit is a data stream unit sent after the first data stream unit; and determining the length of the identification message of the second data stream unit according to the time delay difference between the first transmission link and the second transmission link.
In a possible design, the data stream units corresponding to a plurality of pieces of sending data are staggered in order, so that the identification packets of different data stream units reach the receiving end at different times.
In a second aspect, a data transmission method based on an identification packet is provided, where the method includes:
receiving a plurality of data stream units, where each data stream unit includes one or more identification packets, and the identification packets mark the packets in the data stream unit that could otherwise produce a transmission timing error at a receiving end;
and acquiring sending data of a sending end according to the identification message, wherein the sending data comprises a plurality of data flow units with correct time sequence.
In one possible design, the start packet of the identification packets is the first packet transmitted in the data flow unit.
In a possible design, acquiring the plurality of data stream units in order according to the identification packets includes: when an identification packet of a third data stream unit of the plurality of data stream units is received, buffering the identification packet; and when a non-identification packet of the third data stream unit is received, acquiring the third data stream unit starting from the start packet of its identification packets.
In a possible design, acquiring the plurality of data stream units in order according to the identification packets includes: when an identification packet of a fourth data stream unit of the plurality of data stream units is received, buffering the identification packet; and when a tail identification packet of a fifth data stream unit of the plurality of data stream units is received, acquiring the fourth data stream unit starting from the start packet of its identification packets, where the fifth data stream unit is the adjacent data stream unit sent before the fourth data stream unit.
In a third aspect, a communication device is provided, the device comprising a processing unit and a transmitting unit, wherein,
the processing unit is configured to generate a plurality of data flow units according to sending data, where each data flow unit includes one or more identification packets, and the identification packets mark the packets in the data flow unit that could otherwise produce a transmission timing error at a receiving end; and the sending unit is configured to send the plurality of data flow units by load sharing.
In a possible design, the apparatus further includes a receiving unit, configured to obtain a delay difference between multiple transmission links corresponding to the multiple data stream units;
the processing unit is further to: and determining the length of the identification message according to the time delay difference, wherein the transmission link is a transmission link corresponding to the data stream unit which is transmitted by load sharing.
In one possible design, the processing unit is specifically configured to: and determining the length of the identification message according to the maximum time delay difference among the plurality of transmission links.
In one possible design, the processing unit is specifically configured to: determining a first transmission link and a second transmission link corresponding to adjacent data stream units in the plurality of data stream units, wherein the adjacent data stream units include a first data stream unit and a second data stream unit, and the second data stream unit is a data stream unit sent after the first data stream unit;
and determining the length of the identification message of the second data stream unit according to the time delay difference between the first transmission link and the second transmission link.
In one possible design, the processing unit is further configured to: stagger the data stream units corresponding to a plurality of pieces of sending data, so that the identification packets of different data stream units reach the receiving end at different times.
In a fourth aspect, a communication apparatus is provided, the apparatus comprising a receiving unit and a processing unit, wherein,
the receiving unit is configured to receive a plurality of data stream units, where each data stream unit includes one or more identification packets, and the identification packets mark the packets in the data stream unit that could otherwise produce a transmission timing error at a receiving end;
and the processing unit is used for acquiring the sending data of the sending end according to the identification message, wherein the sending data comprises a plurality of data flow units with correct time sequence.
In one possible design, the processing unit is specifically configured to:
when an identification packet of a third data stream unit of the plurality of data stream units is received, buffer the identification packet; and when a non-identification packet of the third data stream unit is received, acquire the third data stream unit starting from the start packet of its identification packets.
In one possible design, the processing unit is specifically configured to:
when an identification packet of a fourth data stream unit of the plurality of data stream units is received, buffer the identification packet; and when a tail identification packet of a fifth data stream unit of the plurality of data stream units is received, acquire the fourth data stream unit starting from the start packet of its identification packets, where the fifth data stream unit is the adjacent data stream unit sent before the fourth data stream unit.
In a fifth aspect, a communications apparatus is provided, the apparatus comprising at least one processor coupled with at least one memory:
the at least one processor configured to execute computer programs or instructions stored in the at least one memory to cause the apparatus to perform the method of any of the first aspect or the first aspect; or cause the apparatus to perform the method of any of the second aspect or the second aspect.
The apparatus may be a router or a chip included in a router. The functions of the communication apparatus may be implemented by hardware, or by hardware executing corresponding software, where the hardware or software includes one or more modules corresponding to the functions.
In a sixth aspect, an embodiment of the present application provides a chip system, including: a processor coupled to a memory for storing a program or instructions that, when executed by the processor, cause the system-on-chip to implement the method of the first aspect or any of the possible implementations of the first aspect or the method of any of the possible implementations of the second aspect or the second aspect.
Optionally, the system-on-chip further comprises an interface circuit for interacting code instructions to the processor.
Optionally, the number of processors in the chip system may be one or more, and the processors may be implemented by hardware or software. When implemented in hardware, the processor may be a logic circuit, an integrated circuit, or the like. When implemented in software, the processor may be a general-purpose processor implemented by reading software code stored in a memory.
Optionally, there may be one or more memories in the chip system. The memory may be integrated with the processor or separate from the processor, which is not limited in this application. For example, the memory may be a non-transitory memory, such as a read-only memory (ROM), which may be integrated with the processor on the same chip or disposed on a different chip; the type of the memory and the arrangement of the memory and the processor are not particularly limited in this application.
In a seventh aspect, the present application provides a computer-readable storage medium, on which a computer program or instructions are stored, which, when executed, cause a computer to perform the method of the first aspect or any one of the possible implementations of the first aspect, or the second aspect or any one of the possible implementations of the second aspect.
In an eighth aspect, an embodiment of the present application provides a computer program product, which, when read and executed by a computer, causes the computer to perform the method in the first aspect or any one of the possible implementations of the first aspect, or perform the method in the second aspect or any one of the possible implementations of the second aspect.
In a ninth aspect, an embodiment of the present application provides a communication system, which includes the communication apparatus in the third aspect and the communication apparatus in the fourth aspect.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings required for the embodiments will be briefly described below.
Fig. 1 is a schematic diagram of an outwardly-extended routing architecture according to an embodiment of the present application;
fig. 2 is a schematic diagram of flow load sharing according to an embodiment of the present application;
fig. 3A is a flowchart of a data transmission method based on an identification packet according to an embodiment of the present application;
fig. 3B is a schematic diagram of a partitioned data stream unit according to an embodiment of the present application;
fig. 3C is a schematic diagram of another partitioned data stream unit provided in the embodiment of the present application;
fig. 3D is a schematic diagram of a partitioned data stream unit according to an embodiment of the present application;
FIG. 3E is a schematic diagram of a data flow unit identifier according to an embodiment of the present invention;
fig. 3F is a schematic diagram of processing an identification packet by a receiving end according to an embodiment of the present application;
fig. 3G is a schematic diagram of another receiving end processing an identification packet according to the embodiment of the present application;
fig. 3H is a schematic diagram of a transmitting end corresponding to multiple pieces of transmitted data according to an embodiment of the present application;
fig. 3I is a schematic diagram of an implementation process of a data transmission method based on an identification packet according to an embodiment of the present application;
fig. 4 is a schematic diagram of a communication device according to an embodiment of the present application;
fig. 5 is a schematic diagram of a communication device according to an embodiment of the present application;
fig. 6 is a schematic structural diagram of a communication device according to an embodiment of the present application.
Detailed Description
The terms "first," "second," "third," and "fourth," etc. in the description and claims of this application and in the accompanying drawings are used for distinguishing between different objects and not for describing a particular order. Furthermore, the terms "include" and "have," as well as any variations thereof, are intended to cover non-exclusive inclusions. For example, a process, method, system, article, or apparatus that comprises a list of steps or modules is not limited to the listed steps or modules but may alternatively include other steps or modules not listed or inherent to such process, method, article, or apparatus.
Reference herein to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment can be included in at least one embodiment of the application. The appearances of the phrase in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. It is explicitly and implicitly understood by one skilled in the art that the embodiments described herein can be combined with other embodiments.
"plurality" means two or more. "and/or" describes the association relationship of the associated objects, meaning that there may be three relationships, e.g., a and/or B, which may mean: a exists alone, A and B exist simultaneously, and B exists alone. The character "/" generally indicates that the former and latter associated objects are in an "or" relationship.
For the understanding of the embodiment of the present application, the system architecture of the embodiment of the present application is first described with reference to fig. 1.
Fig. 1 is a schematic diagram of a scale-out routing architecture provided in an embodiment of the present application. As shown in fig. 1, a router whose overall switching capacity is easy to expand by combining devices in a Clos architecture is referred to as a scale-out router. In the scale-out router architecture, a plurality of routers are combined and used as one router, and the routers exchange packets over standard Ethernet. To improve the throughput of a scale-out router, a more uniform load-sharing approach is needed.
Packet-by-packet load sharing distributes individual packets over multiple links. However, because of the delay differences between the links, all packets arriving at the receiving end may be out of order, every packet may need to be reordered, and the reordering pressure at the receiving end is high.
On this basis, flow load sharing has been proposed. Referring to fig. 2, fig. 2 is a schematic diagram of flow load sharing provided in an embodiment of the present application. As shown in fig. 2, a Clos architecture includes a plurality of routing nodes; the transmission delay of communication link 1 between node 1 and node 2 is T1, the transmission delay of communication link 2 between node 1 and node 2 is T2, and the link delay difference is T = T2 - T1. If the time interval between two consecutive packets of the same flow is greater than or equal to T, the flow is split at that point, multipath load sharing is performed at flowlet granularity, and strict ordering can be preserved at the receiving end. However, this passive flow-splitting method cannot control how much traffic is assigned to a given path, so load sharing becomes unbalanced.
Flowcell load sharing has further been proposed: a server network card treats a TCP segment (63 KB) as a flowcell, so one flowcell contains 43 packets of 1500 B each; the intermediate network hashes each flowcell onto one of multiple tree-shaped network structures and performs load sharing per flowcell, which is as uniform as packet-by-packet load sharing. However, as with packet-by-packet load sharing, the receiving end still has to reorder packets, and the reordering pressure is not effectively controlled.
Based on this, please refer to fig. 3A, fig. 3A is a flowchart of a data transmission method based on an identification packet according to an embodiment of the present application, and as shown in fig. 3A, the method includes the following steps:
101. A sending end generates a plurality of data flow units according to sending data, where each data flow unit includes one or more identification packets, and the identification packets mark the packets in the data flow unit that could otherwise produce a transmission timing error at a receiving end.
102. The sending end sends the plurality of data flow units by load sharing.
103. A receiving end receives the plurality of data flow units, where each data flow unit includes one or more identification packets, and the identification packets mark the packets in the data flow unit that could otherwise produce a transmission timing error at the receiving end.
104. The receiving end acquires the sending data of the sending end according to the identification packets, where the sending data includes the plurality of data flow units in the correct order.
The sending end and the receiving end represent routing nodes for communication, wherein the sending end can represent an initial node for generating a message, can also represent an intermediate node for receiving and forwarding the message, and the receiving end can represent a terminal node for receiving the message and using data to perform service processing, and can also represent an intermediate node for receiving and forwarding the message.
For the sending end's data, the minimum transmission unit at the network layer is the data packet, while the minimum unit at the transport layer is the data message, so the plurality of data flow units generated from the sending data may consist of data packets or data messages. The following description takes data messages (packets) as an example.
Referring to fig. 3B, fig. 3B is a schematic diagram of dividing data flow units according to an embodiment of the present application. As shown in fig. 3B, the sending data is transmitted as a data stream, and the packets in the data stream are divided into a plurality of data flow units according to a first data flow unit length; that is, each divided data flow unit has a length equal to the first data flow unit length, and if the remaining packets at the end of the data stream are shorter than the first data flow unit length, they form the last data flow unit. For example, if the first data flow unit length is M, dividing the data stream by this length yields data flow units of lengths (M, M, ..., M'), where M' <= M. The first data flow unit length can be chosen according to the link delay difference: by controlling the ratio of the data flow unit length to the link delay difference, the proportion of packets that need reordering to packets that do not can be kept within the processing capability of the receiving end.
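As an illustrative sketch only (not part of the claimed implementation), the following Python code divides a packet stream into data flow units of a given byte length; the last unit may be shorter. The function and variable names (split_into_flowcells, unit_bytes) are hypothetical.

```python
def split_into_flowcells(packets, unit_bytes):
    """Divide a list of (packet_id, length_in_bytes) tuples into data flow
    units (flowcells) of roughly unit_bytes each; the last one may be shorter."""
    flowcells = []
    current, current_bytes = [], 0
    for pkt_id, length in packets:
        current.append(pkt_id)
        current_bytes += length
        if current_bytes >= unit_bytes:
            flowcells.append(current)
            current, current_bytes = [], 0
    if current:                      # remaining packets form the last, shorter unit
        flowcells.append(current)
    return flowcells

# Example: 1500-byte packets cut into roughly 63 KB flowcells
packets = [(i, 1500) for i in range(100)]
cells = split_into_flowcells(packets, 63 * 1000)
print([len(c) for c in cells])       # packet count per flowcell; the last is smaller
```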
Data flow units may be divided either per data stream or per flowlet. Referring to fig. 3C: as shown in (a) of fig. 3C, division is performed per data stream, and the packets of data stream 1 are divided into a plurality of data flow units according to the first data flow unit length or a preset number of data flow units, regardless of the intervals between packets. Dividing all packets of the data stream into data flow units improves the load-sharing balance on each link. Alternatively, as shown in (b) of fig. 3C, division is performed per flowlet: in data stream 1, when the interval between packets is greater than the link delay difference T, flowlet 1 and flowlet 2 are formed. Flowlet 1 is short and is not divided into data flow units; flowlet 2 is longer than or equal to the first preset length, and sending it directly by load sharing could unbalance the load, so flowlet 2 is divided into flowcells. Dividing only the flowlets whose length is greater than or equal to the first preset length reduces the number of division operations while still achieving balanced load sharing, which improves data processing efficiency. A sketch of this flowlet-based division is given below.
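As a hedged illustration (the helper names split_into_flowlets and flowcells_from_flowlets are assumptions, not taken from the patent), the sketch below first detects flowlets from inter-packet gaps larger than the link delay difference T, then divides only sufficiently long flowlets into flowcells of a fixed packet count:

```python
def split_into_flowlets(packets, link_delay_diff_us):
    """Group (timestamp_us, payload) pairs into flowlets: a new flowlet starts
    whenever the gap to the previous packet exceeds the link delay difference T."""
    flowlets, current, prev_ts = [], [], None
    for ts, payload in packets:
        if prev_ts is not None and ts - prev_ts > link_delay_diff_us:
            flowlets.append(current)
            current = []
        current.append((ts, payload))
        prev_ts = ts
    if current:
        flowlets.append(current)
    return flowlets

def flowcells_from_flowlets(flowlets, min_flowlet_pkts, unit_pkts):
    """Only flowlets with at least min_flowlet_pkts packets are further divided
    into flowcells of unit_pkts packets; shorter flowlets are sent as-is."""
    units = []
    for fl in flowlets:
        if len(fl) < min_flowlet_pkts:
            units.append(fl)
        else:
            units.extend(fl[i:i + unit_pkts] for i in range(0, len(fl), unit_pkts))
    return units
```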
In addition, the data stream may be a natural flow whose packets share the same five-tuple (source IP address, source port, destination IP address, destination port, and transport-layer protocol), for example the data stream of a single service. In a communication system that carries many services simultaneously, however, the number of natural flows is large, and dividing each natural flow into its own data flow units would incur considerable overhead. Therefore, a plurality of natural flows can be aggregated into an aggregate flow, and the aggregate flow is then treated as one data stream for data flow unit division and subsequent processing. The aggregate flow can be generated by hashing the five-tuple to a value and grouping the natural flows that share the same value into one aggregate flow, as sketched below.
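A minimal sketch of such five-tuple hashing, assuming a fixed bucket count (the function name aggregate_flow_id and the use of SHA-256 are illustrative choices, not specified by the patent):

```python
import hashlib

def aggregate_flow_id(src_ip, src_port, dst_ip, dst_port, proto, num_buckets=64):
    """Map a five-tuple to one of num_buckets aggregate flows. Natural flows that
    hash to the same value share an aggregate flow, so data flow unit division is
    done per aggregate flow instead of per natural flow."""
    key = f"{src_ip}:{src_port}-{dst_ip}:{dst_port}-{proto}".encode()
    return int.from_bytes(hashlib.sha256(key).digest()[:4], "big") % num_buckets

print(aggregate_flow_id("10.0.0.1", 4321, "10.0.0.2", 80, "tcp"))  # bucket index 0..63
```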
Referring to fig. 3D, fig. 3D is a schematic diagram of dividing data flow units according to this embodiment. As shown in (a) of fig. 3D, dividing by the first data flow unit length may mean dividing by packet duration: flowcell ID1 and flowcell ID2 are two data flow units with the same packet duration, for example both T'. As shown in (b) of fig. 3D, it may mean dividing by packet byte length: flowcell ID1 and flowcell ID2 are two data flow units with the same byte length, for example both 30K. As shown in (c) of fig. 3D, it may mean dividing by packet count: flowcell ID1 and flowcell ID2 are two data flow units with the same number of packets, for example 16 packets each.
Referring to fig. 3E, fig. 3E is a schematic diagram of data flow unit identification provided by an embodiment of the present invention. As shown in (a) of fig. 3E, the flowcell ID that uniquely identifies each data flow unit may be a continuously increasing label, for example 1, 2, 3, ... or 001, 002, 003, ...; alternatively, as shown in (b) of fig. 3E, the flowcell ID may be a cyclic label, for example 1, 2, 3, 1, 2, 3, ... or A, B, C, A, B, C, .... In this embodiment, because the length of the identification packets is determined by the link delay difference T, each divided flowcell can be made longer than T; this keeps the share of identification packets in each data flow unit small, so the receiving end does not incur a large packet-buffering overhead, receives at most two flowcells at the same time, and can therefore distinguish neighbouring flowcells with a cycle of only three label values.
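For illustration, a cyclic ID assignment might look like the following sketch (the helper assign_cyclic_ids is a hypothetical name; the patent only requires that neighbouring flowcells be distinguishable):

```python
from itertools import cycle

def assign_cyclic_ids(flowcells, labels=(1, 2, 3)):
    """Tag each flowcell with a cyclic ID. Because the receiving end holds at
    most two flowcells at a time, three labels suffice to tell neighbours apart."""
    ids = cycle(labels)
    return [(next(ids), cell) for cell in flowcells]

tagged = assign_cyclic_ids([["p1", "p2"], ["p3", "p4"], ["p5"], ["p6", "p7"]])
print([fid for fid, _ in tagged])    # [1, 2, 3, 1]
```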
The flowcells obtained by division are transmitted over different paths, forming load sharing. Assume flowcell ID1 and flowcell ID2 are sent over the first link and the second link respectively, with flowcell ID1 sent before flowcell ID2, and that the link delay difference between the first link and the second link is T = T1 - T2 = 100 μs (microseconds), i.e. the first link's delay is 100 μs greater than the second link's. Then up to 100 μs worth of packets at the head of the later-sent flowcell ID2 may reach the receiving end at the same time as packets of flowcell ID1, causing packet misordering (flowcell ID1 and flowcell ID2 may contain blank sections, in which case the span of misordered packets is smaller than the link delay difference).
In this case, when the receiving end receives flowcell ID2 and finds that its packets have a timing error, it reorders all the packets of flowcell ID2. However, only the leading packets of flowcell ID2 actually have a timing error, and reordering every packet of flowcell ID2 wastes computing resources and lowers data transmission efficiency. In the embodiments of the present application, the sending end therefore identifies the packets of flowcell ID2 that may produce a timing error at the receiving end, and does not identify the other packets, which cannot produce a timing error. The sending end then sends the data flow unit including the identification packets, and the receiving end acquires the data flow unit according to the identification packets it receives.
Optionally, the span of packets that may produce a timing error can be determined from the link delay difference, that is, the length of the identification packets can be determined from the link delay difference. One way is to determine the length of the identification packets from the maximum delay difference among the transmission links. For example, in a Clos network with 5 communication links, if the longest link delay is t3 and the shortest is t4, the length of the identification packets in every flowcell can be set to the maximum link delay difference t3 - t4. This guarantees that the identification packets in each flowcell are not shorter than the packets that may produce a timing error (i.e. the delay difference between any two transmission links), so the reordering of timing-error packets can always be completed.
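A one-line sketch of this conservative choice (link delays in microseconds; identification_span_max is a hypothetical name):

```python
def identification_span_max(link_delays_us):
    """Conservative choice: use the maximum delay difference among all
    load-sharing links as the identification-packet span (in microseconds)."""
    return max(link_delays_us) - min(link_delays_us)

print(identification_span_max([120, 80, 100, 95, 60]))   # 60, i.e. t3 - t4 = 120 - 60
```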
Or the length of the identification packet of the second data stream unit can be determined according to the delay difference between the first transmission link and the second transmission link, wherein the first data stream unit is transmitted in the first transmission link, the second data stream unit is transmitted in the second transmission link, and the second data stream unit is sent after the first data stream unit. For example, in fig. 3B, if the sender has determined that the first link is used to send flowcell ID1 and the second link is used to send flowcell ID2, the length of the identification packet in the flowcell ID2 may be determined according to the link delay difference between the first link and the second link.
Optionally, if the front or the tail of a data flow unit contains no packets, for example a blank section or an interval between packets, then even if the front of the later-sent data flow unit and the tail of the earlier-sent one reach the receiving end at the same time, the earlier unit is not disturbed and the later unit suffers no packet timing error. Therefore, if the packet-free period at the front of the later-sent data flow unit is longer than the period during which it overlaps the earlier-sent unit at the receiving end, the later unit need not include any identification packets. For example, suppose the tail of flowcell ID3 reaches the receiving end at time T1 and the start of flowcell ID4 reaches the receiving end at time T2, where T2 is earlier than T1. If the difference T' between T1 and T2 is less than or equal to the packet-free period Ts at the front of flowcell ID4, then flowcell ID4 includes no identification packets; if T' is greater than Ts, the length of the identification packets in flowcell ID4 is T' - Ts.
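A small sketch of this rule, assuming times are measured in microseconds (identification_span is a hypothetical name):

```python
def identification_span(overlap_us, leading_gap_us):
    """If the packet-free period Ts at the front of the later flowcell covers the
    overlap T' caused by the delay difference, no packets need to be identified;
    otherwise identify the first (T' - Ts) worth of packets."""
    return max(0, overlap_us - leading_gap_us)

print(identification_span(100, 30))   # 70: identify the first 70 microseconds of packets
print(identification_span(100, 120))  # 0: no identification packets needed
```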
After determining the length of the identification packets in each flowcell, the sending end sends the flowcells, including their identification packets, on the corresponding links. The receiving end receives the flowcells and processes them according to the identification packets.
Specifically, flowcell ID2 is sent after flowcell ID1, and because the link delay for flowcell ID2 is smaller than that for flowcell ID1, part of flowcell ID2 may reach the receiving end before the reception of flowcell ID1 is complete; the sending end has already identified exactly that part, i.e. the identification packets. Referring to fig. 3F, fig. 3F is a schematic diagram of a receiving end processing identification packets according to an embodiment of the present application. As shown in fig. 3F, while receiving flowcells, if the receiving end receives an identification packet of flowcell ID2, then, to avoid a possible timing error (packet misordering), it buffers the identification packets until a non-identification packet arrives, and then acquires flowcell ID2 starting from the start packet of the identification packets. While buffering the identification packets, the receiving end continues to receive the packets of flowcell ID1. Because the identification packets are exactly the packets that may suffer a timing error, once all of them are buffered and a non-identification packet is received, flowcell ID1 must have been fully sent or delivered, so the receiving end can begin acquiring flowcell ID2 from the start packet of its identification packets, that is, from the start packet of flowcell ID2. Since the identification packets are not acquired at the same time as the packets of flowcell ID1, no packet misordering occurs and the receiving end does not need to reorder packets.
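A hedged sketch of this receive-side behaviour (the generator protocol and the tuple layout (flowcell_id, identified, payload) are assumptions made for illustration):

```python
def receive_with_identification(incoming):
    """Sketch of the fig. 3F behaviour: identified packets of a flowcell are
    buffered; the first non-identified packet of that flowcell flushes the buffer,
    so the flowcell is delivered from its start packet without any reordering.
    `incoming` yields (flowcell_id, identified, payload) tuples in arrival order."""
    buffered = {}            # flowcell_id -> identified payloads held back
    delivered = []
    for cell_id, identified, payload in incoming:
        if identified:
            buffered.setdefault(cell_id, []).append(payload)
        else:
            delivered.extend(buffered.pop(cell_id, []))   # flush from the start packet
            delivered.append(payload)
    return delivered

# flowcell 2's identified head arrives interleaved with the tail of flowcell 1
arrivals = [(1, False, "c1-p9"), (2, True, "c2-p1"), (1, False, "c1-p10"),
            (2, True, "c2-p2"), (2, False, "c2-p3")]
print(receive_with_identification(arrivals))
# ['c1-p9', 'c1-p10', 'c2-p1', 'c2-p2', 'c2-p3']
```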
It can be seen that, in this embodiment, the sending end sends a data flow unit that includes identification packets which may produce a timing error at the receiving end; the receiving end receives the data flow unit and buffers the identification packets, ends the buffering once all identification packets have been buffered, and then acquires the data flow unit from the start packet of the identification packets. Moreover, because the data flow unit is acquired from the start packet of the identification packets as soon as the buffering is known to be complete, the data flow unit is acquired without unnecessary delay.
Optionally, a tail identifier may be added to the data flow unit that is sent first, and the end of buffering of the identification packets of the adjacent, later-sent data flow unit is then determined from that tail identifier. Referring to fig. 3G, fig. 3G is a schematic diagram of another receiving end processing identification packets according to an embodiment of the present application. As shown in fig. 3G, while receiving flowcell ID1 and flowcell ID2, if the receiving end receives an identification packet of flowcell ID2, then, to avoid a possible timing error, it buffers the identification packets of flowcell ID2 until it determines that the tail identifier of flowcell ID1 has been received, which indicates that all packets of flowcell ID1 have been received, and then acquires flowcell ID2 starting from the start packet of the identification packets. While buffering the identification packets, the receiving end continues to receive the packets of flowcell ID1. Because the identification packets are exactly the packets that may suffer a timing error, once the tail identifier of flowcell ID1 has been obtained, flowcell ID1 is known to have been fully received, and the receiving end can begin acquiring flowcell ID2 from the start packet of its identification packets, that is, from the start packet of flowcell ID2. Since the identification packets are not acquired at the same time as the packets of flowcell ID1, no packet misordering occurs and the receiving end does not need to reorder packets. Furthermore, while flowcell ID1 is being received, additional delay or jitter may make its actual reception time longer than the preset reception duration, or, if the length of the identification packets was determined from the maximum link delay difference in the Clos architecture, the identification packets may be longer than the packets that can actually suffer a timing error; by acquiring the identification packets of flowcell ID2 only after the tail identifier of flowcell ID1 has been obtained, the acquisition is guaranteed to happen after the reception of flowcell ID1 is complete.
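For illustration, a sketch of the fig. 3G variant under the simplifying assumption that only two flowcells (an earlier and a later one) are in flight; the tuple layout and kind labels are hypothetical:

```python
def receive_with_tail_marker(incoming, earlier_id, later_id):
    """Sketch of the fig. 3G behaviour: identified packets of the later flowcell
    are buffered until the tail marker of the earlier flowcell arrives, then the
    later flowcell is delivered from its start packet. `incoming` yields
    (flowcell_id, kind, payload) with kind in {"id", "data", "tail"}."""
    buffer_later, delivered, tail_seen = [], [], False
    for cell_id, kind, payload in incoming:
        if cell_id == later_id and kind == "id" and not tail_seen:
            buffer_later.append(payload)          # hold packets that may be early
        elif cell_id == earlier_id:
            delivered.append(payload)             # the earlier flowcell is received normally
            if kind == "tail":
                tail_seen = True
                delivered.extend(buffer_later)    # flush from the later flowcell's start packet
                buffer_later = []
        else:
            delivered.append(payload)             # later flowcell after the flush
    return delivered

arrivals = [(1, "data", "c1-p9"), (2, "id", "c2-p1"), (1, "tail", "c1-p10"),
            (2, "data", "c2-p2")]
print(receive_with_tail_marker(arrivals, earlier_id=1, later_id=2))
# ['c1-p9', 'c1-p10', 'c2-p1', 'c2-p2']
```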
In addition, the length of packets that the receiving end needs to buffer is determined by the length of the identification packets, which in turn depends on the transmission delays of the earlier-sent and later-sent data flow units (in general, it equals the link delay difference between the two). To keep the ratio of buffered packets to the packets in a data flow unit small, the data flow unit length can be increased when dividing flowcells, within the range of the receiving end's processing capability. For example, in fig. 3B and 3C, the link delay difference T is 100 μs, so the length of the identification packets determined from the link delay difference is also 100 μs; when the length of flowcell ID1 is 500 μs, the proportion of packets buffered at the receiving end is p = 100 μs / 500 μs × 100% = 20%.
It can be seen that, in this embodiment, the sending end sends a data flow unit that includes identification packets which may produce a timing error at the receiving end; the receiving end receives the data flow unit and buffers the identification packets until it determines that the tail identifier of the previously sent data flow unit has been received, then ends the buffering and acquires the data flow unit from the start packet of the identification packets. Moreover, because the data flow unit is acquired from the start packet of the identification packets only after the tail identifier of the previously sent data flow unit has been obtained, the data flow unit is acquired correctly and out-of-order delivery is further reduced.
In an optional case, the sending end may have multiple pieces of sending data to send. Referring to fig. 3H, fig. 3H is a schematic diagram of a sending end with multiple pieces of sending data according to an embodiment of the present application. As shown in fig. 3H, if the sending end sends multiple pieces of sending data simultaneously, the data stream of each piece of sending data, which may be a natural flow or an aggregate flow, is divided into data flow units and sent. The data flow units of different pieces of sending data may reach the receiving end at the same time, i.e. their identification packets may arrive simultaneously, so the receiving end would have to buffer the identification packets of several data flow units at once, which increases its buffering pressure and lowers data processing efficiency. Therefore, before sending the multiple pieces of sending data, the sending end may stagger them and send their data flow units in the staggered order, so that the data flow units of different pieces of sending data reach the receiving end at different times. Optionally, after staggering, the time interval between different pieces of sending data is made greater than the link delay difference T, so that the receiving end finishes buffering for one piece of sending data before it starts receiving and buffering the next. This relieves the buffering pressure that would arise from buffering the identification packets of several data flow units at the same time, and improves data processing efficiency. A sketch of such a staggered schedule is given after this paragraph.
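As an assumption-laden sketch (the offsets and the function name staggered_send_schedule are illustrative; the patent only requires that arrival times differ):

```python
def staggered_send_schedule(flows, link_delay_diff_us):
    """Offset the start of each piece of sending data by a little more than the
    link delay difference T, so the identification packets of different flows do
    not reach the receiving end at the same time. Returns (start_offset_us, flow)
    pairs; the actual transmission mechanism is out of scope here."""
    gap = link_delay_diff_us + 1
    return [(i * gap, flow) for i, flow in enumerate(flows)]

print(staggered_send_schedule(["flow-A", "flow-B", "flow-C"], 100))
# [(0, 'flow-A'), (101, 'flow-B'), (202, 'flow-C')]
```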
Referring to fig. 3I, fig. 3I is a schematic diagram of an implementation process of the data transmission method based on identification packets according to an embodiment of the present application. As shown in fig. 3I, the sending data is input to router 1, which contains a slicer for dividing the data into a plurality of data flow units; the slicer may include a counter or a timer so that the sending data can be divided by packet duration, packet byte length, or packet count. Before the sending data reaches the slicer, other processing may be applied, for example hashing the natural flows of the sending data into an aggregate flow. The slicer divides the sending data into a plurality of data flow units, which include identification packets and are then transmitted by load sharing. When router 2 receives identification packets, it buffers them in a buffer queue until it determines that the previously sent data flow unit has been fully received, or until it receives a non-identification packet following the identification packets; router 2 then ends the buffering, takes the identification packets from the buffer queue, and receives the subsequent non-identification packets normally. Router 2 then forwards the acquired data flow units to the next router in the order received, or uses the corresponding data for service execution. In router 1, if multiple pieces of sending data are to be sent, their data flow units can be staggered so that the identification packets of different pieces of sending data reach router 2 at different times, avoiding the loss of data processing efficiency that would result from router 2 having to buffer the identification packets of several data flow units simultaneously. A sketch of the slicer is given below.
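A minimal sketch of such a slicer, assuming packets carry a timestamp and a byte length (the class name Slicer and the criterion strings are hypothetical):

```python
class Slicer:
    """Cut a packet stream into data flow units by packet count, byte length,
    or duration, using a simple counter/timer as in fig. 3I."""

    def __init__(self, criterion, limit):
        assert criterion in ("packets", "bytes", "duration_us")
        self.criterion, self.limit = criterion, limit

    def slice(self, packets):
        """packets: iterable of (timestamp_us, length_bytes, payload)."""
        cells, current = [], []
        count = total_bytes = 0
        start_ts = None
        for ts, length, payload in packets:
            if start_ts is None:
                start_ts = ts
            current.append(payload)
            count += 1
            total_bytes += length
            full = (
                (self.criterion == "packets" and count >= self.limit)
                or (self.criterion == "bytes" and total_bytes >= self.limit)
                or (self.criterion == "duration_us" and ts - start_ts >= self.limit)
            )
            if full:
                cells.append(current)
                current, count, total_bytes, start_ts = [], 0, 0, None
        if current:
            cells.append(current)
        return cells

# Divide a stream of 1500-byte packets into flowcells of 16 packets each
stream = [(i * 10, 1500, f"p{i}") for i in range(40)]
print([len(c) for c in Slicer("packets", 16).slice(stream)])   # [16, 16, 8]
```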
Fig. 4 is a communication apparatus 400 provided in an embodiment of the present application, which may be used to perform corresponding operations or steps corresponding to the transmitting end in the embodiments of fig. 3A to 3I, where the communication apparatus may be a router or may be a component (e.g., a chip or a circuit) configured in the router. In one possible implementation, as shown in fig. 4, the communication apparatus 400 includes a sending unit 401 and a processing unit 402.
The processing unit 402 is configured to generate a plurality of data stream units according to sending data, where a data stream unit includes one or more identification packets, and the identification packets mark the packets in the data stream unit that could otherwise produce a transmission timing error at a receiving end;
the sending unit 401 is configured to send the multiple data stream units through load sharing.
Optionally, the communication apparatus further includes a receiving unit 403 configured to obtain the delay difference between the multiple transmission links corresponding to the multiple data stream units;
the processing unit 402 is further configured to: and determining the length of the identification message according to the time delay difference, wherein the transmission link is a transmission link corresponding to the data stream unit which is transmitted by load sharing.
Optionally, the processing unit 402 is specifically configured to: and determining the length of the identification message according to the maximum time delay difference among the plurality of transmission links.
Optionally, the processing unit 402 is specifically configured to: determining a first transmission link and a second transmission link corresponding to adjacent data stream units in the plurality of data stream units, wherein the adjacent data stream units include a first data stream unit and a second data stream unit, and the second data stream unit is a data stream unit sent after the first data stream unit;
and determining the length of the identification message of the second data stream unit according to the time delay difference between the first transmission link and the second transmission link.
Optionally, the processing unit 402 is further configured to: stagger the data stream units corresponding to multiple pieces of sending data, so that the identification packets of different data stream units reach the receiving end at different times.
Alternatively, the receiving unit 403 and the transmitting unit 401 may be interface circuits or transceivers. The receiving unit 403 and the transmitting unit 401 may be independent units, or may be integrated into a transceiver unit (not shown), and the transceiver unit may implement the functions of the receiving unit 403 and the transmitting unit 401.
The processing unit 402 may be a processor, a chip, an encoder, a coding circuit, or other integrated circuits that can implement the method of the present application.
Since the specific method and embodiment have been described above, the communication apparatus 400 is used to execute the data transmission method corresponding to the transmitting end, and reference may be made to the description of relevant parts of the corresponding embodiment, which is not described herein again.
Optionally, the communication device 400 may further include a storage unit (not shown in the figure), which may be used for storing data and/or signaling, and the storage unit may be coupled to the processing unit 402, and may also be coupled to the receiving unit 403 or the sending unit 401. For example, the processing unit 402 may be configured to read data and/or signaling in the storage unit, so that the communication method in the foregoing method embodiment is performed. The storage unit may be a memory.
Fig. 5 is a communication device 500 according to an embodiment of the present application, which may be used to perform corresponding operations or steps corresponding to the receiving end in the embodiments of fig. 3A to 3I, where the communication device may be a router or may be a component (e.g., a chip or a circuit) configured in the router. In one possible implementation, as shown in fig. 5, the communication device 500 includes a receiving unit 501 and a processing unit 502.
A receiving unit 501, configured to receive multiple data stream units, where each data stream unit includes one or more identification packets, and the identification packets mark the packets in the data stream unit that could otherwise produce a transmission timing error at a receiving end;
the processing unit 502 is configured to obtain, according to the identification packet, transmission data of a transmitting end, where the transmission data includes multiple data stream units with correct time sequence.
Optionally, the processing unit 502 is specifically configured to: when an identification packet of a third data stream unit of the multiple data stream units is received, buffer the identification packet; and when a non-identification packet of the third data stream unit is received, acquire the third data stream unit starting from the start packet of its identification packets.
Optionally, the processing unit 502 is specifically configured to: when an identification packet of a fourth data stream unit of the multiple data stream units is received, buffer the identification packet; and when a tail identification packet of a fifth data stream unit of the multiple data stream units is received, acquire the fourth data stream unit starting from the start packet of its identification packets, where the fifth data stream unit is the adjacent data stream unit sent before the fourth data stream unit.
Optionally, the communication device 500 may further include a sending unit 503, and the receiving unit 501 and the sending unit 503 may be an interface circuit or a transceiver. The receiving unit 501 and the sending unit 503 may be independent units, or may be integrated into a transceiver unit (not shown), and the transceiver unit may implement the functions of the receiving unit 501 and the sending unit 503.
Alternatively, the processing unit 502 may be a processor, a chip, an encoder, a coding circuit, or other integrated circuits that can implement the method of the present application.
Since the specific method and embodiment have been described above, the communication apparatus 500 is used to perform a data transmission method corresponding to a receiving end, and reference may be made to the description of relevant portions of the corresponding embodiment, which is not repeated herein.
Optionally, the communication device 500 may further include a storage unit (not shown in the figure) that may be used for storing data and/or signaling; the storage unit may be coupled to the processing unit 502, and may also be coupled to the receiving unit 501 or the sending unit 503. For example, the processing unit 502 may be configured to read data and/or signaling in the storage unit, so that the communication method in the foregoing method embodiments is performed.
As shown in fig. 6, fig. 6 is a schematic structural diagram of a communication apparatus in an embodiment of the present application. The structure of the communication apparatus 400 or the communication apparatus 500 may refer to the structure shown in fig. 6. The communication apparatus 600 includes: a processor 111 and a transceiver 112, the processor 111 and the transceiver 112 being electrically coupled;
the processor 111 is configured to execute part or all of the computer program instructions, and when the part or all of the computer program instructions are executed, the apparatus is enabled to execute the method according to any of the above embodiments.
The transceiver 112, which is used for communicating with other devices; for example, transmitting multiple data stream units, or receiving multiple data stream units, through load sharing.
Optionally, a memory 113 is included for storing the computer program instructions. The memory 113 may be located within the apparatus (memory #1), integrated with the processor 111 (memory #2), or located outside the apparatus (memory #3).
It should be understood that the communication device 600 shown in fig. 6 may be a chip or a circuit. Such as a chip or circuit that may be provided in or within the router. The transceiver 112 may also be a communication interface. The transceiver includes a receiver and a transmitter. Further, the communication device 600 may also include a bus system.
The processor 111, the memory 113, and the transceiver 112 are connected via a bus system, and the processor 111 is configured to execute instructions stored in the memory 113 to control the transceiver to receive and transmit signals, so as to complete steps of a receiving end or a transmitting end in the implementation method related to the present application. The memory 113 may be integrated in the processor 111 or may be provided separately from the processor 111.
As an implementation manner, the functions of the transceiver 112 may be considered to be implemented by a transceiver circuit or a dedicated transceiver chip, and the processor 111 may be considered to be implemented by a dedicated processing chip, a processing circuit, a processor, or a general-purpose chip. The processor may be a central processing unit (CPU), a network processor (NP), or a combination of a CPU and an NP. The processor may further include a hardware chip or another general-purpose processor. The hardware chip may be an application-specific integrated circuit (ASIC), a programmable logic device (PLD), or a combination thereof. The PLD may be a complex programmable logic device (CPLD), a field-programmable gate array (FPGA), generic array logic (GAL), another programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or any combination thereof. The general-purpose processor may be a microprocessor, or may be any conventional processor or the like.
It will also be appreciated that the memory referred to in the embodiments of the application may be a volatile memory or a nonvolatile memory, or may include both volatile and nonvolatile memory. The nonvolatile memory may be a read-only memory (ROM), a programmable ROM (PROM), an erasable PROM (EPROM), an electrically erasable PROM (EEPROM), or a flash memory. The volatile memory may be a random access memory (RAM), which is used as an external cache. By way of example but not limitation, many forms of RAM are available, such as static random access memory (SRAM), dynamic random access memory (DRAM), synchronous dynamic random access memory (SDRAM), double data rate synchronous dynamic random access memory (DDR SDRAM), enhanced synchronous dynamic random access memory (ESDRAM), synchlink dynamic random access memory (SLDRAM), and direct rambus random access memory (DR RAM). It should be noted that the memory described herein is intended to include, but not be limited to, these and any other suitable types of memory.
An embodiment of the present application provides a computer storage medium storing a computer program, where the computer program includes a program for executing the method applied to the sending end in the foregoing embodiments.
An embodiment of the present application provides a computer storage medium storing a computer program, where the computer program includes a program for executing the method applied to the receiving end in the foregoing embodiments.
An embodiment of the present application provides a computer program product containing instructions that, when run on a computer, cause the computer to execute the method applied to the transmitting end in the foregoing embodiments.
An embodiment of the present application provides a computer program product containing instructions that, when run on a computer, cause the computer to execute the method applied to the receiving end in the foregoing embodiments.
An embodiment of the present application provides a communication system, including the communication apparatus 400 and the communication apparatus 500, which are respectively configured to perform the corresponding operations or steps of the transmitting end and the receiving end in the above specific embodiments of fig. 3A to 3I.
It should be understood that, in the various embodiments of the present application, the sequence numbers of the above-mentioned processes do not mean the execution sequence, and the execution sequence of each process should be determined by its function and inherent logic, and should not constitute any limitation to the implementation process of the embodiments of the present application.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and implementation constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described systems, apparatuses and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the several embodiments provided in the present application, it should be understood that the disclosed system, apparatus and method may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the units is only one logical division, and other divisions may be realized in practice, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present application or portions thereof that substantially contribute to the prior art may be embodied in the form of a software product stored in a storage medium and including instructions for causing a computer device (which may be a personal computer, a server, or an access network device) to perform all or part of the steps of the method according to the embodiments of the present application. And the aforementioned storage medium includes: various media capable of storing program codes, such as a usb disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
The above descriptions are merely specific embodiments of the present application, but the protection scope of the present application is not limited thereto. Any change or substitution that can be readily conceived by a person skilled in the art within the technical scope disclosed in the present application shall fall within the protection scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (20)

1. A data transmission method based on an identification message is characterized in that the method comprises the following steps:
generating a plurality of data stream units according to sending data, wherein each data stream unit comprises one or more identification messages, and the identification messages are used for marking the messages, in the data stream unit, that prevent a transmission time sequence error from being generated at a receiving end;
and transmitting the plurality of data stream units through load sharing.
2. The method of claim 1, wherein the start packet of the identification message is the first transmission packet of the data stream unit.
3. The method of claim 1 or 2, wherein the plurality of data stream units are generated by dividing the sending data according to a first data stream unit length.
4. The method according to claim 1 or 2, wherein the plurality of data stream units are generated by dividing a first data streamlet according to a first data stream unit length, and the first data streamlet is a data streamlet, in the sending data, whose length is greater than or equal to a first preset length.
5. The method according to claim 3 or 4, wherein the first data stream unit length comprises a packet duration, a packet byte length, or a packet number of a data stream unit.
6. The method according to any one of claims 1-5, further comprising:
acquiring time delay differences among a plurality of transmission links corresponding to the plurality of data stream units;
and determining the length of the identification message according to the time delay difference, wherein each transmission link is a transmission link corresponding to a data stream unit transmitted through load sharing.
7. The method of claim 6, wherein the determining the length of the identification message according to the time delay difference comprises:
and determining the length of the identification message according to the maximum time delay difference among the plurality of transmission links.
8. The method of claim 6, wherein the determining the length of the identification message according to the time delay difference comprises:
determining a first transmission link and a second transmission link corresponding to adjacent data stream units in the plurality of data stream units, wherein the adjacent data stream units include a first data stream unit and a second data stream unit, and the second data stream unit is a data stream unit sent after the first data stream unit;
and determining the length of the identification message of the second data stream unit according to the time delay difference between the first transmission link and the second transmission link.
9. The method according to any one of claims 1-8, wherein the data stream units respectively corresponding to a plurality of pieces of sending data are ordered in a dispersed manner, so that the identification messages of the data stream units arrive at the receiving end at different times.
10. A data transmission method based on an identification message is characterized in that the method comprises the following steps:
receiving a plurality of data stream units, wherein each data stream unit in the plurality of data stream units comprises one or more identification messages, and the identification messages are used for marking the messages, in the data stream unit, that prevent a transmission time sequence error from being generated at a receiving end;
and acquiring sending data of a sending end according to the identification messages, wherein the sending data comprises the plurality of data stream units in a correct time sequence.
11. The method of claim 10, wherein the start packet of the identification message is the first transmission packet of the data stream unit.
12. The method according to claim 10 or 11, wherein the acquiring of the sending data of the sending end according to the identification messages comprises:
when an identification message of a third data stream unit in the plurality of data stream units is received, caching the identification message;
and when receiving a non-identification message of the third data stream unit, acquiring the third data stream unit starting from the start packet of the identification message.
13. The method according to claim 10 or 11, wherein the acquiring of the sending data of the sending end according to the identification messages comprises:
when receiving an identification message of a fourth data stream unit in the plurality of data stream units, caching the identification message;
when receiving a tail identification packet of a fifth data stream unit in the plurality of data stream units, acquiring the fourth data stream unit starting from the start packet of the identification message of the fourth data stream unit, where the fourth data stream unit is an adjacent data stream unit sent before the fifth data stream unit.
14. A communication apparatus, characterized in that the apparatus comprises a processing unit and a sending unit, wherein,
the processing unit is configured to generate a plurality of data stream units according to sending data, wherein each data stream unit comprises one or more identification messages, and the identification messages are used for marking the messages, in the data stream unit, that prevent a transmission time sequence error from being generated at a receiving end; and
the sending unit is configured to send the plurality of data stream units through load sharing.
15. The apparatus of claim 14, wherein the processing unit and the sending unit are further configured to perform the method of any one of claims 2-9.
16. A communication apparatus, characterized in that the apparatus comprises a receiving unit and a processing unit, wherein,
the receiving unit is configured to receive a plurality of data stream units, wherein each data stream unit in the plurality of data stream units comprises one or more identification messages, and the identification messages are used for marking the messages, in the data stream unit, that prevent a transmission time sequence error from being generated at a receiving end; and
the processing unit is configured to acquire sending data of a sending end according to the identification messages, wherein the sending data comprises the plurality of data stream units in a correct time sequence.
17. The apparatus according to claim 16, wherein the receiving unit and the processing unit are further configured to perform the method according to any one of claims 11-13.
18. An apparatus for communication, the apparatus comprising at least one processor coupled with at least one memory, wherein
the at least one processor is configured to execute computer programs or instructions stored in the at least one memory, to cause the apparatus to perform the method according to any one of claims 1-9, or to cause the apparatus to perform the method according to any one of claims 10-13.
19. A readable storage medium storing instructions that, when executed, cause a method as claimed in any one of claims 1-9 to be implemented, or cause a method as claimed in any one of claims 10-13 to be implemented.
20. A communication system, characterized in that the system comprises a communication apparatus according to any one of claims 14-15 and a communication apparatus according to any one of claims 16-17.
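For illustration only and not as a limitation of the claims, the following Python sketch shows one possible realization of the sender-side procedure recited in claims 1 to 8, under stated assumptions: the sending data is divided into data stream units of a fixed packet count, the first packets of each unit are flagged as identification packets, the number of identification packets is derived from the maximum time delay difference among the load-sharing transmission links, and consecutive data stream units are distributed over different links. PACKET_SIZE, UNIT_LEN_PACKETS, link_rate_bps, and send_on_link are hypothetical parameters and helpers introduced for this sketch, and the mapping from delay difference to identification-packet count is only one possible choice.

import math

PACKET_SIZE = 1500        # bytes per packet (assumed value)
UNIT_LEN_PACKETS = 64     # first data stream unit length, in packets (assumed value)

def ident_packet_count(max_delay_diff_s: float, link_rate_bps: float) -> int:
    # The identification packets of a data stream unit should span at least the
    # amount of data that can be sent during the maximum time delay difference
    # among the transmission links, so that the receiving end does not process
    # packets of the next unit before the previous unit has fully arrived.
    bytes_in_flight = max_delay_diff_s * link_rate_bps / 8
    return max(1, math.ceil(bytes_in_flight / PACKET_SIZE))

def build_units(send_data: bytes, n_ident: int):
    # Divide the sending data into data stream units of UNIT_LEN_PACKETS packets
    # and flag the first n_ident packets of each unit as identification packets;
    # the start packet is the first transmission packet of the unit.
    packets = [send_data[i:i + PACKET_SIZE]
               for i in range(0, len(send_data), PACKET_SIZE)]
    units = []
    for u in range(0, len(packets), UNIT_LEN_PACKETS):
        unit = packets[u:u + UNIT_LEN_PACKETS]
        units.append([(i < n_ident, p) for i, p in enumerate(unit)])
    return units

def send_load_shared(units, links, send_on_link):
    # Send the data stream units through load sharing; round robin over the
    # links is only one possible distribution policy.
    for idx, unit in enumerate(units):
        link = links[idx % len(links)]
        for is_ident, payload in unit:
            send_on_link(link, is_ident, payload)

As a numerical example under these assumptions, a 10 Gbit/s link rate and a maximum time delay difference of 100 microseconds give about 125000 bytes in flight, so ident_packet_count returns 84 identification packets of 1500 bytes.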
CN202010427447.2A 2020-05-19 2020-05-19 Data transmission method and device based on identification message Pending CN113691459A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010427447.2A CN113691459A (en) 2020-05-19 2020-05-19 Data transmission method and device based on identification message

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010427447.2A CN113691459A (en) 2020-05-19 2020-05-19 Data transmission method and device based on identification message

Publications (1)

Publication Number Publication Date
CN113691459A (en) 2021-11-23

Family

ID=78576025

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010427447.2A Pending CN113691459A (en) 2020-05-19 2020-05-19 Data transmission method and device based on identification message

Country Status (1)

Country Link
CN (1) CN113691459A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2023202294A1 (en) * 2022-04-18 2023-10-26 华为技术有限公司 Data stream order-preserving method, data exchange device, and network

Similar Documents

Publication Publication Date Title
CN108462646B (en) Message processing method and device
CN108494676B (en) Data transmission method, data transmission device, data transceiving equipment, data transceiving system and storage medium
CN108243116B (en) Flow control method and switching equipment
US9325637B2 (en) System for performing distributed data cut-through
WO2020236273A1 (en) System and method for facilitating hybrid message matching in a network interface controller (nic)
US8767747B2 (en) Method for transferring data packets in a communication network and switching device
WO2017206763A1 (en) Terminal apparatus, data processing method, and data storage medium
US20050135355A1 (en) Switching device utilizing internal priority assignments
CN112448896B (en) Method and device for determining transmission period in deterministic network
US9106593B2 (en) Multicast flow reordering scheme
CN112242956B (en) Flow rate control method and device
US8514700B2 (en) MLPPP occupancy based round robin
EP3574616B1 (en) Processing real-time multipoint-to-point traffic
US8885673B2 (en) Interleaving data packets in a packet-based communication system
CN111740922B (en) Data transmission method, device, electronic equipment and medium
CN113691459A (en) Data transmission method and device based on identification message
CN109995608B (en) Network rate calculation method and device
CN113612698A (en) Data packet sending method and device
CN110365580B (en) Service quality scheduling method and device, electronic equipment and computer readable storage medium
US20030091067A1 (en) Computing system and method to select data packet
CN110601996B (en) Looped network anti-starvation flow control method adopting token bottom-preserving distributed greedy algorithm
CN110336759B (en) RDMA (remote direct memory Access) -based protocol message forwarding method and device
WO2022147762A1 (en) Data packet sequencing method and apparatus
EP3968545A1 (en) Fault protection method, device and system for optical network
WO2023123075A1 (en) Data exchange control method and apparatus

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination