WO2017211096A1 - Method and Device for Transmitting a Data Stream - Google Patents
Method and Device for Transmitting a Data Stream
- Publication number
- WO2017211096A1 (PCT/CN2017/074329)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- data stream
- packet
- node
- cache queue
- indication message
- Prior art date
Classifications
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L45/00—Routing or path finding of packets in data switching networks
- H04L45/24—Multipath
- H04L45/742—Route cache; Operation thereof
- H04L47/00—Traffic control in data switching networks
- H04L47/10—Flow control; Congestion control
- H04L47/11—Identifying congestion
- H04L47/12—Avoiding congestion; Recovering from congestion
- H04L47/20—Traffic policing
- H04L47/24—Traffic characterised by specific attributes, e.g. priority or QoS
- H04L47/2483—Traffic characterised by specific attributes, e.g. priority or QoS involving identification of individual flows
- H04L47/266—Stopping or restarting the source, e.g. X-on or X-off
- H04L47/30—Flow control; Congestion control in combination with information about buffer occupancy at either end or at transit nodes
- H04L47/32—Flow control; Congestion control by discarding or delaying data units, e.g. packets or frames
- H04L47/50—Queue scheduling
- H04L47/52—Queue scheduling by attributing bandwidth to queues
Definitions
- Embodiments of the present invention relate to the field of communications, and, more particularly, to a method and apparatus for transmitting a data stream.
- A multi-path transmission system is a communication system in which multiple transmission paths exist.
- A data center network (DCN) is a typical multi-path transmission system, which connects a large number of servers and is organized into multiple transmission paths.
- The DCN is a new type of network that integrates computing, storage, and networking.
- Data streams of different kinds of services are transmitted in a multi-path transmission system, and some services, such as commercial applications or financial transactions (commonly such as high-frequency transactions), have an urgent need for reliable transmission with low latency. Therefore, low-latency reliable transmission of data streams in a multi-path transmission system is critical.
- Network congestion is an important factor affecting reliable transmission with low latency, because network congestion can cause packet loss, which affects transmission reliability, and network congestion also increases transmission delay.
- The prior art generally relies on a retransmission mechanism and congestion avoidance techniques to ensure low-latency reliable transmission of data streams.
- The retransmission mechanism means that, after packet loss occurs, the sender of the data stream is asked to retransmit the lost packet so as to ensure transmission reliability. Specifically, packet loss on the multiple paths is detected, and feedback is then sent to the sending end of the data stream to trigger the sending end to perform retransmission.
- The main idea of the congestion avoidance technique is to select the least congested of the multiple paths for transmitting data streams so as to reduce the transmission delay. Specifically, network congestion on the multiple paths is detected, and feedback is then sent to the sending end of the data stream to trigger the sending end to perform corresponding scheduling.
- the present application provides a method and device for transmitting a data stream, which can ensure reliable transmission of data streams and reduce network congestion without using a closed loop control loop, and can reduce implementation complexity compared to the prior art.
- A first aspect provides a method for transmitting a data stream, the data stream being transmitted between a source node and a destination node via intermediate nodes, the data stream comprising a first data stream encoded in the form of a fountain code.
- The method includes: a first intermediate node receives an encoded packet sent by the source node or by at least one second intermediate node, where the encoded packet is a packet obtained by encoding an original packet of the first data stream using fountain code technology.
- The first intermediate node discards the encoded packet if the occupancy of a first cache queue exceeds a threshold, where the first cache queue is the cache queue allocated by the first intermediate node to the first data stream, and the threshold indicates the maximum occupancy allowed for the first cache queue.
- The first data stream is transmitted along a transmission path: it enters at the source node of the path, passes through one or more intermediate nodes, and arrives at the destination node of the path.
- The first intermediate node in the technical solution of the present application may refer to one intermediate node or multiple intermediate nodes in the transmission path of the first data stream.
- the source node on the transmission path of the first data stream is regarded as a transmitting device
- the intermediate node is regarded as a forwarding device
- the destination node is regarded as a receiving device
- Accordingly, the first intermediate node in the technical solution of the present application can also be called a forwarding device.
- the first intermediate node may be, for example, a network device with a data forwarding function, such as a switch or a router.
- In the technical solution of the present application, if the occupancy of the first buffer queue allocated to the first data stream exceeds a threshold, the currently received encoded packet of the first data stream is discarded, where the threshold is the maximum occupancy allowed for the first cache queue.
- Actively dropping packets of the first data stream in this way can reduce network congestion to a certain extent.
- The coding format of the first data stream is a fountain code, and fountain-code-based data transmission ensures reliability without retransmission. Therefore, actively dropping packets of the first data stream causes no throughput loss for the stream, and its reliable transmission is still guaranteed.
- The technical solution of the present application does not use a closed-loop control loop, which avoids the extra network bandwidth consumed by the feedback control of existing methods. Therefore, the technical solution of the present application can ensure reliable transmission of data streams and reduce network congestion without using a closed-loop control loop, and can reduce implementation complexity compared to the prior art.
- The encoded packet carries an identifier used to indicate the first data stream.
- The first intermediate node may determine, according to the identifier carried by the encoded packet, that the encoded packet belongs to the first data stream, and then decide whether to discard or cache the encoded packet by checking whether the occupancy of the first cache queue exceeds the threshold.
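The drop-or-cache decision described above can be sketched as follows; the queue capacity, threshold value, packet layout, and class/method names are illustrative assumptions, not details from the application:

```python
from collections import deque

class FirstCacheQueue:
    """Cache queue an intermediate node allocates to one fountain-coded stream."""

    def __init__(self, capacity_pkts: int, threshold_ratio: float):
        self.queue = deque()
        self.capacity = capacity_pkts
        self.threshold = threshold_ratio  # maximum allowed occupancy, e.g. 0.8

    def occupancy(self) -> float:
        return len(self.queue) / self.capacity

    def on_encoded_packet(self, packet: dict) -> str:
        # Drop actively when occupancy exceeds the threshold; fountain
        # coding makes the stream robust to these deliberate losses.
        if self.occupancy() > self.threshold:
            return "dropped"
        self.queue.append(packet)
        return "cached"

q = FirstCacheQueue(capacity_pkts=4, threshold_ratio=0.5)
results = [q.on_encoded_packet({"stream_id": 1, "seq": i}) for i in range(5)]
print(results)  # ['cached', 'cached', 'cached', 'dropped', 'dropped']
```

Once the queue holds three of four slots (occupancy 0.75 > 0.5), every further encoded packet of the stream is discarded rather than enqueued.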
- The method for transmitting a data stream is applied to a multi-path transmission system that includes multiple transmission paths, where the multiple transmission paths are used to transmit the data stream, and the first intermediate node represents each intermediate node on each of the multiple transmission paths.
- the transmission efficiency of the first data stream can be effectively improved.
- The first cache queue is allocated to the first data stream, and if the occupancy of the first cache queue exceeds the threshold, the currently received encoded packet of the first data stream is actively discarded.
- This effectively reduces network congestion on each of the multiple transmission paths, thereby reducing the transmission delay of the first data stream.
- the encoded form of the first data stream is a fountain code, which can ensure reliable transmission of the first data stream.
- the technical solution of the present application reliable transmission of the data stream can be ensured, and network congestion can be effectively reduced to reduce the transmission delay of the data stream, thereby being capable of satisfying low-latency reliable transmission of the data stream in the multi-path transmission system.
- the technical solution of the present application does not require a closed loop control loop, which reduces the implementation complexity compared to the prior art.
- The method further includes: if the occupancy of the first cache queue does not exceed the threshold, the first intermediate node stores the encoded packet in the first cache queue, and the first intermediate node sends the encoded packets cached in the first cache queue to the destination node.
- The method further includes: the first intermediate node receives an indication message sent by the destination node after all the original packets of the first data stream have been decoded from the received encoded packets, where the indication message is used to instruct the source node to stop sending the first data stream and the indication message is 1 bit in size; the first intermediate node then sends the indication message to the source node.
- Using the 1-bit indication message to notify the source node to stop sending the first data stream prevents the source node from sending unnecessary data into the network.
- Compared with conventional ACK message feedback, feeding back to the source node with the 1-bit indication message effectively reduces the occupation of network bandwidth.
- The indication message is further used to indicate that the first data stream should be discarded.
- the method further includes: The first intermediate node discards the encoded packet of the first data stream buffered in the first buffer queue according to the indication message.
- the coded packet of the first data stream existing in the network is actively discarded, thereby avoiding invalid transmission, which is beneficial to reducing network congestion.
- The first data stream may represent any of the service data streams in the multi-path transmission system.
- the technical solution of the present application can meet the requirement of low-latency reliable transmission of all data streams transmitted in the multi-path transmission system.
- The data stream further includes a second data stream whose encoded form is not a fountain code. The method further comprises: the first intermediate node receives a packet of the second data stream; the first intermediate node stores the packet of the second data stream in a second cache queue, where the second cache queue is the cache queue allocated by the first intermediate node to the second data stream; and the first intermediate node sends the packets of the second data stream cached in the second cache queue to the destination node.
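The dual-queue arrangement can be sketched as below; the `fountain_coded` flag, the capacity, and the threshold values are hypothetical illustrations of the idea, not details from the application:

```python
from collections import deque

# Per-stream queue allocation at the first intermediate node: the first
# cache queue (fountain-coded stream) uses active dropping above a
# threshold, while the second cache queue (ordinary stream) does not.
first_queue, second_queue = deque(), deque()
FIRST_CAPACITY, FIRST_THRESHOLD = 8, 0.75  # illustrative values

def forward(packet: dict) -> str:
    if packet["fountain_coded"]:
        if len(first_queue) / FIRST_CAPACITY > FIRST_THRESHOLD:
            return "dropped"          # safe: the fountain code tolerates loss
        first_queue.append(packet)
        return "first-queue"
    second_queue.append(packet)       # the low-priority stream is never starved
    return "second-queue"

print(forward({"fountain_coded": True, "seq": 0}))   # first-queue
print(forward({"fountain_coded": False, "seq": 0}))  # second-queue
```

Because the two streams never share a queue, active dropping on the fountain-coded stream cannot starve the ordinary stream.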
- a variety of different types of services can be deployed in a multi-path transmission system, which correspond to a wide variety of data streams transmitted in a multi-path transmission system.
- Some services, such as commercial applications or financial transactions (commonly, high-frequency trading), have strict requirements for end-to-end transmission delay, and so do the data flows corresponding to these services.
- the data streams transmitted in the multi-path transmission system are divided into high-priority streams (such as delay-sensitive streams) and low-priority streams.
- High priority flows have an urgent need for reliable transmission with low latency.
- An existing solution to the above problem is the traffic prioritization technique.
- Its main idea is that the forwarding device always preferentially processes the high-priority flow in the shared cache queue to ensure the transmission performance of the high-priority flow.
- However, such prioritization can cause starvation of low-priority flows.
- A high-priority flow (for example, a delay-sensitive flow) in the multi-path transmission system may be treated as the first data stream, and a low-priority flow may be treated as the second data stream.
- the first forwarding device allocates a first cache queue to the first data stream, and allocates a second cache queue to the second data stream, where the first cache queue is only used to cache the first data stream.
- the second cache queue is configured to cache the second data stream.
- The first forwarding device caches the first data stream and the second data stream separately, and achieves low-latency reliable transmission of the first data stream through the fountain code operation and the active packet-dropping operation on the first data stream.
- This largely avoids any impact on the second data stream, so the starvation of low-priority flows seen in existing traffic prioritization techniques does not arise. Therefore, compared with existing traffic prioritization, the technical solution of the present application achieves low-latency reliable transmission of the high-priority flow (corresponding to the first data stream) while preventing the low-priority flow (corresponding to the second data stream) from being starved, thereby ensuring fairness between data streams.
- A second aspect provides a method of receiving a data stream, the data stream being transmitted between a source node and a destination node via intermediate nodes. The method comprises: the destination node receives, through an intermediate node, an encoded packet of the first data stream sent by the source node, the encoded packet being a packet obtained by encoding an original packet of the first data stream using fountain code technology; the destination node decodes the encoded packet to obtain the corresponding original packet; and, once all the original packets of the first data stream have been decoded, the destination node sends an indication message to the source node instructing it to stop sending the first data stream, the indication message being 1 bit in size.
- Using the 1-bit indication message to notify the source node to stop sending the first data stream prevents the source node from sending unnecessary data into the network.
- Compared with conventional ACK message feedback, feeding back to the source node with the 1-bit indication message effectively reduces the occupation of network bandwidth.
- The method further includes: if, within a preset duration after sending the indication message, the destination node receives an encoded packet of the first data stream, it continues to send the indication message to the source node, until no encoded packet of the first data stream is received within the preset duration after an indication message is sent.
- the technical solution of the present application can ensure that the indication message successfully reaches the source node, so that the source node stops transmitting the encoded packet of the first data stream.
- The method further includes: within the preset duration, if an encoded packet of the first data stream is received again, the destination node discards the currently received encoded packet.
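The resend-until-quiet behavior at the destination node might look like the following sketch; the `recv_packet`/`send_indication` interface and the preset duration value are assumptions for illustration only:

```python
def notify_source_stop(recv_packet, send_indication, preset_duration=0.05):
    """Keep resending the 1-bit stop indication until no encoded packet of
    the first data stream arrives within `preset_duration` of a send.

    Hypothetical interface: recv_packet(timeout=...) returns the next
    encoded packet or None on timeout; send_indication() transmits the
    1-bit indication message (in the text, over UDP).
    """
    send_indication()
    while True:
        pkt = recv_packet(timeout=preset_duration)
        if pkt is None:
            return  # quiet for a whole preset duration: the source stopped
        # A straggler encoded packet arrived: discard it, resend the indication.
        send_indication()

# Simulated run: two straggler packets arrive, then the stream goes quiet.
sent = []
arrivals = iter([{"seq": 1}, {"seq": 2}, None])
notify_source_stop(lambda timeout: next(arrivals),
                   lambda: sent.append("stop-bit"))
print(len(sent))  # 3: the initial indication plus one resend per straggler
```

Straggler packets are simply dropped on receipt; only the arrival-or-silence outcome drives the retry loop.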
- the destination node sends the indication message to the source node based on a User Datagram Protocol (UDP).
- A third aspect provides a network device, configured to transmit a data stream between a source node and a destination node, the data stream including a first data stream encoded in the form of a fountain code, the network device being configured to perform the method of the first aspect or any possible implementation of the first aspect.
- the network device may comprise means for performing the method of the first aspect or any of the possible implementations of the first aspect.
- the network device corresponds to the first intermediate node in the method in the first aspect or any possible implementation manner of the first aspect.
- A fourth aspect provides a network device for transmitting a data stream between a source node and a destination node, the data stream comprising a first data stream encoded in the form of a fountain code, the network device comprising a memory and a processor, where the memory is configured to store instructions and the processor is configured to execute the instructions stored in the memory, and executing those instructions causes the processor to perform the method of the first aspect or any possible implementation of the first aspect.
- A fifth aspect provides a multi-path transmission system, where the multi-path transmission system includes a transmitting device, a receiving device, and a network device, multiple paths exist between the transmitting device and the receiving device, and the network device is a forwarding device on the multiple paths.
- The network device corresponds to the network device of the third aspect or the fourth aspect, and also to the first intermediate node in the method of the first aspect or any possible implementation thereof; the transmitting device corresponds to the source node, and the receiving device corresponds to the destination node, in the method of the first aspect or any possible implementation thereof.
- the first data stream may be a delay-sensitive stream.
- the first data stream is a short stream with strict requirements on transmission delay in a Data Center Network (DCN).
- The occupancy of the first cache queue may be expressed in any one of the following forms: a space occupation size, a space occupation percentage, or a space occupation ratio.
- the threshold value indicates a maximum occupancy rate allowed by the first cache queue.
- For example, the overall cache space of the first forwarding device is 10 MB,
- the storage space of the first cache queue is 5 MB,
- and the storage space of the second cache queue is 5 MB. The threshold of the first cache queue can then be expressed as 4 MB (a space occupation size), as 80% (a space occupation percentage), or as 0.8 (a space occupation ratio).
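The three equivalent forms of the threshold from the example above can be checked with a few lines of arithmetic (the 4.5 MB usage figure is an illustrative assumption):

```python
# The same threshold for a 5 MB first cache queue, expressed in the three
# forms mentioned in the text: size, percentage, and ratio.
queue_size_mb = 5
threshold_mb = 4                                     # space occupation size
threshold_pct = 100 * threshold_mb / queue_size_mb   # percentage -> 80.0
threshold_ratio = threshold_mb / queue_size_mb       # ratio -> 0.8

used_mb = 4.5                                        # hypothetical current usage
exceeded = used_mb / queue_size_mb > threshold_ratio
print(threshold_pct, threshold_ratio, exceeded)      # 80.0 0.8 True
```

All three forms describe the same drop condition; a forwarding device only needs to compare in one of them.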
- Actively dropping packets of the first data stream when this maximum allowed occupancy is exceeded can reduce network congestion to some extent.
- The coding format of the first data stream is a fountain code, and fountain-code-based data transmission ensures reliability without retransmission. Therefore, actively dropping packets of the first data stream causes no throughput loss for the stream, and its reliable transmission is still guaranteed.
- The technical solution of the present application does not use a closed-loop control loop, which avoids the extra network bandwidth consumed by the feedback control of existing methods. Therefore, the technical solution of the present application can ensure reliable transmission of data streams and reduce network congestion without using a closed-loop control loop, and can reduce implementation complexity compared to the prior art.
- FIG. 1 is a schematic diagram of an application scenario of an embodiment of the present invention.
- FIG. 2 is a schematic diagram showing a method of transmitting a data stream according to an embodiment of the present invention.
- FIG. 3 shows a schematic flow chart of a method for transmitting a data stream according to an embodiment of the present invention.
- FIG. 4 shows a schematic block diagram of a network device according to an embodiment of the present invention.
- FIG. 5 shows another schematic block diagram of a network device according to an embodiment of the present invention.
- FIG. 6 shows a schematic block diagram of a multipath transmission system provided in accordance with an embodiment of the present invention.
- FIG. 1 shows a specific application scenario of the embodiment of the present invention: a leaf-spine architecture of a data center network (DCN).
- the Leaf-Spine architecture consists of servers and multi-level switches/routers (such as the core layer, aggregation layer, and edge layer switches/routers shown in Figure 1). Take the switch as an example.
- the Leaf-Spine architecture includes core switches, aggregation switches, edge switches, and servers.
- A core switch is connected to aggregation switches, and core switches are also connected to each other.
- An aggregation switch is connected to both core switches and edge switches, and different aggregation switches are also connected to each other.
- An aggregation switch is called a spine switch.
- An edge switch is connected to aggregation switches and is also connected to servers.
- An edge switch is called a leaf switch.
- By connecting to an edge switch, a server can access the network and establish communication connections with other servers in the network.
- FIG. 1 there are multiple transmission paths between any two different servers in the Leaf-Spine architecture, so that more path options can be provided, and traffic can be dispersed among multiple transmission paths.
- the server in Figure 1 may also be referred to as a host.
- In the DCN there is both east-west traffic and north-south traffic; east-west traffic flows mainly within a DCN.
- North-south traffic flows mainly between different DCNs. East-west traffic dominates, accounting for about 67% of total DCN traffic.
- East-west traffic is further divided into short flows and long flows, where a short flow generally refers to a flow with a length of several tens of KB.
- Short flows have strict requirements on the end-to-end transmission delay. Taking high-frequency trading as an example, the round trip (Round Trip Time, RTT) of a high-frequency trading message needs to complete within 30 milliseconds; if it takes longer, the message is invalidated, resulting in a lost trade. Therefore, low-latency reliable transmission of short flows is an urgent technical problem in the DCN.
- the prior art is generally based on a retransmission mechanism and a congestion avoidance technique to ensure low-latency reliable transmission of data streams. Since the existing retransmission mechanism and the congestion avoidance technology both need to use a closed loop control loop to monitor the congestion of each path, the implementation is complicated, and the feedback control of the closed loop control loop additionally takes up network bandwidth resources.
- The embodiment of the present invention provides a method and a device for transmitting a data stream, which can ensure reliable transmission of data streams and reduce network congestion without using a closed-loop control loop, and can reduce implementation complexity compared to the prior art.
- FIG. 2 shows a schematic diagram of a method of transmitting a data stream according to an embodiment of the present invention.
- There are multiple transmission paths between the source node and the destination node, such as path 1, path 2, ..., path n shown in FIG. 2.
- Each transmission path includes at least one intermediate node; for example, FIG. 2 illustrates the first, second, and third intermediate nodes in path 1.
- the source node corresponds to one server in the architecture shown in FIG. 1
- the destination node corresponds to another server in the architecture shown in FIG. 1, and there are multiple transmission paths between the two servers.
- The intermediate nodes in the n paths shown in FIG. 2 may be switches, routers, or servers, for example corresponding to some of the switches or routers in the architecture shown in FIG. 1.
- The source node transmits data block A to the destination node using the n transmission paths, and the coding form of data block A is a fountain code. Specifically, the source node divides data block A into k packets and then encodes the k packets using fountain code technology.
- A packet obtained by dividing the data block is called an original packet, and the encoded data obtained by encoding original packets using fountain code technology is called an encoded packet (Encoded Packet).
- As shown in FIG. 2, dividing data block A yields k original packets, and the k original packets are encoded with fountain code technology to obtain a number of encoded packets (only a few encoded packets are illustrated in FIG. 2 owing to space constraints). For example, original packets 1 and 2 are encoded to obtain the first encoded packet shown in FIG. 2, original packet 2 is encoded to obtain the second encoded packet, original packets 1 and k are encoded to obtain the third encoded packet, and so on.
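A toy version of the encoding step described for FIG. 2 (encoded packets as XOR combinations of original packets) might look like this; the data block contents and the particular combinations are illustrative assumptions:

```python
# Data block A is split into k original packets; encoded packets are XOR
# combinations of original packets, mirroring the FIG. 2 examples.
data_block_a = b"ABCDEFGH"
k = 4
size = len(data_block_a) // k
original = [data_block_a[i * size:(i + 1) * size] for i in range(k)]

def xor_bytes(*chunks):
    """XOR equal-length byte strings together."""
    out = bytearray(len(chunks[0]))
    for c in chunks:
        for i, b in enumerate(c):
            out[i] ^= b
    return bytes(out)

enc1 = xor_bytes(original[0], original[1])    # from original packets 1 and 2
enc2 = original[1]                            # from original packet 2 alone
enc3 = xor_bytes(original[0], original[k - 1])  # from original packets 1 and k
print([p.hex() for p in (enc1, enc2, enc3)])  # ['0206', '4344', '060a']
```

Any sufficiently large subset of such XOR combinations lets the destination solve for all k original packets.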
- the source node transmits the encoded packet obtained by coding to the destination node by using n paths.
- the destination node receives the encoded packet sent by the source node through the n paths, and then uses the fountain code technology to decode the received encoded packet to obtain a corresponding original packet.
- The destination node thus obtains data block A; that is, the transmission of data block A from the source node to the destination node is complete.
- The data stream corresponding to data block A shown in FIG. 2 is referred to as the first data stream, and the intermediate nodes in the n transmission paths are used to transmit the first data stream, in other words, to forward the encoded packets of data block A so that data block A is finally transmitted to the destination node.
- The first intermediate node allocates a first cache queue to the first data stream; as shown in the enlarged view of path 1 in FIG. 2, the first cache queue is dedicated to buffering the first data stream.
- The first intermediate node receives an encoded packet sent by the second intermediate node, determines that the encoded packet belongs to the first data stream, and then checks whether the occupancy of the first cache queue exceeds a threshold,
- where the threshold is the maximum occupancy allowed for the first cache queue. If the occupancy of the first cache queue exceeds the threshold, the encoded packet is discarded; otherwise, the encoded packet is buffered in the first cache queue, and the encoded packets cached in the first cache queue are subsequently sent to the third intermediate node.
- the third intermediate node then forwards the received encoded packet to the next hop intermediate node, and so on, until the encoded packet is sent to the destination node.
- The source node can be regarded as a transmitting device that first sends the first data stream.
- An intermediate node can be regarded as a forwarding device that forwards the first data stream.
- The destination node can be regarded as a receiving device that finally receives the first data stream and no longer forwards it.
- the intermediate node in the embodiment of the present invention may be, for example, a network device having a data forwarding function, such as a switch or a router.
- the source node involved in the embodiment of the present invention may be a server or a terminal device (such as a personal computer, a handheld terminal, etc.), and the destination node may be a server or a terminal device (such as a personal computer, a handheld terminal, etc.).
- the intermediate node may be a server or a switch or a router or a terminal device having a forwarding function (such as a personal computer, a handheld terminal, etc.).
- Fountain Codes
- A so-called fountain code refers to random encoding at the transmitting end: any number of encoded packets can be generated from k original packets, and the transmitting end continuously transmits encoded packets without knowing whether they are successfully received.
- Once the receiving end has received any subset of k(1+e) encoded packets, it can, with high probability (related to e), recover all original packets by decoding.
- Fountain codes can be divided into random linear fountain codes, LT (Luby Transform) codes, and Raptor codes.
- the LT code is the first fountain code scheme with practical performance.
- The encoding and decoding method of the LT code is as follows: at the transmitting end, d original packets are randomly selected from the k original packets according to a certain degree (d) distribution, the selected d original packets are XORed to obtain an encoded packet, and the encoded packet is transmitted to the receiving end. After receiving n (n greater than k) encoded packets, the receiving end can recover the k original packets with probability not less than (1 − e), where e is the probability that the receiving end cannot recover the original packets; e decreases as n increases.
- As n tends to infinity (i.e., the receiving end receives an infinite number of encoded packets), e tends to zero.
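A minimal sketch of LT-style encoding and peeling decoding, assuming a toy uniform degree distribution in place of a real one such as the robust soliton distribution:

```python
import random

def lt_encode(original, rng):
    """Produce one LT-style encoded packet as the XOR of d randomly chosen
    original packets. The uniform degree choice is a toy stand-in for a
    real degree distribution."""
    d = rng.randint(1, len(original))
    idxs = set(rng.sample(range(len(original)), d))
    payload = 0
    for i in idxs:
        payload ^= original[i]
    return idxs, payload

def lt_decode(encoded, k):
    """Peeling decoder: repeatedly find packets with exactly one unknown
    original and recover it by XORing out the already-known originals."""
    recovered = {}
    progress = True
    while progress and len(recovered) < k:
        progress = False
        for idxs, payload in encoded:
            unknown = idxs - recovered.keys()
            if len(unknown) == 1:
                for j in idxs & recovered.keys():
                    payload ^= recovered[j]
                recovered[unknown.pop()] = payload
                progress = True
    return recovered if len(recovered) == k else None  # None: need more packets

# Deterministic hand-built packets (degree sets and XOR payloads), so the
# peeling steps are easy to follow; original packets are single bytes here.
encoded = [({0}, 0x0A), ({0, 1}, 0x0A ^ 0x0B), ({1, 2}, 0x0B ^ 0x0C)]
print(lt_decode(encoded, 3))  # {0: 10, 1: 11, 2: 12}

# Random encoding as in an actual fountain: keep generating packets until
# the peeling decoder succeeds (roughly k(1+e) packets are needed).
rng = random.Random(1)
original = [0x11, 0x22, 0x33, 0x44]
stream = []
while (decoded := lt_decode(stream, len(original))) is None:
    stream.append(lt_encode(original, rng))
print(decoded == {i: v for i, v in enumerate(original)})  # True
```

The "fountain" property shows in the second run: the sender generates encoded packets open-endedly, and decoding succeeds as soon as any sufficiently large subset has arrived.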
- a reasonable degree distribution is the key to the performance of the LT code.
- According to coding-theoretic analysis of the LT code, when the amount of input data is above 10^4 packets, about 5% redundant information is required to ensure a high decoding success rate.
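- As a minimal illustration of the LT encoding step described above (not the patent's normative procedure), the following Python sketch draws a degree d from an assumed toy degree distribution, selects d original packets at random, and XORs them into one encoded packet; the degree weights and packet contents are illustrative assumptions, and a practical LT code would use a robust soliton distribution.

```python
import random

def lt_encode_packet(originals, degrees, weights, rng):
    """Form one LT encoded packet: sample a degree d, pick d distinct
    original packets, and XOR them byte-wise. Returns (indices, payload);
    the index set tells the decoder which originals were combined."""
    d = rng.choices(degrees, weights=weights, k=1)[0]
    idxs = rng.sample(range(len(originals)), d)
    payload = bytearray(originals[idxs[0]])
    for i in idxs[1:]:
        for b in range(len(payload)):
            payload[b] ^= originals[i][b]
    return frozenset(idxs), bytes(payload)

# Illustrative use: k = 4 equal-length original packets and a toy
# degree distribution over degrees 1..3.
rng = random.Random(7)
originals = [bytes([i]) * 8 for i in range(4)]
idxs, payload = lt_encode_packet(originals, [1, 2, 3], [0.3, 0.5, 0.2], rng)
```

In this sketch the encoded packet carries its index set explicitly; real systems instead transmit a PRNG seed from which the receiver regenerates the same selection.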
- The source node randomly distributes the information of all original packets of a data block into the respective encoded packets according to the selected encoding algorithm, and continuously "jets" encoded packets to the destination node, like a fountain, without knowing whether each encoded packet is successfully received; as long as the destination node receives enough encoded packets (more than the number of original packets), it can decode all of the original packets and thereby restore the data block.
- Experimental data shows that when the number of encoded packets received by the destination node is 1.704 times (on average) the number of original packets, the destination node can decode all the original packets. It should be understood that this multiple is related to k, d, and the degree of congestion on the network paths.
- When fountain-coded packets are lost during transmission, there is no need to feed the reception status back to the source node; that is, the source node need not be notified to retransmit discarded packets. It should be noted that, for fountain codes, once the destination node has decoded all the original packets, the reception state does need to be fed back to the source node to instruct it to stop transmitting encoded packets.
- the embodiment of the invention uses the fountain code technology to process the first data stream, which can effectively ensure the reliable transmission of the first data stream.
- Fountain code technology does not require a feedback channel; using only the forward link avoids the occupation of bandwidth resources by the feedback loop in the traditional retransmission mechanism.
- Compared with the existing retransmission mechanism, on the basis of ensuring reliable data transmission, this also helps reduce network congestion to a certain extent.
- In the embodiment of the present invention, when the occupancy rate of the first cache queue allocated for the first data stream exceeds a threshold, the currently received encoded packet of the first data stream is discarded, where the threshold indicates the maximum occupancy allowed by the first cache queue. Actively dropping packets of the first data stream in this way can reduce network congestion to a certain extent.
- Because the coding format of the first data stream is a fountain code, data transmission based on the fountain code can ensure reliability without retransmission. Therefore, the active packet dropping does not cause throughput loss for the first data stream, and reliable transmission of the first data stream can still be guaranteed.
- Furthermore, the embodiment of the present invention does not use a closed-loop control loop, which avoids the additional occupation of network bandwidth resources by the feedback control in existing methods.
- Therefore, the embodiment of the present invention can ensure reliable transmission of data streams and reduce network congestion without a closed-loop control loop, and can reduce implementation complexity compared with the prior art.
- the first data stream in the embodiment of the present invention may be a delay-sensitive stream, specifically, for example, a short stream with strict delay in the data center network.
- The method for transmitting a data stream is applied to a multi-path transmission system, where the multi-path transmission system includes multiple transmission paths, and the multiple transmission paths are used to transmit the data stream.
- The first intermediate node represents any intermediate node on each of the plurality of transmission paths.
- All intermediate nodes included in each of the n paths have the structure and function of the first intermediate node shown in FIG. 2; that is, on each of the n paths, the intermediate node allocates a first cache queue to the first data stream and actively discards the currently received encoded packet of the first data stream if the occupancy rate of the first cache queue exceeds the threshold. This can effectively reduce network congestion on each of the plurality of transmission paths and thereby reduce the transmission delay of the first data stream.
- the multi-path transmission system may include multiple source nodes and multiple destination nodes, and the correspondence between the source node and the destination node may be determined by the network topology in a specific scenario.
- the embodiment of the present invention is described by taking only one source node and one destination node shown in FIG. 2 as an example.
- The first data stream may represent any service data stream in the multi-path transmission system.
- The data stream of each service in the multi-path transmission system can be processed in the processing manner of the first data stream; therefore, the low-latency, reliable-transmission requirements of all data streams transmitted in the multi-path transmission system can be satisfied.
- a variety of different types of services can be deployed in a multi-path transmission system, which correspond to a wide variety of data streams transmitted in a multi-path transmission system.
- some services have strict requirements on the end-to-end transmission delay.
- the data streams corresponding to these services have an urgent need for low-latency and reliable transmission.
- the data streams transmitted in the multi-path transmission system are divided into high-priority streams (such as delay-sensitive streams) and low-priority streams. High priority flows have an urgent need for reliable transmission with low latency.
- One existing solution to the above problem is the traffic prioritization technique, whose main idea is to always give priority to high-priority flows in the shared cache queue so as to ensure the transmission performance of the high-priority flows.
- However, this prioritization can result in starvation of the low-priority flows.
- the first intermediate node allocates a second buffer queue for the second data stream, where the second data stream is a non-fountain code processed data stream.
- the first intermediate node receives a packet of the second data stream transmitted by the last hop network node (such as the third forwarding node shown in FIG. 2).
- the first intermediate node stores the packets of the second data stream into a second cache queue.
- the first intermediate node sends a packet of the second data stream buffered in the second buffer queue to the next hop network node (such as the second forwarding node shown in FIG. 2).
- In the embodiment of the present invention, the first data stream and the second data stream do not share the same cache queue at the first intermediate node: the first intermediate node allocates a first cache queue for the first data stream and a second cache queue for the second data stream, caches received packets of the first data stream in the first cache queue, and caches received packets of the second data stream in the second cache queue.
- the first cache queue and the second cache queue are different cache queues, but the first cache queue and the second cache queue share one physical cache space.
- the first data stream is, for example, a high-priority stream in a multi-path transmission system
- the second data stream is, for example, a low-priority stream in a multi-path transmission system.
- For example, the first data stream is a short stream in a data center network, and the second data stream is a long stream in the data center network.
- the first intermediate node allocates a first cache queue for the first data stream, and allocates a second cache queue for the second data stream, where the first cache queue is only used to cache the first data stream.
- the second cache queue is used to cache the second data stream.
- The first intermediate node caches the first data stream and the second data stream separately, and then applies the fountain code operation and the active packet-dropping operation to the first data stream. This achieves low-latency reliable transmission of the first data stream while largely avoiding any impact on the second data stream, so that the starvation of low-priority flows seen in the existing traffic prioritization technique does not occur.
- Compared with the existing traffic prioritization technique, the embodiment of the present invention can prevent starvation of the low-priority flow (corresponding to the second data stream) while achieving low-latency reliable transmission of the high-priority flow (corresponding to the first data stream), ensuring fairness between data streams.
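- The separate-queue behavior described above can be sketched as follows; this is a simplified Python model under assumed queue capacities and an assumed threshold (not the patent's required implementation). Packets of the fountain-coded first stream are actively dropped once the first cache queue's occupancy exceeds its threshold, while the second stream's queue is untouched by that policy.

```python
from collections import deque

class IntermediateNode:
    """Toy model of an intermediate node with two logical cache queues
    carved out of one shared physical cache (sizes are illustrative)."""

    def __init__(self, q1_threshold=4, q2_capacity=5):
        self.q1 = deque()                 # first cache queue: fountain-coded stream
        self.q2 = deque()                 # second cache queue: non-fountain stream
        self.q1_threshold = q1_threshold  # max occupancy allowed for q1
        self.q2_capacity = q2_capacity    # q2 drops only on overflow

    def receive(self, packet, stream):
        """Returns True if the packet was buffered, False if dropped."""
        if stream == "first":
            # Active drop: safe because fountain decoding needs only
            # *enough* encoded packets, not any particular one.
            if len(self.q1) >= self.q1_threshold:
                return False
            self.q1.append(packet)
            return True
        # Second stream: conventional tail drop on overflow only.
        if len(self.q2) >= self.q2_capacity:
            return False
        self.q2.append(packet)
        return True

node = IntermediateNode(q1_threshold=2)
results_first = [node.receive(f"enc{i}", "first") for i in range(3)]
result_second = node.receive("pkt0", "second")
```

The third encoded packet of the first stream is actively dropped, while the second stream's packet is buffered normally, mirroring the fairness argument above.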
- FIG. 3 is a schematic flowchart of a method 100 for transmitting a data stream according to an embodiment of the present invention.
- The source node in FIG. 3 corresponds to the source node in FIG. 2, the destination node in FIG. 3 corresponds to the destination node in FIG. 2, and the first intermediate node in FIG. 3 corresponds to the first intermediate node in FIG. 2. The method 100 includes:
- A data block of the first data stream to be sent by the source node (corresponding to data block A on the source node side shown in FIG. 2) is divided to form k original packets; the k original packets are then encoded by using the fountain code technology to obtain m encoded packets, m being greater than k. It should be understood that, for ease of drawing and understanding, only the n encoded packets that are about to enter the n paths are shown schematically in FIG. 1.
- The source node marks each encoded packet of the first data stream with an identifier for indicating the first data stream. Specifically, the source node tags each of the m encoded packets with a fixed service-flow label, and the intermediate nodes in the paths can identify the first data stream according to the label.
- the source node sends, by using multiple paths (corresponding to the n paths shown in FIG. 1), an encoded packet carrying an identifier indicating the first data stream.
- the first intermediate node of the multiple paths receives the coded packet sent by the last hop network node, and determines that the coded packet belongs to the first data stream according to the identifier carried by the coded packet.
- the last hop network node may be the source node or the last hop intermediate node of the path where the first intermediate node is located.
- the last hop network node corresponds to a third intermediate node.
- The first intermediate node determines whether the occupancy rate of the first cache queue (corresponding to the first cache queue shown in FIG. 1) allocated to the first data stream exceeds the threshold; if yes, go to S150; if not, go to S160.
- the first intermediate node determines that the occupancy of the first cache queue exceeds a threshold, and discards the encoded packet.
- the first intermediate node determines that the occupancy of the first cache queue does not exceed a threshold, and stores the encoded packet in the first cache queue.
- the first intermediate node sends the encoded packet in the first buffer queue to the destination node.
- The first intermediate node may directly send the encoded packet to the destination node; otherwise, the first intermediate node indirectly transmits the encoded packet to the destination node through the other forwarding nodes on the path where the first intermediate node is located.
- The destination node receives, over the multiple paths (corresponding to the n paths shown in FIG. 1), the encoded packets sent by the source node, decodes the received encoded packets by using the fountain code decoding technology, and determines whether all original packets of the first data stream (for example, the k original packets of the data block on the source node side in FIG. 1) have been decoded; if yes, go to S190; if not, go to S170.
- The destination node sends, to the source node, an indication message instructing it to stop sending the first data stream if the destination node determines that all original packets of the first data stream have been decoded. It should be understood that, after receiving the indication message, the first intermediate node forwards the indication message to the source node.
- In the embodiment of the present invention, when the occupancy rate of the first cache queue allocated for the first data stream exceeds the threshold, the currently received encoded packet of the first data stream is discarded, where the threshold indicates the maximum occupancy allowed by the first cache queue. Actively dropping packets of the first data stream in this way can reduce network congestion to a certain extent.
- Because the coding format of the first data stream is a fountain code, data transmission based on the fountain code can ensure reliability without retransmission. Therefore, the active packet dropping does not cause throughput loss for the first data stream, and reliable transmission of the first data stream can still be guaranteed.
- Furthermore, the embodiment of the present invention does not use a closed-loop control loop, which avoids the additional occupation of network bandwidth resources by the feedback control in existing methods. Therefore, the embodiment of the present invention can ensure reliable transmission of data streams and reduce network congestion without a closed-loop control loop, and can reduce implementation complexity compared with the prior art.
- the action of encoding the k original packets by using the fountain code technique may be performed by an encoder on the source node side.
- If the central processing unit (CPU) of the encoder is a single-core CPU, the encoded packets may be output serially; if the CPU of the encoder is a multi-core CPU, the encoded packets may be output in parallel.
- a Field-Programmable Gate Array (FPGA)-based network interface card (NIC) can implement parallel processing of encoding and decoding using hardware.
- the encoder may be a functional module in the source node or an encoder device independent of the source node.
- the specific coding manner of the first data stream may adopt an LT (Luby Transform) coding mode, and the LT code is a practical fountain code mode.
- other fountain code coding methods may be used, which are not limited in this embodiment of the present invention.
- The source node uses an even-allocation and polling (round-robin) mechanism to continuously send the encoded packets of the first data stream to the destination node over the multiple paths; the polling imposes no strict correspondence between packet content and path order.
- n paths between the source node and the destination node are recorded as a path list.
- The source node allocates the first generated encoded packet to the first path of the path list (path 1 shown in FIG. 1) for transmission, allocates the second generated encoded packet to the second path of the path list (path 2 shown in FIG. 1) for transmission, allocates the third generated encoded packet to the third path of the path list (path 3 shown in FIG. 1) for transmission, and so on.
- the nth coded packet is allocated to the nth path of the path list (path n shown in Figure 1) for transmission, at which time the bottom of the path list has been reached.
- Subsequently generated encoded packets start again from the top of the path list; for example, the (n+1)th encoded packet is allocated to the first path of the path list (path 1 shown in FIG. 1) for transmission.
- the generated (n+2)th coded packet is allocated to the second path of the path list (path 2 shown in FIG. 1) for transmission, and so on.
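- The polling allocation described above amounts to a simple round-robin over the path list. The following Python sketch (with hypothetical path names) shows the wrap-around from the bottom of the list back to the top:

```python
def assign_paths(num_packets, path_list):
    """Round-robin assignment: the j-th generated encoded packet
    (1-based) goes to path ((j - 1) mod n) of the path list, wrapping
    back to the top after the n-th packet, as in the polling
    mechanism described above."""
    return [path_list[(j - 1) % len(path_list)]
            for j in range(1, num_packets + 1)]

paths = ["path1", "path2", "path3"]  # n = 3, illustrative path list
assignment = assign_paths(7, paths)
# packets 1..3 go to path1..path3; packet 4 wraps back to path1, etc.
```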
- the coded packet may be allocated to the corresponding path by the scheduler on the source node side, and the coded packet is transmitted by the transmitter on the source node side.
- The scheduler and the transmitter may be functional modules inside the source node.
- the first intermediate node determines whether the occupancy rate of the first buffer queue allocated by the first data stream exceeds a threshold.
- The occupancy of the first cache queue may be expressed in any of the following forms: space occupancy size, space occupancy percentage, or space occupancy ratio.
- The threshold indicates the maximum occupancy allowed by the first cache queue. For example, suppose the overall cache space of the first intermediate node is 10M, the storage space of the first cache queue is 5M, and the storage space of the second cache queue is 5M. If the occupancy of the first cache queue is expressed as a space occupancy size, the threshold of the first cache queue may be configured as 4M.
- If the occupancy of the first cache queue is expressed as a space occupancy percentage, the threshold of the first cache queue may be configured as 80%; if it is expressed as a space occupancy ratio, the threshold may be set to 0.8.
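- The three occupancy forms above can be compared against the configured threshold uniformly. The following Python sketch uses the example numbers from the text (a 5M first cache queue with thresholds of 4M, 80%, or 0.8) purely for illustration:

```python
def exceeds_threshold(occupied, capacity, threshold, form):
    """Evaluate the first cache queue's occupancy against its threshold,
    where the occupancy may be expressed as an absolute size, a
    percentage, or a ratio."""
    if form == "size":
        value = occupied                      # e.g. megabytes occupied
    elif form == "percentage":
        value = 100.0 * occupied / capacity   # e.g. 80 means 80%
    elif form == "ratio":
        value = occupied / capacity           # e.g. 0.8
    else:
        raise ValueError("unknown occupancy form: " + form)
    return value > threshold

# Example: a 5M first cache queue currently holding 4.5M (or 3.5M).
drop_by_size = exceeds_threshold(4.5, 5.0, 4.0, "size")        # 4.5M > 4M
drop_by_pct = exceeds_threshold(4.5, 5.0, 80.0, "percentage")  # 90% > 80%
drop_by_ratio = exceeds_threshold(3.5, 5.0, 0.8, "ratio")      # 0.7 is below 0.8
```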
- Unlike the prior art, in which packets are dropped only when the shared cache queue of an intermediate node overflows, the first intermediate node in the embodiment of the present invention discards the currently received encoded packet as soon as it determines that the occupancy rate of the first cache queue exceeds the threshold.
- The packet dropping in the embodiment of the present invention may therefore be referred to as aggressive dropping (Aggressive Dropping).
- the action of decoding the received encoded packet by using the fountain code decoding technology may be performed by a decoder on the destination node side.
- The decoder may be a functional module inside the destination node, or may be a decoder device independent of the destination node.
- the encoded packet carries information of the original packet.
- For example, the first encoded packet shown in FIG. 1 is obtained based on original packet 1 and original packet 2, and thus includes information capable of identifying original packet 1 and original packet 2. The destination node can therefore acquire all the original packets by decoding the received encoded packets.
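- For illustration only, the following Python sketch shows the classic "peeling" decoder often used for LT codes (the patent does not prescribe a particular decoding algorithm): each encoded packet carries the set of original-packet indices it combines; any degree-1 packet directly yields an original packet, which is then XORed out of every other packet covering it, possibly exposing new degree-1 packets.

```python
def peel_decode(k, encoded, pkt_len):
    """encoded: list of (index_set, payload_bytes) pairs.
    Returns the k recovered original packets, or None if decoding
    stalls (i.e., not enough encoded packets were received)."""
    recovered = [None] * k
    work = [(set(idxs), bytearray(payload)) for idxs, payload in encoded]
    progress = True
    while progress:
        progress = False
        for idxs, payload in work:
            if len(idxs) == 1:
                i = next(iter(idxs))
                if recovered[i] is None:
                    recovered[i] = bytes(payload)
                    progress = True
                    # Peel the recovered original out of every other
                    # encoded packet that covers it.
                    for other_idxs, other_payload in work:
                        if i in other_idxs and len(other_idxs) > 1:
                            for b in range(pkt_len):
                                other_payload[b] ^= recovered[i][b]
                            other_idxs.discard(i)
    return recovered if all(r is not None for r in recovered) else None

# Illustrative example with k = 3 original packets of 4 bytes each.
xor = lambda a, b: bytes(x ^ y for x, y in zip(a, b))
orig = [bytes([1]) * 4, bytes([2]) * 4, bytes([3]) * 4]
enc = [({0}, orig[0]),
       ({0, 1}, xor(orig[0], orig[1])),
       ({1, 2}, xor(orig[1], orig[2]))]
decoded = peel_decode(3, enc, 4)
```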
- the size of the indication message sent by the destination node for instructing to stop transmitting the encoded packet of the first data stream is 1 bit.
- In the prior art, the reception state of a data stream is generally fed back to the source node by using ACK packets, and the transmission of ACK packets occupies certain network bandwidth resources.
- In the embodiment of the present invention, only a 1-bit indication message is sent to the source node to feed back the reception state of the first data stream, which effectively reduces the occupation of network bandwidth and thereby helps reduce network congestion.
- If the destination node receives an encoded packet of the first data stream again within a preset duration after sending the indication message, the destination node sends the indication message to the source node again, until no encoded packet of the first data stream is received within the preset duration after the indication message is sent.
- The destination node may send the indication message to the source node over multiple paths, which improves the probability that the indication message successfully reaches the source node, so that the source node receives the indication message as soon as possible, stops sending the first data stream, and avoids wasting network transmission resources on useless data.
- the indication message sent by the destination node is further used to indicate that the first data stream is discarded.
- The first forwarding node receives the indication message and, according to the indication message, discards the encoded packets of the first data stream buffered in the first cache queue.
- After the destination node has decoded all the original packets of the first data stream, the encoded packets of the first data stream still present in the network are actively discarded, which avoids invalid transmission and helps reduce network congestion.
- The message to be sent by the source node is divided into blocks of equal length, and each block is further divided into several packets of equal length (to distinguish them from the encoded packets, these are referred to here as original packets (Original Packet)). The original packets are encoded by using the fountain code encoding technology to form encoded packets, and the encoded packets are then transmitted over the multiple paths.
- L is used to represent the length of a message block (Block) in bytes, and the total sending rate is assumed to be r, in bps.
- The source node sends the encoded packets to the destination node over multiple paths, so the aggregate bandwidth of the multiple paths can be reasonably utilized, thereby effectively improving the data transmission rate.
- the transmission of the first data stream may be based on a User Datagram Protocol (UDP).
- That is, the source node sends the encoded packets of the first data stream, the intermediate nodes forward the encoded packets, and the destination node receives the encoded packets, all based on UDP.
- the destination node may also send an indication message to the source node for instructing to stop sending the first data stream based on UDP.
- After the destination node determines that all the original packets of the current data block have been decoded, if an encoded packet of the same data block is received again, the packet is discarded, and the indication message instructing the source node to stop transmitting encoded packets of the current data block is sent to the source node again.
- the indication message sent by the destination node to the source node may be discarded during the transmission process and cannot reach the source node successfully.
- Therefore, after sending the indication message, if the destination node receives an encoded packet of the same data block within the preset duration from the sending time of the indication message, the destination node resends the indication message; once no encoded packet of the same data block is received within the preset duration, the destination node stops sending the indication message.
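- The resend policy above can be modeled as follows; this Python sketch simulates packet arrival times against a preset duration (all timing values are illustrative assumptions, and a real implementation would use actual timers over UDP rather than a precomputed arrival list):

```python
def stop_message_times(decode_time, arrival_times, preset_duration):
    """Return the times at which the destination sends the 1-bit STOP
    indication message: once when decoding completes, and again each
    time an encoded packet of the same data block arrives within the
    preset duration after the most recent STOP."""
    stops = [decode_time]  # first STOP sent when all originals are decoded
    for t in sorted(arrival_times):
        if t <= decode_time:
            continue  # arrived before decoding completed; not a trigger
        if t - stops[-1] <= preset_duration:
            stops.append(t)  # late packet within the window: resend STOP
        # a packet outside the window would mean the source has already
        # stopped; no further STOP is sent in this simple model
    return stops

# Decoding completes at t=10; stray encoded packets of the same block
# arrive at t=12 and t=14; nothing arrives within 5 time units after
# the STOP at t=14, so the destination stops resending.
times = stop_message_times(10, [3, 12, 14], preset_duration=5)
```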
- the indication message of the embodiment of the present invention may also be referred to as a "STOP" signal.
- the source node stops transmitting the encoded packet of the first data stream upon receiving the indication message indicating that the transmission of the encoded packet of the first data stream is stopped.
- the next data stream can be sent subsequently, for example, the next data stream can be sent by the method of the embodiment of the present invention.
- the method for multi-path transmission of a data stream in the embodiment of the present invention may be referred to as Cloudburst, and the first data stream as a processing object may be referred to as a Cloudburst data stream.
- In the embodiment of the present invention, when the occupancy rate of the first cache queue allocated for the first data stream exceeds the threshold, the currently received encoded packet of the first data stream is discarded, where the threshold is the maximum occupancy allowed by the first cache queue. Actively dropping packets of the first data stream in this way can reduce network congestion to a certain extent.
- Because the coding format of the first data stream is a fountain code, data transmission based on the fountain code can ensure reliability without retransmission. Therefore, the active packet dropping does not cause throughput loss for the first data stream, and reliable transmission of the first data stream can still be guaranteed.
- Furthermore, the embodiment of the present invention does not use a closed-loop control loop, which avoids the additional occupation of network bandwidth resources by the feedback control in existing methods. Therefore, the embodiment of the present invention can ensure reliable transmission of data streams and reduce network congestion without a closed-loop control loop, and can reduce implementation complexity compared with the prior art.
- the existing congestion avoidance technology schedules data streams through closed-loop control.
- a rate limit is performed at the network entry.
- the source node limits the data stream transmission rate after receiving network congestion information.
- In the embodiment of the present invention, since no closed-loop control loop is used, the source node can always transmit the encoded packets of the first data stream at a fixed rate. As long as a path is not congested, the encoded packets sent by the source node can be transmitted to the destination node, and the first intermediate node in the path actively drops packets when the occupancy rate of the first cache queue exceeds the threshold, which effectively reduces network congestion. Thus, the first data stream sent by the source node can reach the destination node with a small transmission delay. Therefore, compared with existing congestion avoidance techniques, the embodiment of the present invention not only reduces network congestion without requiring a complicated control mechanism, but also reduces the data transmission delay to a certain extent.
- FIG. 4 shows a schematic block diagram of a network device 200 for transmitting a data stream between a source node and a destination node according to an embodiment of the present invention, where the data stream includes a first data stream encoded in the form of a fountain code. The network device 200 includes:
- the receiving module 210, configured to receive an encoded packet sent by the source node or an intermediate node, where the encoded packet is obtained by encoding an original packet of the first data stream by using the fountain code technology, and the intermediate node is located between the source node and the destination node and is used for data forwarding between the source node and the destination node;
- the processing module 220, configured to discard the encoded packet received by the receiving module if the occupancy of the first cache queue exceeds a threshold, where the first cache queue is a cache queue allocated by the network device for the first data stream, and the threshold indicates the maximum occupancy allowed by the first cache queue.
- In the embodiment of the present invention, when the occupancy rate of the first cache queue allocated for the first data stream exceeds the threshold, the currently received encoded packet of the first data stream is discarded, where the threshold indicates the maximum occupancy allowed by the first cache queue. Actively dropping packets of the first data stream in this way can reduce network congestion to a certain extent.
- Because the coding format of the first data stream is a fountain code, data transmission based on the fountain code can ensure reliability without retransmission. Therefore, the active packet dropping does not cause throughput loss for the first data stream, and reliable transmission of the first data stream can still be guaranteed.
- Furthermore, the embodiment of the present invention does not use a closed-loop control loop, which avoids the additional occupation of network bandwidth resources by the feedback control in existing methods. Therefore, the embodiment of the present invention can ensure reliable transmission of data streams and reduce network congestion without a closed-loop control loop, and can reduce implementation complexity compared with the prior art.
- the processing module 220 is further configured to store the encoded packet received by the receiving module into the first cache queue when the occupancy of the first cache queue does not exceed the threshold;
- the network device 200 further includes a first sending module, configured to send the encoded packet buffered in the first cache queue to the destination node.
- the receiving module 210 is further configured to receive an indication message, where the indication message is sent by the destination node in the case that all the original packets of the first data stream have been decoded based on the received encoded packets; the indication message is used to instruct the source node to stop sending the first data stream, and the indication message is 1 bit in size;
- the network device 200 further includes a second sending module, configured to send the indication message received by the receiving module to the source node.
- the indication message is further used to indicate that the first data stream is to be discarded, and the processing module 220 is further configured to discard, according to the indication message, the encoded packets of the first data stream buffered in the first cache queue.
- the data stream further includes a second data stream that is not in the form of a fountain code
- the receiving module 210 is further configured to receive the packets of the second data stream;
- the processing module 220 is further configured to store the packets of the second data stream received by the receiving module into a second cache queue, where the second cache queue is a cache queue allocated by the network device to the second data stream;
- the network device 200 further includes a third sending module, configured to send, to the destination node, a packet of the second data stream buffered in the second cache queue.
- The network device 200 may correspond to the first intermediate node in the method for transmitting a data stream in the embodiment of the present invention, and the foregoing and other operations and/or functions of the respective modules in the network device 200 are intended to implement the corresponding processes of the methods in FIG. 2 and FIG. 3; for brevity, details are not described herein again.
- processing module 220 in network device 200 can be implemented by a processor or processor related component in network device 200.
- the receiving module 210 can be implemented by a receiver or a related component of the receiver in the network device 200.
- the first sending module, the second sending module, and the third sending module may be implemented by a transmitter or a related component of the transmitter in the network device 200.
- An embodiment of the present invention further provides a network device 300, configured to transmit a data stream between a source node and a destination node, where the data stream includes a first data stream encoded in the form of a fountain code.
- The network device 300 includes a processor 310, a memory 320, a receiver 340, and a transmitter 350, where the processor 310, the memory 320, the receiver 340, and the transmitter 350 communicate via an internal communication link; the memory 320 is configured to store instructions, and the processor 310 is configured to execute the instructions stored in the memory 320 to control the receiver 340 to receive signals and control the transmitter 350 to send signals.
- The receiver 340 is configured to receive an encoded packet sent by the source node or an intermediate node, where the encoded packet is obtained by encoding an original packet of the first data stream by using the fountain code technology, and the intermediate node is located between the source node and the destination node and is used for data forwarding between the source node and the destination node; the processor 310 is configured to discard the encoded packet received by the receiver 340 when the occupancy of the first cache queue exceeds a threshold, where the first cache queue is a cache queue allocated by the network device for the first data stream, and the threshold indicates the maximum occupancy allowed by the first cache queue.
- In the embodiment of the present invention, when the occupancy rate of the first cache queue allocated for the first data stream exceeds the threshold, the currently received encoded packet of the first data stream is discarded, where the threshold indicates the maximum occupancy allowed by the first cache queue. Actively dropping packets of the first data stream in this way can reduce network congestion to a certain extent.
- the coding format of the first data stream is a fountain code, and the data transmission based on the fountain code can ensure the reliability of data transmission without retransmission. Therefore, the active packet loss of the first data stream does not cause the first The throughput loss of the data stream can still guarantee the reliable transmission of the first data stream.
- the embodiment of the present invention does not use a closed loop control loop, which avoids the feedback control existing in the existing method to additionally occupy network bandwidth resources. Therefore, the embodiment of the present invention can ensure reliable transmission of data streams and reduce network congestion without using a closed loop control loop, and can reduce implementation complexity compared to the prior art.
- the processor 310 is further configured to store the coded packet received by the receiver into the first cache queue when the occupancy of the first cache queue does not exceed the threshold, and the transmitter 350 is configured to send the coded packet buffered in the first cache queue to the destination node.
- the receiver 340 is further configured to receive an indication message, where the indication message is sent by the destination node when all original packets of the first data stream have been decoded from the received coded packets.
- the indication message is used to instruct the source node to stop sending the first data stream, and the size of the indication message is 1 bit.
- the transmitter 350 is further configured to send the indication message to the source node.
- the indication message is further used to indicate that the first data stream is to be discarded, and the processor 310 is further configured to discard, according to the indication message received by the receiver 340, the coded packets of the first data stream buffered in the first cache queue.
- the data stream further includes a second data stream whose encoded form is not a fountain code.
- the receiver 340 is further configured to receive a packet of the second data stream;
- the processor 310 is configured to store the packet of the second data stream received by the receiver into a second cache queue, where the second cache queue is a cache queue allocated by the network device to the second data stream;
- the transmitter 350 is further configured to send, to the destination node, a packet of the second data stream buffered in the second cache queue.
- the processor 310 may be a central processing unit ("CPU"), or another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like.
- the general-purpose processor may be a microprocessor, or the processor may be any conventional processor or the like.
- the memory 320 can include read only memory and random access memory and provides instructions and data to the processor 310. A portion of the memory 320 may also include a non-volatile random access memory. For example, the memory 320 can also store information of the device type.
- in implementation, each step of the foregoing method may be completed by an integrated hardware logic circuit in the processor 310 or by instructions in the form of software.
- the steps of the method disclosed in the embodiments of the present invention may be performed directly by a hardware processor, or by a combination of hardware and software modules in the processor.
- the software module can be located in a conventional storage medium such as random access memory, flash memory, read only memory, programmable read only memory or electrically erasable programmable memory, registers, and the like.
- the storage medium is located in the memory 320, and the processor 310 reads the information in the memory 320 and combines the hardware to perform the steps of the above method. To avoid repetition, it will not be described in detail here.
- the transmitter 350 may be a hardware circuit or device implementing a sending function, such as an antenna or a network interface card.
- likewise, the receiver 340 may be a hardware circuit or device implementing a receiving function, such as an antenna or a network interface card.
- the embodiment of the invention is not limited in this respect.
- receiver 340 and the transmitter 350 can be implemented by a device having a transceiving function, such as a transceiver, in particular, an antenna.
- the network device 300 may correspond to the first forwarding device in the method for transmitting a data stream in the embodiments of the present invention, and may also correspond to the network device 200 according to the embodiments of the present invention.
- the above and other operations and/or functions of the modules in the network device 300 implement the corresponding procedures of the methods in FIG. 2 and FIG. 3; for brevity, details are omitted here.
- FIG. 6 shows a schematic block diagram of a multipath transmission system 400 according to an embodiment of the present invention, including a transmitting device 410, a receiving device 420, and a network device 430, with multiple paths between the transmitting device 410 and the receiving device 420.
- the network device 430 is a forwarding device on the multiple paths and corresponds to the first forwarding device in the method for transmitting a data stream according to the embodiments of the present invention.
- the network device 430 also corresponds to the network device 200 or the network device 300 of the embodiments of the present invention.
- in the embodiment of the present invention, when the occupancy of the first cache queue allocated to the first data stream exceeds a threshold, the currently received coded packet of the first data stream is discarded, where the threshold is the maximum occupancy allowed for the first cache queue. Actively dropping packets of the first data stream can reduce network congestion to some extent.
- the first data stream is encoded as a fountain code, and fountain-code-based transmission guarantees reliable delivery without retransmission. Therefore, actively dropping packets of the first data stream causes no throughput loss, and reliable transmission of the first data stream is still guaranteed.
- the embodiment of the present invention uses no closed-loop control loop, avoiding the extra network bandwidth consumed by the feedback control of existing methods. Therefore, the embodiment guarantees reliable transmission of the data stream and reduces network congestion without a closed-loop control loop, and reduces implementation complexity compared with the prior art.
- the application scenario of the embodiments of the present invention has been described by taking the data center network as an example.
- the embodiments of the present invention can also be applied to terminal-cloud communication scenarios with multiple physical paths, for example using WiFi or Long Term Evolution (LTE), which is not limited in the embodiments of the present invention.
- the sequence numbers of the foregoing processes do not imply an order of execution; the execution order of the processes should be determined by their functions and internal logic, and does not constitute any limitation on the implementation of the embodiments of the present application.
- the disclosed systems, devices, and methods may be implemented in other manners.
- the device embodiments described above are merely illustrative.
- the division into units is only a division by logical function; in actual implementation there may be other divisions. For example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not performed.
- the mutual coupling or direct coupling or communication connection shown or discussed may be an indirect coupling or communication connection through some interface, device or unit, and may be in an electrical, mechanical or other form.
- the units described as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units, that is, may be located in one place, or may be distributed to multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of the embodiment.
- each functional unit in each embodiment of the present application may be integrated into one processing unit, or each unit may exist physically separately, or two or more units may be integrated into one unit.
- the functions may be stored in a computer readable storage medium if implemented in the form of a software functional unit and sold or used as a standalone product.
- the part of the technical solution of the present application that is essential or that contributes to the prior art, or a part of the technical solution, may be embodied in the form of a software product stored in a storage medium, including several instructions.
- the instructions cause a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or part of the steps of the methods described in the various embodiments of the present application.
- the foregoing storage medium includes any medium that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.
Abstract
The present application provides a method and device for transmitting a data stream. The method transmits the data stream between a source node and a destination node via intermediate nodes, where the data stream includes a first data stream encoded as a fountain code. The method includes: a first intermediate node receives a coded packet sent by the source node or by at least one second intermediate node, the coded packet being obtained by encoding an original packet of the first data stream using a fountain code technique; when the occupancy of a first cache queue exceeds a threshold, the first intermediate node discards the coded packet, the first cache queue being a cache queue allocated by the first intermediate node to the first data stream, and the threshold indicating the maximum occupancy allowed for the first cache queue. The present invention can therefore guarantee reliable transmission of the data stream and reduce network congestion without a closed-loop control loop, and can reduce implementation complexity compared with the prior art.
Description
This application claims priority to Chinese Patent Application No. 201610404532.0, filed with the Chinese Patent Office on June 7, 2016 and entitled "Method and Device for Transmitting a Data Stream", which is incorporated herein by reference in its entirety.
The embodiments of the present invention relate to the field of communications, and more specifically, to a method and device for transmitting a data stream.
A multipath transmission system is a communication system with multiple paths. For example, a data center network (DCN) is a typical multipath transmission system: it connects a large number of servers into a network with multiple transmission paths, integrating computing, storage, and networking. Data streams of different kinds of services are transmitted in a multipath transmission system. Some services, such as commercial applications or financial transactions (notably high-frequency trading), have a pressing need for low-latency reliable transmission. Low-latency reliable transmission of data streams in a multipath transmission system is therefore critical.
Network congestion is a major factor affecting low-latency reliable transmission: congestion causes packet loss, which impairs reliability, and also increases transmission delay.
The prior art typically relies on retransmission mechanisms and congestion avoidance techniques to guarantee low-latency reliable transmission of data streams. A retransmission mechanism requests that the sender of a data stream retransmit lost packets after packet loss occurs, to ensure reliability. Specifically, packet loss on multiple paths is monitored and fed back to the sender to trigger retransmission. The main idea of congestion avoidance is to select the least congested of the multiple paths to transmit the data stream, reducing delay. Specifically, congestion on multiple paths is monitored and fed back to the sender to trigger corresponding scheduling.
However, because existing retransmission mechanisms and congestion avoidance techniques both require a closed-loop control loop to monitor congestion on each path, they are complex to implement, and the feedback control of the closed loop consumes additional network bandwidth.
Summary of the Invention
The present application provides a method and device for transmitting a data stream, which can guarantee reliable transmission of the data stream and reduce network congestion without a closed-loop control loop, and can reduce implementation complexity compared with the prior art.
According to a first aspect, a method for transmitting a data stream is provided. The method transmits the data stream between a source node and a destination node via intermediate nodes, where the data stream includes a first data stream encoded as a fountain code. The method includes: a first intermediate node receives a coded packet sent by the source node or by at least one second intermediate node, the coded packet being obtained by encoding an original packet of the first data stream using a fountain code technique; when the occupancy of a first cache queue exceeds a threshold, the first intermediate node discards the coded packet, the first cache queue being a cache queue allocated by the first intermediate node to the first data stream, and the threshold indicating the maximum occupancy allowed for the first cache queue.
It should be understood that, for example, the first data stream is transmitted along a transmission path: it enters at the source node of the path, passes through one or more intermediate nodes on the path, and reaches the destination node of the path. The first intermediate node in the technical solution of this application may denote one intermediate node or multiple intermediate nodes on the transmission path of the first data stream. It should also be understood that, if the source node on the transmission path of the first data stream is regarded as the sending device, the intermediate nodes as forwarding devices, and the destination node as the receiving device, the first intermediate node in the technical solution of this application may also be called a forwarding device. Specifically, the first intermediate node may be a network device with a data forwarding function, such as a switch or a router.
In the technical solution of this application, when the occupancy of the first cache queue allocated to the first data stream exceeds a threshold, the currently received coded packet of the first data stream is discarded, where the threshold is the maximum occupancy allowed for the first cache queue. Actively dropping packets of the first data stream can reduce network congestion to some extent. The first data stream is encoded as a fountain code, and fountain-code-based transmission guarantees reliable delivery without retransmission; therefore, actively dropping packets of the first data stream causes no throughput loss, and reliable transmission of the first data stream is still guaranteed. The technical solution of this application uses no closed-loop control loop, avoiding the extra network bandwidth consumed by the feedback control of existing methods. Therefore, the technical solution of this application guarantees reliable transmission of the data stream and reduces network congestion without a closed-loop control loop, and reduces implementation complexity compared with the prior art.
Specifically, the coded packet carries an identifier indicating the first data stream. The first intermediate node can determine from this identifier that the coded packet belongs to the first data stream, and then decide whether to discard or buffer the coded packet by checking whether the occupancy of the first cache queue exceeds the threshold.
With reference to the first aspect, in a first possible implementation of the first aspect, the method for transmitting a data stream is applied in a multipath transmission system that includes multiple transmission paths used to transmit the data stream, and the first intermediate node denotes every intermediate node on every one of the multiple transmission paths.
In the technical solution of this application, transmitting the first data stream over multiple transmission paths can effectively improve its transmission efficiency. Each intermediate node on the multiple transmission paths allocates a first cache queue to the first data stream and, when the occupancy of that queue exceeds the threshold, actively discards the currently received coded packet of the first data stream. This effectively reduces network congestion on every one of the multiple transmission paths and thus lowers the transmission delay of the first data stream. The first data stream is encoded as a fountain code, which guarantees its reliable transmission. Therefore, the technical solution of this application guarantees reliable transmission of the data stream while effectively reducing network congestion to lower transmission delay, and can thus satisfy the low-latency reliable transmission of data streams in a multipath transmission system. Moreover, the technical solution of this application needs no closed-loop control loop, reducing implementation complexity compared with the prior art.
With reference to the first aspect or its first possible implementation, in a second possible implementation of the first aspect, the method further includes: when the occupancy of the first cache queue does not exceed the threshold, the first intermediate node stores the coded packet into the first cache queue; and the first intermediate node sends the coded packet buffered in the first cache queue to the destination node.
With reference to the first aspect or its first or second possible implementation, in a third possible implementation of the first aspect, the method further includes: the first intermediate node receives an indication message, sent by the destination node when all original packets of the first data stream have been decoded from the received coded packets, where the indication message instructs the source node to stop sending the first data stream and is 1 bit in size; and the first intermediate node sends the indication message to the source node.
In the technical solution of this application, when the destination node has decoded all original packets of the first data stream, a 1-bit indication message notifies the source node to stop sending the first data stream, preventing the source node from sending unnecessary data into the network. Compared with the ACK-based feedback of the prior art, feeding back a 1-bit indication message effectively reduces the network bandwidth consumed.
With reference to the third possible implementation of the first aspect, in a fourth possible implementation of the first aspect, the indication message is further used to indicate that the first data stream is to be discarded, and the method further includes: the first intermediate node discards, according to the indication message, the coded packets of the first data stream buffered in the first cache queue.
In the technical solution of this application, when the destination node has decoded all original packets of the first data stream, the coded packets of the first data stream still present in the network are actively discarded, avoiding useless transmission and helping reduce network congestion.
With reference to the first aspect or any one of its first to fourth possible implementations, in a fifth possible implementation of the first aspect, the first data stream may denote every service data stream in the multipath transmission system.
Therefore, the technical solution of this application can satisfy the need for low-latency reliable transmission of all data streams transmitted in the multipath transmission system.
With reference to the first aspect or any one of its first to fourth possible implementations, in a sixth possible implementation of the first aspect, the data stream further includes a second data stream that is not fountain-coded, and the method further includes: the first intermediate node receives a packet of the second data stream; the first intermediate node stores the packet of the second data stream into a second cache queue, the second cache queue being a cache queue allocated by the first intermediate node to the second data stream; and the first intermediate node sends the packet of the second data stream buffered in the second cache queue to the destination node.
Many different kinds of services can be deployed in a multipath transmission system, corresponding to a wide variety of data streams transmitted in the system. Among the deployed services, some, such as commercial applications or financial transactions (notably high-frequency trading), have strict end-to-end delay requirements, so their data streams urgently need low-latency reliable transmission. According to the different service requirements, data streams in a multipath transmission system are divided into high-priority flows (for example, delay-sensitive flows) and low-priority flows. High-priority flows have a pressing need for low-latency reliable transmission. An existing solution to this problem is flow prioritization, whose main idea is that forwarding devices always process the high-priority flows in a shared cache queue first, to guarantee their transmission performance. However, flow prioritization may starve the low-priority flows.
In the fourth possible implementation of this application, the high-priority flows (for example, delay-sensitive flows) in the multipath transmission system may be handled as the first data stream, and the low-priority flows as the second data stream. The first forwarding device allocates a first cache queue to the first data stream and a second cache queue to the second data stream; the first cache queue is used only to buffer the first data stream, and the second cache queue to buffer the second data stream. In other words, the first forwarding device buffers the first and second data streams separately. While fountain coding and active packet dropping are applied to the first data stream to achieve its low-latency reliable transmission, the impact on the second data stream is largely avoided, so the starvation of low-priority flows caused by existing flow prioritization techniques should not occur. Therefore, compared with existing flow prioritization, the technical solution of this application achieves low-latency reliable transmission of the high-priority flow (corresponding to the first data stream) while avoiding starvation of the low-priority flow (corresponding to the second data stream), preserving fairness among data streams.
According to a second aspect, a method for receiving a data stream is provided. The method transmits the data stream between a source node and a destination node via intermediate nodes, and includes: the destination node receives, via intermediate nodes, coded packets of a first data stream sent by the source node, the coded packets being obtained by encoding original packets of the first data stream using a fountain code technique; the destination node decodes the coded packets to obtain the corresponding original packets; and when all original packets of the first data stream have been decoded, the destination node sends to the source node an indication message instructing the source node to stop sending the first data stream, the indication message being 1 bit in size.
In the technical solution of this application, when the destination node has decoded all original packets of the first data stream, a 1-bit indication message notifies the source node to stop sending the first data stream, preventing the source node from sending unnecessary data into the network. Compared with the ACK-based feedback of the prior art, feeding back a 1-bit indication message effectively reduces the network bandwidth consumed.
With reference to the second aspect, in a first possible implementation of the second aspect, the method further includes: if the destination node again receives a coded packet of the first data stream within a preset duration after sending the indication message, it continues to send the indication message to the source node, until no coded packet of the first data stream is received within the preset duration after sending the indication message.
The technical solution of this application ensures that the indication message successfully reaches the source node, so that the source node stops sending coded packets of the first data stream.
With reference to the second aspect or its first possible implementation, in a second possible implementation of the second aspect, the method further includes: if the destination node again receives a coded packet of the first data stream within the preset duration after sending the indication message, it discards the currently received coded packet.
With reference to the second aspect, the destination node sends the indication message to the source node based on the User Datagram Protocol (UDP).
According to a third aspect, a network device is provided, configured to transmit a data stream between a source node and a destination node, where the data stream includes a first data stream encoded as a fountain code. The network device is configured to perform the method of the first aspect or any possible implementation thereof. Specifically, the network device may include modules for performing the method of the first aspect or any possible implementation thereof, and corresponds to the first intermediate node in that method.
According to a fourth aspect, a network device is provided, configured to transmit a data stream between a source node and a destination node, where the data stream includes a first data stream encoded as a fountain code. The network device includes a memory for storing instructions and a processor for executing the instructions stored in the memory, and execution of those instructions causes the processor to perform the method of the first aspect or any possible implementation thereof.
According to a fifth aspect, a multipath transmission system is provided, including a transmitting device, a receiving device, and a network device, with multiple paths between the transmitting device and the receiving device. The network device is a forwarding device on the multiple paths and corresponds to the network device of the third or fourth aspect, and also to the first intermediate node in the method of the first aspect or any possible implementation thereof; the transmitting device corresponds to the source node in that method, and the receiving device to the destination node.
In each of the foregoing implementations, the first data stream may be a delay-sensitive flow. Specifically, for example, the first data stream is a short flow in a data center network (DCN) with strict transmission delay requirements.
In each of the foregoing implementations, the occupancy of the first cache queue is expressed in any one of the following forms: occupied space, percentage of space occupied, or proportion of space occupied. The threshold indicates the maximum occupancy allowed for the first cache queue. Specifically, for example, the overall cache space of the first forwarding device is 10 MB, with 5 MB configured as the storage space of the first cache queue and 5 MB as that of the second cache queue. If the occupancy of the first cache queue is expressed as occupied space, its threshold is configured as 4 MB; if as a percentage, the threshold is configured as 80%; if as a proportion, the threshold is configured as 0.8.
Based on the foregoing technical solutions, when the occupancy of the first cache queue allocated to the first data stream exceeds a threshold, the currently received coded packet of the first data stream is discarded, where the threshold is the maximum occupancy allowed for the first cache queue. Actively dropping packets of the first data stream can reduce network congestion to some extent. The first data stream is encoded as a fountain code, and fountain-code-based transmission guarantees reliable delivery without retransmission; therefore, actively dropping packets of the first data stream causes no throughput loss, and reliable transmission of the first data stream is still guaranteed. The technical solution of this application uses no closed-loop control loop, avoiding the extra network bandwidth consumed by the feedback control of existing methods. Therefore, the technical solution of this application guarantees reliable transmission of the data stream and reduces network congestion without a closed-loop control loop, and reduces implementation complexity compared with the prior art.
FIG. 1 is a schematic diagram of an application scenario of an embodiment of the present invention.
FIG. 2 is a schematic diagram of a method for transmitting a data stream according to an embodiment of the present invention.
FIG. 3 is a schematic flowchart of a method for transmitting a data stream according to an embodiment of the present invention.
FIG. 4 is a schematic block diagram of a network device according to an embodiment of the present invention.
FIG. 5 is another schematic block diagram of a network device according to an embodiment of the present invention.
FIG. 6 is a schematic block diagram of a multipath transmission system according to an embodiment of the present invention.
The embodiments of the present invention are described below with reference to the accompanying drawings.
The application scenario of the embodiments of the present invention is a multipath transmission system; a data center network is a typical multipath transmission system. Specifically, FIG. 1 shows one concrete application scenario of an embodiment of the present invention: the leaf-spine architecture of a data center network (DCN). As shown in FIG. 1, the leaf-spine architecture consists of servers and multiple tiers of switches/routers (the core, aggregation, and edge tiers shown in FIG. 1). Taking switches as an example, the leaf-spine architecture includes core switches, aggregation switches, edge switches, and servers. Core switches connect to aggregation switches, and different core switches also interconnect; aggregation switches connect to both core switches and edge switches, different aggregation switches also interconnect, and aggregation switches are called spine switches; edge switches connect to both aggregation switches and servers, and are called leaf switches. It should be understood that a server accesses the network by connecting to an edge switch and can thereby establish communication with other servers in the network. As FIG. 1 shows, multiple transmission paths exist between any two different servers in the leaf-spine architecture, providing many path options so that traffic can be spread across multiple paths. It should be understood that the servers in FIG. 1 may also be called hosts.
In a DCN there is east-west traffic and north-south traffic: traffic inside a DCN is mainly east-west, traffic between different DCNs is mainly north-south, and east-west traffic dominates, accounting for roughly 67% of total DCN traffic. East-west traffic is further divided into short flows and long flows, where short flows are generally tens of kilobytes long. Short flows have strict end-to-end delay requirements. Taking high-frequency trading as an example, the round trip time (RTT) of a high-frequency trading message must complete within 30 milliseconds; beyond that, the message expires, causing trading losses. Low-latency reliable transmission of short flows is therefore a pressing technical problem inside DCNs.
The prior art typically relies on retransmission mechanisms and congestion avoidance techniques to guarantee low-latency reliable transmission of data streams. Because both require a closed-loop control loop to monitor congestion on each path, they are complex to implement, and the feedback control of the closed loop consumes additional network bandwidth.
To address this technical problem, the embodiments of the present invention propose a method and device for transmitting a data stream that can guarantee reliable transmission of the data stream and reduce network congestion without a closed-loop control loop, and can reduce implementation complexity compared with the prior art.
FIG. 2 is a schematic diagram of a method for transmitting a data stream according to an embodiment of the present invention. As shown in FIG. 2, n transmission paths exist between the source node and the destination node (paths 1, 2, …, n in FIG. 2), and each transmission path includes at least one intermediate node, such as the first, second, and third intermediate nodes on path 1 in FIG. 2. Specifically, for example, the source node corresponds to one server in the architecture of FIG. 1 and the destination node to another server, with multiple transmission paths between the two servers. Correspondingly, the intermediate nodes on the n paths in FIG. 2 may be switches, routers, or servers, for example some of the switches or routers in the architecture of FIG. 1.
The source node uses these n transmission paths to transmit a data block A, encoded as a fountain code, to the destination node. Specifically, the source node divides data block A into k packets and then encodes these k packets using a fountain code technique to obtain coded data. For ease of distinction and description, a packet obtained by dividing a data block is referred to herein as an original packet, and coded data obtained by fountain-coding original packets as a coded packet. As shown in FIG. 2, dividing data block A yields k original packets, and fountain-coding these k original packets yields multiple coded packets (due to drawing limitations, only n coded packets are shown in FIG. 2). For example, encoding original packets 1 and 2 yields the first coded packet shown in FIG. 2, encoding original packet 2 yields the second coded packet shown in FIG. 2, encoding original packets 1 and k yields the third coded packet shown in FIG. 2, and so on. The source node sends the resulting coded packets to the destination node over the n paths.
The destination node receives the coded packets sent by the source node over the n paths and decodes them using the fountain code technique to obtain the corresponding original packets. When all k original packets have been decoded, the destination node obtains data block A, completing the transmission of data block A from the source node to the destination node.
The data stream corresponding to data block A in FIG. 2 is denoted the first data stream; the intermediate nodes on the n transmission paths transmit the first data stream, in other words, forward the coded packets of data block A, ultimately delivering data block A to the destination node. Taking the first intermediate node in FIG. 2 as an example, the first intermediate node allocates a first cache queue to the first data stream, as shown in the enlarged view of path 1 in FIG. 2; the first cache queue is dedicated to buffering the first data stream. For example, the first intermediate node receives a coded packet sent by the second intermediate node, determines that the coded packet belongs to the first data stream, and then checks whether the occupancy of the first cache queue exceeds a threshold, the threshold being the maximum occupancy allowed for the first cache queue. If the occupancy exceeds the threshold, the coded packet is discarded; otherwise, it is buffered in the first cache queue, and coded packets buffered in the first cache queue are subsequently sent to the third intermediate node. The third intermediate node then forwards the received coded packets to the next-hop intermediate node, and so on, until the coded packets reach the destination node.
It should be understood that the source node can be regarded as the sending device that first emits the first data stream, the intermediate nodes as forwarding devices that forward the first data stream, and the destination node as the receiving device that finally receives the first data stream without forwarding it further. Specifically, an intermediate node in the embodiments of the present invention may be a network device with a data forwarding function, such as a switch or a router.
It should be noted that, in the embodiments of the present invention, the source node may be a server or a terminal device (such as a personal computer or a handheld terminal), the destination node may be a server or a terminal device (such as a personal computer or a handheld terminal), and an intermediate node may be a server, a switch, a router, or a terminal device with a forwarding function (such as a personal computer or a handheld terminal).
The embodiments of the present invention employ fountain codes. With a fountain code, the sender encodes randomly, generating any number of coded packets from k original packets, and keeps sending coded packets without knowing whether they have been successfully received. As long as the receiver receives any subset of k(1+e) coded packets, it can decode and recover all original packets with high probability (related to e).
Fountain codes include random linear fountain codes, LT (Luby Transform) codes, and Raptor codes. The LT code was the first fountain code scheme with practical performance. Its encoding and decoding work as follows: the sender randomly selects d original packets from the k original packets according to a degree (d) distribution, XORs the selected d original packets to obtain a coded packet, and sends the coded packet to the receiver. After receiving only n coded packets (n greater than k), the receiver can decode the k original packets with probability no less than (1-e), where e is the receiver's probability of failing to recover from the coded packets. e decreases as n increases; as n tends to infinity (i.e., the receiver receives infinitely many coded packets), e tends to 0. A well-designed degree distribution is the key to LT code performance. According to the theoretical analysis of LT encoding and decoding, when the input data volume exceeds 10^4, 5% redundancy suffices to guarantee a high decoding success rate. The source node randomly spreads all original packets of a data block across the coded packets according to the chosen encoding algorithm and continuously "sprays" coded packets toward the destination node like a fountain, without needing to know whether they are successfully received by the destination node; as long as the destination node receives enough coded packets (more than the number of original packets), it can decode all original packets and recover the data block. Experimental data show that when the number of coded packets received by the destination node is 1.704 times (on average) the number of original packets, the destination node can decode all original packets. It should be understood that this factor depends on k, d, and the congestion level of the network paths. It should be understood that, if packets are lost while a fountain code is being sent, there is no need to feed the reception status back to the source node, i.e., no need to ask the source node to retransmit dropped packets. It should be noted that, for a fountain code, when the destination node has decoded all original packets, it must feed the reception status back to the source node to instruct it to stop sending coded packets.
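The LT encoding and decoding described above (XOR of d randomly chosen original packets; peeling of degree-1 coded packets at the receiver) can be sketched as follows. This is only an illustration: the function names are invented, and the uniform degree draw is a placeholder for the robust soliton distribution a real LT code would use.

```python
import random
from functools import reduce

def xor(a: bytes, b: bytes) -> bytes:
    """Bytewise XOR of two equal-length packets."""
    return bytes(x ^ y for x, y in zip(a, b))

def lt_encode_one(original, rng):
    """One coded packet: XOR of d randomly chosen original packets.
    A uniform draw of d is used here purely for illustration."""
    d = rng.randint(1, len(original))
    idx = set(rng.sample(range(len(original)), d))
    payload = reduce(xor, (original[i] for i in idx))
    return idx, payload

def lt_decode(k, coded):
    """Peeling decoder: strip already-recovered originals out of each coded
    packet and harvest any packet whose remaining degree drops to 1."""
    pending = [[set(idx), payload] for idx, payload in coded]
    recovered = {}
    changed = True
    while changed and len(recovered) < k:
        changed = False
        for entry in pending:
            idx, payload = entry
            for j in [j for j in idx if j in recovered]:
                payload = xor(payload, recovered[j])
                idx.discard(j)
            entry[1] = payload
            if len(idx) == 1:
                j = next(iter(idx))
                recovered[j] = payload
                idx.clear()
                changed = True
    # a None entry means more coded packets are still needed
    return [recovered.get(i) for i in range(k)]
```

The decoder returns the k original packets once enough coded packets have arrived, matching the "spray until the receiver has enough" behavior described above.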
The embodiments of the present invention process the first data stream with a fountain code technique, which effectively guarantees reliable transmission of the first data stream. Moreover, it should be understood that, unlike a traditional retransmission mechanism, a fountain code needs no feedback channel, only the forward link, thus avoiding the bandwidth consumed by the feedback loop of traditional retransmission. Therefore, compared with existing retransmission mechanisms, the embodiments of the present invention not only guarantee reliable data transmission but also help reduce network congestion to some extent.
In the embodiments of the present invention, when the occupancy of the first cache queue allocated to the first data stream exceeds a threshold, the currently received coded packet of the first data stream is discarded, where the threshold is the maximum occupancy allowed for the first cache queue. Actively dropping packets of the first data stream can reduce network congestion to some extent. The first data stream is encoded as a fountain code, and fountain-code-based transmission guarantees reliable delivery without retransmission; therefore, actively dropping packets of the first data stream causes no throughput loss, and reliable transmission of the first data stream is still guaranteed. The embodiments of the present invention use no closed-loop control loop, avoiding the extra network bandwidth consumed by the feedback control of existing methods. Therefore, the embodiments guarantee reliable transmission of the data stream and reduce network congestion without a closed-loop control loop, and reduce implementation complexity compared with the prior art. The first data stream in the embodiments of the present invention may be a delay-sensitive flow, for example a short flow with strict delay requirements in a data center network.
Optionally, in the embodiments of the present invention, the method for transmitting a data stream is applied in a multipath transmission system that includes multiple transmission paths used to transmit the data stream, and the first intermediate node denotes every intermediate node on the multiple transmission paths.
Specifically, for example in the scenario of FIG. 2, every intermediate node on each of the n paths has the structure and functions of the first intermediate node shown in FIG. 2: each intermediate node on the n paths allocates a first cache queue to the first data stream and, when the occupancy of that queue exceeds the threshold, actively discards the currently received coded packet of the first data stream. This effectively reduces network congestion on each of the multiple transmission paths and thus lowers the transmission delay of the first data stream.
It should be understood that a multipath transmission system may include multiple source nodes and multiple destination nodes, with the correspondence between them determined by the network topology of the specific scenario. By way of example and not limitation, the embodiments of the present invention are described using only the one source node and one destination node shown in FIG. 2.
Optionally, in the embodiments of the present invention, the first data stream may denote every service data stream in the multipath transmission system.
Specifically, the data stream of every service in the multipath transmission system is handled in the same way as the first data stream, so the need for low-latency reliable transmission of all data streams transmitted in the multipath transmission system can be satisfied.
Many different kinds of services can be deployed in a multipath transmission system, corresponding to a wide variety of data streams transmitted in the system. Among the deployed services, some have strict end-to-end delay requirements, so their data streams urgently need low-latency reliable transmission. According to the different service requirements, data streams in a multipath transmission system are divided into high-priority flows (for example, delay-sensitive flows) and low-priority flows. High-priority flows have a pressing need for low-latency reliable transmission. An existing solution to this problem is flow prioritization, whose main idea is to always process the high-priority flows in a shared cache queue first, to guarantee their transmission performance. However, flow prioritization may starve the low-priority flows.
Optionally, in the embodiments of the present invention, as shown in FIG. 2, the first intermediate node allocates a second cache queue to a second data stream, the second data stream being a data stream not processed with fountain codes. The first intermediate node receives packets of the second data stream sent by the previous-hop network node (the third forwarding node shown in FIG. 2), stores the packets of the second data stream in the second cache queue, and sends the packets of the second data stream buffered in the second cache queue to the next-hop network node (the second forwarding node shown in FIG. 2).
Unlike in the conventional art, in the first intermediate node of the embodiments of the present invention the first data stream and the second data stream no longer share one cache queue: the first intermediate node allocates a first cache queue to the first data stream and a second cache queue to the second data stream, buffering received packets of the first data stream in the first cache queue and received packets of the second data stream in the second cache queue. It should be noted that the first and second cache queues are different queues, but they share one physical cache space.
Specifically, in the embodiments of the present invention, the first data stream is, for example, a high-priority flow in the multipath transmission system, and the second data stream a low-priority flow. More specifically, the first data stream is a short flow in a data center network, and the second data stream a long flow in the data center network.
In the embodiments of the present invention, the first intermediate node allocates a first cache queue to the first data stream and a second cache queue to the second data stream; the first cache queue is used only to buffer the first data stream, and the second cache queue to buffer the second data stream. In other words, the first intermediate node buffers the two streams separately. While fountain coding and active packet dropping are applied to the first data stream to achieve its low-latency reliable transmission, the impact on the second data stream is largely avoided, so the starvation of low-priority flows caused by existing flow prioritization techniques should not occur. Therefore, compared with existing flow prioritization, the embodiments of the present invention achieve low-latency reliable transmission of the high-priority flow (corresponding to the first data stream) while avoiding starvation of the low-priority flow (corresponding to the second data stream), preserving fairness among data streams.
FIG. 3 is a schematic flowchart of a method 100 for transmitting a data stream according to an embodiment of the present invention. The source node, destination node, and first intermediate node in FIG. 3 correspond to those in FIG. 2. The method 100 includes:
S110: The source node divides a data block of the first data stream to be sent (corresponding to data block A on the source node side in FIG. 2) into k original packets, and then encodes the k original packets using a fountain code technique to obtain m coded packets, m greater than k. It should be understood that, for ease of drawing and understanding, FIG. 2 schematically shows only the n coded packets about to enter the n paths. So that intermediate nodes on the paths can identify the first data stream, the source node marks every coded packet of the first data stream with an identifier indicating the first data stream. Specifically, the source node tags each of the m coded packets with a fixed service-flow label, from which intermediate nodes on the paths can recognize the first data stream.
S120: The source node sends the coded packets carrying the identifier of the first data stream to the destination node over multiple paths (corresponding to the n paths shown in FIG. 2).
S130: The first intermediate node on the multiple paths receives a coded packet sent by the previous-hop network node and determines, from the identifier carried in the coded packet, that the coded packet belongs to the first data stream.
It should be understood that the previous-hop network node may be the source node or the previous-hop intermediate node on the path where the first intermediate node is located. For example, in the example scenario shown in FIG. 2, the previous-hop network node corresponds to the third intermediate node.
S140: The first intermediate node checks whether the occupancy of the first cache queue allocated to the first data stream (corresponding to the first cache queue shown in FIG. 2) exceeds the threshold; if yes, go to S150, otherwise go to S160.
S150: The first intermediate node determines that the occupancy of the first cache queue exceeds the threshold and discards the coded packet.
S160: The first intermediate node determines that the occupancy of the first cache queue does not exceed the threshold and stores the coded packet in the first cache queue.
S170: The first intermediate node sends the coded packets in the first cache queue to the destination node.
It should be understood that, if the first intermediate node is physically directly connected to the destination node, it can send the coded packet to the destination node directly; otherwise, it sends the coded packet to the destination node indirectly via other forwarding nodes on its path.
S180: The destination node receives the coded packets sent by the source node over the multiple paths (corresponding to the n paths shown in FIG. 2), decodes the received coded packets using fountain code decoding, and checks whether all original packets of the first data stream have been decoded, for example the k original packets of the data block on the source node side in FIG. 2; if yes, go to S190, otherwise go to S170.
S190: When the destination node determines that all original packets of the first data stream have been decoded, it sends to the source node an indication message instructing it to stop sending the first data stream. It should be understood that, upon receiving the indication message, the first intermediate node sends the indication message on to the source node.
In the embodiments of the present invention, when the occupancy of the first cache queue allocated to the first data stream exceeds a threshold, the currently received coded packet of the first data stream is discarded, where the threshold is the maximum occupancy allowed for the first cache queue. Actively dropping packets of the first data stream can reduce network congestion to some extent. The first data stream is encoded as a fountain code, and fountain-code-based transmission guarantees reliable delivery without retransmission; therefore, actively dropping packets of the first data stream causes no throughput loss, and reliable transmission of the first data stream is still guaranteed. The embodiments of the present invention use no closed-loop control loop, avoiding the extra network bandwidth consumed by the feedback control of existing methods. Therefore, the embodiments guarantee reliable transmission of the data stream and reduce network congestion without a closed-loop control loop, and reduce implementation complexity compared with the prior art.
Specifically, in S110, the action of encoding the k original packets using the fountain code technique can be performed by an encoder on the source node side. Specifically, the encoder's central processing unit (CPU) may be single-core, outputting coded packets serially, or multi-core, outputting coded packets in parallel. In the future, network interface cards (NICs) based on field-programmable gate arrays (FPGAs) may implement parallel encoding and decoding in hardware. It should be understood that the encoder may be a functional module within the source node or an encoder device independent of the source node.
In the embodiments of the present invention, the first data stream may specifically be encoded with LT (Luby Transform) coding, a practical fountain code scheme. Other fountain code schemes may also be used, which is not limited in the embodiments of the present invention.
Specifically, in S120, the source node uses an even-distribution round-robin scheme to continuously send coded packets of the first data stream to the destination node over the multiple paths, where the round robin imposes no strict correspondence of content or order. As shown in FIG. 2, the n paths between the source node and the destination node are recorded as a path list. The source node assigns the first coded packet to the first path in the list (path 1 in FIG. 2) for sending, the second coded packet to the second path (path 2 in FIG. 2), the third coded packet to the third path (path 3 in FIG. 2), and so on; the n-th coded packet is assigned to the n-th path (path n in FIG. 2), reaching the bottom of the path list. Subsequently produced coded packets are assigned again from the top of the list: for example, the (n+1)-th coded packet is assigned to the first path (path 1 in FIG. 2), the (n+2)-th to the second path (path 2 in FIG. 2), and so on.
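The round-robin assignment just described, wrapping from the bottom of the path list back to the top, can be sketched as follows (the function name is illustrative):

```python
def round_robin_schedule(num_packets: int, paths: list) -> list:
    """Assign coded packet i to path (i mod n), so packets cycle through
    the path list and wrap to the top after the n-th packet."""
    return [paths[i % len(paths)] for i in range(num_packets)]
```

For n = 3 paths, packets 1..3 go to paths 1..3 and packet 4 wraps back to path 1.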
Specifically, in step S120, a scheduler on the source node side can assign the coded packets to the corresponding paths, and a sender on the source node side transmits them. The scheduler and sender are functional modules inside the source node.
In step S140, the first intermediate node checks whether the occupancy of the first cache queue allocated to the first data stream exceeds the threshold. In the embodiments of the present invention, the occupancy of the first cache queue is expressed in any one of the following forms: occupied space, percentage of space occupied, or proportion of space occupied. The threshold indicates the maximum occupancy allowed for the first cache queue. Specifically, for example, the overall cache space of the first intermediate node is 10 MB, with 5 MB configured as the storage space of the first cache queue and 5 MB as that of the second cache queue. If the occupancy of the first cache queue is expressed as occupied space, its threshold is configured as 4 MB; if as a percentage, the threshold is configured as 80%; if as a proportion, the threshold is configured as 0.8.
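The three occupancy representations and the threshold check can be sketched as follows; the 5 MB / 4 MB / 80% / 0.8 figures mirror the example above, and the function names are illustrative:

```python
def occupancy(used_mb: float, queue_size_mb: float, form: str) -> float:
    """Express the first cache queue's occupancy in one of the three forms
    named above: occupied space, percentage, or proportion."""
    if form == "size":
        return used_mb
    if form == "percent":
        return 100.0 * used_mb / queue_size_mb
    if form == "ratio":
        return used_mb / queue_size_mb
    raise ValueError(f"unknown occupancy form: {form}")

def exceeds_threshold(used_mb, queue_size_mb, threshold, form) -> bool:
    """Aggressive-drop condition: occupancy exceeds the configured maximum."""
    return occupancy(used_mb, queue_size_mb, form) > threshold
```

Whichever form is configured, the same drop decision results as long as the threshold is expressed in that form.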
In step S140, when the first intermediate node determines that the occupancy of the first cache queue exceeds the threshold, it discards the currently received coded packet. In contrast to the prior art, where packets are dropped only when an intermediate node's shared cache queue overflows, the packet dropping in the embodiments of the present invention can be called aggressive dropping.
Specifically, in S180, the action of decoding the received coded packets using fountain code decoding can be performed by a decoder on the destination node side. Specifically, the decoder may be a functional module inside the destination node or a decoder device independent of the destination node.
It should be understood that a coded packet carries information about its original packets. For example, the first coded packet shown in FIG. 2 is obtained by encoding original packets 1 and 2, so it includes information identifying original packets 1 and 2. By decoding the received coded packets, the destination node can thus obtain all original packets.
Specifically, in S190, the indication message sent by the destination node to instruct the source node to stop sending coded packets of the first data stream is 1 bit in size.
In the prior art, the reception status of a data stream is usually fed back to the source node via ACK messages, whose transmission consumes some network bandwidth. In the embodiments of the present invention, the destination node's reception status for the first data stream is fed back via a 1-bit indication message. Compared with the ACK messages of the prior art, the 1-bit indication message used in the embodiments of the present invention effectively reduces the network bandwidth consumed, which helps reduce network congestion.
Optionally, in the embodiments of the present invention, in S190, if the destination node again receives a coded packet of the first data stream from the source node within a preset duration after sending the indication message, it sends the indication message to the source node again, until no coded packet of the first data stream is received within the preset duration after sending the indication message.
Specifically, the destination node can send the indication message to the source node over multiple paths, which raises the probability that the indication message successfully reaches the source node, so that the source node receives it as early as possible and stops sending the first data stream, avoiding wasting network transmission resources on useless data.
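The destination node's resend behavior can be sketched as follows. The link interface here (a receive call that returns None when a whole window passes quietly) is an assumption for illustration, not part of the patent:

```python
def send_stop_until_quiet(send_stop, recv_coded_packet, preset_window):
    """Send the 1-bit STOP, then resend it each time a coded packet of the
    finished stream still arrives within the preset window; stop resending
    once a whole window passes with no such packet."""
    send_stop()
    while True:
        pkt = recv_coded_packet(timeout=preset_window)
        if pkt is None:      # quiet for the whole window: source has stopped
            return
        send_stop()          # stream still arriving: repeat the STOP signal

class FakeLink:
    """Test double: a fixed arrival sequence and a STOP counter."""
    def __init__(self, arrivals):
        self.arrivals = list(arrivals)
        self.stops = 0
    def send_stop(self):
        self.stops += 1
    def recv(self, timeout):
        return self.arrivals.pop(0) if self.arrivals else None
```

With two late coded packets still in flight, the destination ends up sending the STOP signal three times: the initial one plus one per late packet.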
Optionally, in the embodiments of the present invention, the indication message sent by the destination node is further used to indicate that the first data stream is to be discarded. In S190, the first forwarding node receives the indication message and, according to it, discards the coded packets of the first data stream buffered in the first cache queue.
It should be understood that, when the destination node has decoded all original packets of the first data stream, actively discarding the coded packets of the first data stream still present in the network avoids useless transmission and helps reduce network congestion.
It should be understood that the source node divides a message to be sent into equal-length data blocks, each block into several equal-length packets (to distinguish them from coded packets, these are recorded as original packets), encodes the original packets using fountain coding to form coded packets, and then sends the coded packets over multiple paths. For example, let L denote the length of a message block in bytes and r the total rate in bps. If n available paths are used to transmit a message of L bytes, and the n paths transmit the data stream at rates r_1, r_2, …, r_n respectively, then the total rate at which the L-byte data block is transmitted over the multiple paths is r = r_1 + r_2 + … + r_n, where r_i is the rate of the i-th path. Therefore, in the embodiments of the present invention, the source node sends coded packets to the destination node over multiple paths, making sound use of the wide aggregate bandwidth of the multiple paths and effectively increasing the data transmission rate.
Optionally, in the embodiments of the present invention, transmission of the first data stream may be based on the User Datagram Protocol (UDP). Specifically, the source node's sending of coded packets of the first data stream, the intermediate nodes' forwarding of the coded packets, and the destination node's reception of the coded packets are all based on UDP. In addition, the destination node may also send the indication message instructing the source node to stop sending the first data stream based on UDP.
It should be understood that, in the embodiments of the present invention, after the destination node determines that all original packets of the current data block have been decoded, if it again receives a coded packet of the same data block, it discards the packet and again sends the source node the indication message instructing it to stop sending coded packets of the current data block.
It should also be understood that the indication message sent by the destination node to the source node may be dropped in transit and fail to reach the source node. In the embodiments of the present invention, after sending the indication message, if the destination node receives a coded packet of the same data block within a preset duration from the moment the indication message was sent, it resends the indication message, and stops resending once no coded packet of the same data block is received within the preset duration. The indication message of the embodiments of the present invention may also be called a "STOP" signal.
It should also be understood that, after receiving the indication message instructing it to stop sending coded packets of the first data stream, the source node stops sending coded packets of the first data stream. It can then begin sending the next data stream, for example using the method of the embodiments of the present invention.
The multipath data stream transmission method of the embodiments of the present invention may be called Cloudburst, and the first data stream it processes may be called a Cloudburst data stream.
In summary, in the embodiments of the present invention, when the occupancy of the first cache queue allocated to the first data stream exceeds a threshold, the currently received coded packet of the first data stream is discarded, where the threshold is the maximum occupancy allowed for the first cache queue. Actively dropping packets of the first data stream can reduce network congestion to some extent. The first data stream is encoded as a fountain code, and fountain-code-based transmission guarantees reliable delivery without retransmission; therefore, actively dropping packets of the first data stream causes no throughput loss, and reliable transmission of the first data stream is still guaranteed. The embodiments of the present invention use no closed-loop control loop, avoiding the extra network bandwidth consumed by the feedback control of existing methods. Therefore, the embodiments guarantee reliable transmission of the data stream and reduce network congestion without a closed-loop control loop, and reduce implementation complexity compared with the prior art.
Moreover, existing congestion avoidance techniques schedule data streams through closed-loop control: when network congestion is detected, rate limiting is applied at the network entry, for example the source node limits the sending rate of the data stream upon receiving congestion information. In the embodiments of the present invention, since no closed-loop control loop is used, the source node can keep sending coded packets of the first data stream at a fixed rate, and as long as a path is not congested, the coded packets sent by the source node can be delivered to the destination node. Furthermore, the first intermediate node on a path aggressively drops packets when the occupancy of the first cache queue exceeds the threshold, which effectively reduces network congestion, so in the embodiments of the present invention the first data stream sent by the source node can reach the destination node with a small transmission delay. Therefore, compared with existing congestion avoidance techniques, the embodiments of the present invention not only reduce network congestion without a complex control mechanism but also, to some extent, reduce data transmission delay.
It should also be understood that the examples shown in FIG. 2 and FIG. 3 are intended to help those skilled in the art better understand the embodiments of the present invention, not to limit the embodiments to these specific forms. Various equivalent modifications or variations can obviously be made from the examples of FIG. 2 and FIG. 3, and such modifications or variations also fall within the scope of the embodiments of the present invention.
FIG. 4 is a schematic block diagram of a network device 200 according to an embodiment of the present invention. The network device 200 is configured to transmit a data stream between a source node and a destination node, where the data stream includes a first data stream encoded as a fountain code. The network device 200 includes:
a receiving module 210, configured to receive a coded packet sent by the source node or an intermediate node, where the coded packet is obtained by encoding an original packet of the first data stream using a fountain code technique, and the intermediate node is located between the source node and the destination node and forwards data between them; and
a processing module 220, configured to discard the coded packet received by the receiving module when the occupancy of a first cache queue exceeds a threshold, where the first cache queue is a cache queue allocated by the network device to the first data stream, and the threshold indicates the maximum occupancy allowed for the first cache queue.
In the embodiments of the present invention, when the occupancy of the first cache queue allocated to the first data stream exceeds a threshold, the currently received coded packet of the first data stream is discarded, where the threshold is the maximum occupancy allowed for the first cache queue. Actively dropping packets of the first data stream can reduce network congestion to some extent. The first data stream is encoded as a fountain code, and fountain-code-based transmission guarantees reliable delivery without retransmission; therefore, actively dropping packets of the first data stream causes no throughput loss, and reliable transmission of the first data stream is still guaranteed. The embodiments of the present invention use no closed-loop control loop, avoiding the extra network bandwidth consumed by the feedback control of existing methods. Therefore, the embodiments guarantee reliable transmission of the data stream and reduce network congestion without a closed-loop control loop, and reduce implementation complexity compared with the prior art.
Optionally, in the embodiments of the present invention, the processing module 220 is further configured to store the coded packet received by the receiving module into the first cache queue when the occupancy of the first cache queue does not exceed the threshold;
the network device 200 further includes a first sending module, configured to send the coded packet buffered in the first cache queue to the destination node.
Optionally, in the embodiments of the present invention, the receiving module 210 is further configured to receive an indication message, sent by the destination node when all original packets of the first data stream have been decoded from the received coded packets, where the indication message instructs the source node to stop sending the first data stream and is 1 bit in size;
the network device 200 further includes a second sending module, configured to send the indication message received by the receiving module to the source node.
Optionally, in the embodiments of the present invention, the indication message is further used to indicate that the first data stream is to be discarded, and the processing module 220 is further configured to discard, according to the indication message, the coded packets of the first data stream buffered in the first cache queue.
Optionally, in the embodiments of the present invention, the data stream further includes a second data stream that is not fountain-coded, and the receiving module 210 is further configured to receive packets of the second data stream;
the processing module 220 is further configured to store the packets of the second data stream received by the receiving module into a second cache queue, the second cache queue being a cache queue allocated by the network device to the second data stream;
the network device 200 further includes a third sending module, configured to send the packets of the second data stream buffered in the second cache queue to the destination node.
It should be understood that the network device 200 according to the embodiments of the present invention may correspond to the first forwarding device in the method for transmitting a data stream of the embodiments of the present invention, and that the above and other operations and/or functions of the modules in the network device 200 implement the corresponding procedures of the methods in FIG. 2 and FIG. 3; for brevity, details are omitted here.
Specifically, the processing module 220 in the network device 200 can be implemented by a processor or a processor-related component in the network device 200; the receiving module 210 by a receiver or a receiver-related component; and the first, second, and third sending modules by a transmitter or a transmitter-related component.
As shown in FIG. 5, an embodiment of the present invention further provides a network device 300 configured to transmit a data stream between a source node and a destination node, where the data stream includes a first data stream encoded as a fountain code. The network device 300 includes a processor 310, a memory 320, a receiver 340, and a transmitter 350, which communicate via an internal communication link. The memory 320 is configured to store instructions, and the processor 310 to execute the instructions stored in the memory 320 to control the receiver 340 to receive signals and control the transmitter 350 to send signals. The receiver 340 is configured to receive a coded packet sent by the source node or an intermediate node, the coded packet being obtained by encoding an original packet of the first data stream using a fountain code technique, and the intermediate node being located between the source node and the destination node and forwarding data between them. The processor 310 is configured to discard the coded packet received by the receiver when the occupancy of a first cache queue exceeds a threshold, the first cache queue being a cache queue allocated by the network device to the first data stream, and the threshold indicating the maximum occupancy allowed for the first cache queue.
In the embodiments of the present invention, when the occupancy of the first cache queue allocated to the first data stream exceeds a threshold, the currently received coded packet of the first data stream is discarded, where the threshold is the maximum occupancy allowed for the first cache queue. Actively dropping packets of the first data stream can reduce network congestion to some extent. The first data stream is encoded as a fountain code, and fountain-code-based transmission guarantees reliable delivery without retransmission; therefore, actively dropping packets of the first data stream causes no throughput loss, and reliable transmission of the first data stream is still guaranteed. The embodiments of the present invention use no closed-loop control loop, avoiding the extra network bandwidth consumed by the feedback control of existing methods. Therefore, the embodiments guarantee reliable transmission of the data stream and reduce network congestion without a closed-loop control loop, and reduce implementation complexity compared with the prior art.
Optionally, in the embodiments of the present invention, the processor 310 is further configured to store the coded packet received by the receiver into the first cache queue when the occupancy of the first cache queue does not exceed the threshold; the transmitter 350 is configured to send the coded packet buffered in the first cache queue to the destination node.
Optionally, in the embodiments of the present invention, the receiver 340 is further configured to receive an indication message, sent by the destination node when all original packets of the first data stream have been decoded from the received coded packets, where the indication message instructs the source node to stop sending the first data stream and is 1 bit in size; the transmitter 350 is further configured to send the indication message to the source node.
Optionally, in the embodiments of the present invention, the indication message is further used to indicate that the first data stream is to be discarded, and the processor 310 is further configured to discard, according to the indication message received by the receiver 340, the coded packets of the first data stream buffered in the first cache queue.
Optionally, in the embodiments of the present invention, the data stream further includes a second data stream that is not fountain-coded;
the receiver 340 is further configured to receive packets of the second data stream;
the processor 310 is configured to store the packets of the second data stream received by the receiver into a second cache queue, the second cache queue being a cache queue allocated by the network device to the second data stream;
the transmitter 350 is further configured to send the packets of the second data stream buffered in the second cache queue to the destination node.
It should be understood that, in the embodiments of the present invention, the processor 310 may be a central processing unit ("CPU"), or another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. The general-purpose processor may be a microprocessor, or any conventional processor.
The memory 320 may include a read-only memory and a random access memory, and provides instructions and data to the processor 310. A portion of the memory 320 may also include a non-volatile random access memory. For example, the memory 320 may also store device-type information.
In implementation, each step of the foregoing method may be completed by an integrated hardware logic circuit in the processor 310 or by instructions in the form of software. The steps of the method disclosed in the embodiments of the present invention may be performed directly by a hardware processor, or by a combination of hardware and software modules in the processor. The software module may be located in a storage medium mature in the art, such as a random access memory, a flash memory, a read-only memory, a programmable read-only memory, an electrically erasable programmable memory, or a register. The storage medium is located in the memory 320; the processor 310 reads the information in the memory 320 and completes the steps of the foregoing method in combination with its hardware. To avoid repetition, details are not described here.
It should be understood that the transmitter 350 may be a hardware circuit or device implementing a sending function, such as an antenna or a network interface card; likewise, the receiver 340 may be a hardware circuit or device implementing a receiving function, such as an antenna or a network interface card. This is not limited in the embodiments of the present invention.
It should also be understood that the receiver 340 and the transmitter 350 may be implemented by one apparatus with transceiving functions, such as a transceiver, specifically, for example, an antenna.
It should be understood that the network device 300 according to the embodiments of the present invention may correspond to the first forwarding device in the method for transmitting a data stream of the embodiments of the present invention, and may correspond to the network device 200 according to the embodiments of the present invention, and that the above and other operations and/or functions of the modules in the network device 300 implement the corresponding procedures of the methods in FIG. 2 and FIG. 3; for brevity, details are omitted here.
FIG. 6 is a schematic block diagram of a multipath transmission system 400 according to an embodiment of the present invention. The multipath transmission system 400 includes a transmitting device 410, a receiving device 420, and a network device 430, with multiple paths between the transmitting device 410 and the receiving device 420. The network device 430 is a forwarding device on the multiple paths, corresponds to the first forwarding device in the method for transmitting a data stream of the embodiments of the present invention, and also corresponds to the network device 200 or the network device 300 of the embodiments of the present invention.
In the embodiments of the present invention, when the occupancy of the first cache queue allocated to the first data stream exceeds a threshold, the currently received coded packet of the first data stream is discarded, where the threshold is the maximum occupancy allowed for the first cache queue. Actively dropping packets of the first data stream can reduce network congestion to some extent. The first data stream is encoded as a fountain code, and fountain-code-based transmission guarantees reliable delivery without retransmission; therefore, actively dropping packets of the first data stream causes no throughput loss, and reliable transmission of the first data stream is still guaranteed. The embodiments of the present invention use no closed-loop control loop, avoiding the extra network bandwidth consumed by the feedback control of existing methods. Therefore, the embodiments guarantee reliable transmission of the data stream and reduce network congestion without a closed-loop control loop, and reduce implementation complexity compared with the prior art.
It should be understood that, by way of example and not limitation, the application scenario of the embodiments of the present invention has been described above taking a data center network as an example. The embodiments of the present invention can also be applied to terminal-cloud communication scenarios with multiple physical paths using WiFi or Long Term Evolution (LTE), which is not limited in the embodiments of the present invention.
It should also be understood that the various numerals herein are used only for ease of distinction in the description and are not intended to limit the scope of the embodiments of the present invention.
It should also be understood that, in the various embodiments of the present application, the sequence numbers of the foregoing processes do not imply an order of execution; the execution order of the processes should be determined by their functions and internal logic, and does not constitute any limitation on the implementation of the embodiments of the present application.
A person of ordinary skill in the art may be aware that the units and algorithm steps of the examples described in the embodiments disclosed herein can be implemented by electronic hardware or by a combination of computer software and electronic hardware. Whether these functions are performed in hardware or software depends on the particular application and design constraints of the technical solution. A person skilled in the art may use different methods to implement the described functions for each particular application, but such implementation should not be considered beyond the scope of this application.
It can be clearly understood by a person skilled in the art that, for convenience and brevity of description, for the specific working processes of the systems, apparatuses, and units described above, reference may be made to the corresponding processes in the foregoing method embodiments; details are not described again here.
In the several embodiments provided in this application, it should be understood that the disclosed systems, apparatuses, and methods may be implemented in other manners. For example, the apparatus embodiments described above are merely illustrative: the division into units is only a division by logical function, and in actual implementation there may be other divisions; for example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not performed. In addition, the mutual couplings, direct couplings, or communication connections shown or discussed may be indirect couplings or communication connections through some interfaces, apparatuses, or units, and may be electrical, mechanical, or in other forms.
The units described as separate components may or may not be physically separated, and the components shown as units may or may not be physical units; they may be located in one place or distributed across multiple network units. Some or all of the units may be selected according to actual needs to achieve the objectives of the solutions of the embodiments.
In addition, the functional units in the embodiments of this application may be integrated into one processing unit, or each unit may exist physically alone, or two or more units may be integrated into one unit.
If the functions are implemented in the form of software functional units and sold or used as an independent product, they may be stored in a computer-readable storage medium. Based on such an understanding, the part of the technical solution of this application that is essential or that contributes to the prior art, or a part of the technical solution, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or part of the steps of the methods described in the embodiments of this application. The foregoing storage medium includes any medium that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.
The foregoing is merely a specific implementation of this application, but the protection scope of this application is not limited thereto. Any variation or replacement readily conceivable by a person skilled in the art within the technical scope disclosed in this application shall fall within the protection scope of this application. Therefore, the protection scope of this application shall be subject to the protection scope of the claims.
Claims (15)
- A method for transmitting a data stream, wherein the method transmits the data stream between a source node and a destination node through intermediate nodes, and the data stream comprises a first data stream whose encoding form is a fountain code, the method comprising: receiving, by a first intermediate node, an encoded packet sent by the source node or by at least one second intermediate node, wherein the encoded packet is a packet obtained by encoding original packets of the first data stream using a fountain-code technique; and when the occupancy of a first cache queue exceeds a threshold, discarding, by the first intermediate node, the encoded packet, wherein the first cache queue is a cache queue allocated by the first intermediate node to the first data stream, and the threshold represents the maximum occupancy allowed for the first cache queue.
- The method according to claim 1, further comprising: when the occupancy of the first cache queue does not exceed the threshold, storing, by the first intermediate node, the encoded packet in the first cache queue; and sending, by the first intermediate node, the encoded packet buffered in the first cache queue to the destination node.
- The method according to claim 1 or 2, further comprising: receiving, by the first intermediate node, an indication message, wherein the indication message is sent by the destination node when the destination node has decoded all original packets of the first data stream from the received encoded packets, the indication message is used to instruct the source node to stop sending the first data stream, and the size of the indication message is 1 bit; and sending, by the first intermediate node, the indication message to the source node.
- The method according to claim 3, wherein the indication message is further used to indicate that the first data stream is to be discarded, the method further comprising: discarding, by the first intermediate node according to the indication message, the encoded packets of the first data stream buffered in the first cache queue.
- The method according to any one of claims 1 to 4, wherein the data stream further comprises a second data stream whose encoding form is not a fountain code, the method further comprising: receiving, by the first intermediate node, packets of the second data stream; storing, by the first intermediate node, the packets of the second data stream in a second cache queue, wherein the second cache queue is a cache queue allocated by the first intermediate node to the second data stream; and sending, by the first intermediate node, the packets of the second data stream buffered in the second cache queue to the destination node.
- A network device, configured to transmit a data stream between a source node and a destination node, wherein the data stream comprises a first data stream whose encoding form is a fountain code, the network device comprising: a receiving module, configured to receive an encoded packet sent by the source node or by an intermediate node, wherein the encoded packet is a packet obtained by encoding original packets of the first data stream using a fountain-code technique, and the intermediate node is located between the source node and the destination node and is configured to forward data between the source node and the destination node; and a processing module, configured to discard, when the occupancy of a first cache queue exceeds a threshold, the encoded packet received by the receiving module, wherein the first cache queue is a cache queue allocated by the network device to the first data stream, and the threshold represents the maximum occupancy allowed for the first cache queue.
- The network device according to claim 6, wherein the processing module is further configured to store, when the occupancy of the first cache queue does not exceed the threshold, the encoded packet received by the receiving module in the first cache queue; and the network device further comprises a first sending module, configured to send the encoded packet buffered in the first cache queue to the destination node.
- The network device according to claim 6 or 7, wherein the receiving module is further configured to receive an indication message, wherein the indication message is sent by the destination node when the destination node has decoded all original packets of the first data stream from the received encoded packets, the indication message is used to instruct the source node to stop sending the first data stream, and the size of the indication message is 1 bit; and the network device further comprises a second sending module, configured to send the indication message received by the receiving module to the source node.
- The network device according to claim 8, wherein the indication message is further used to indicate that the first data stream is to be discarded, and the processing module is further configured to discard, according to the indication message, the encoded packets of the first data stream buffered in the first cache queue.
- The network device according to any one of claims 6 to 9, wherein the data stream further comprises a second data stream whose encoding form is not a fountain code; the receiving module is further configured to receive packets of the second data stream; the processing module is further configured to store the packets of the second data stream received by the receiving module in a second cache queue, wherein the second cache queue is a cache queue allocated by the network device to the second data stream; and the network device further comprises a third sending module, configured to send the packets of the second data stream buffered in the second cache queue to the destination node.
- A network device, configured to transmit a data stream between a source node and a destination node, wherein the data stream comprises a first data stream whose encoding form is a fountain code, the network device comprising a processor, a memory, and a transceiver, wherein the memory is configured to store instructions, the processor is configured to execute the instructions stored in the memory, and execution of the instructions stored in the memory enables the processor to control the transceiver to receive or send signals and to process the signals received by the transceiver; the transceiver is configured to receive an encoded packet sent by the source node or by an intermediate node, wherein the encoded packet is a packet obtained by encoding original packets of the first data stream using a fountain-code technique, and the intermediate node is located between the source node and the destination node and is configured to forward data between the source node and the destination node; and the processor is configured to discard, when the occupancy of a first cache queue exceeds a threshold, the encoded packet received by the transceiver, wherein the first cache queue is a cache queue allocated by the network device to the first data stream, and the threshold represents the maximum occupancy allowed for the first cache queue.
- The network device according to claim 11, wherein the processor is further configured to store, when the occupancy of the first cache queue does not exceed the threshold, the encoded packet received by the transceiver in the first cache queue; and the transceiver is configured to send the encoded packet buffered in the first cache queue to the destination node.
- The network device according to claim 11 or 12, wherein the transceiver is further configured to receive an indication message, wherein the indication message is sent by the destination node when the destination node has decoded all original packets of the first data stream from the received encoded packets, the indication message is used to instruct the source node to stop sending the first data stream, and the size of the indication message is 1 bit; and the transceiver is further configured to send the indication message to the source node.
- The network device according to claim 13, wherein the indication message is further used to indicate that the first data stream is to be discarded, and the processor is further configured to discard, according to the indication message received by the transceiver, the encoded packets of the first data stream buffered in the first cache queue.
- The network device according to any one of claims 11 to 14, wherein the data stream further comprises a second data stream whose encoding form is not a fountain code; the transceiver is further configured to receive packets of the second data stream; the processor is configured to store the packets of the second data stream received by the transceiver in a second cache queue, wherein the second cache queue is a cache queue allocated by the network device to the second data stream; and the transceiver is further configured to send the packets of the second data stream buffered in the second cache queue to the destination node.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
EP17809537.8A EP3457643B1 (en) | 2016-06-07 | 2017-02-22 | Method and device for transmitting data stream |
US16/209,699 US20190109787A1 (en) | 2016-06-07 | 2018-12-04 | Method for transmitting data streams, and device |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201610404532.0A CN107483349A (zh) | 2016-06-07 | 2016-06-07 | Method and device for transmitting data stream |
CN201610404532.0 | 2016-06-07 |
Related Child Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US16/209,699 Continuation US20190109787A1 (en) | 2016-06-07 | 2018-12-04 | Method for transmitting data streams, and device |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2017211096A1 (zh) | 2017-12-14 |
Family
ID=60577533
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2017/074329 WO2017211096A1 (zh) | 2017-02-22 | Method and device for transmitting data stream |
Country Status (4)
Country | Link |
---|---|
US (1) | US20190109787A1 (zh) |
EP (1) | EP3457643B1 (zh) |
CN (1) | CN107483349A (zh) |
WO (1) | WO2017211096A1 (zh) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109889447A (zh) * | 2019-01-08 | 2019-06-14 | CRSC Research & Design Institute Group Co., Ltd. | Network transmission method and system based on hybrid ring networking and fountain codes |
Families Citing this family (17)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2019222472A1 (en) * | 2018-05-16 | 2019-11-21 | Code On Network Coding, Llc | Multipath coding apparatus and related techniques |
WO2019232760A1 (zh) * | 2018-06-07 | 2019-12-12 | Huawei Technologies Co., Ltd. | Data exchange method, data exchange node, and data center network |
JP7251075B2 (ja) * | 2018-09-03 | 2023-04-04 | AutoNetworks Technologies, Ltd. | Relay device, relay method, and computer program |
US11350142B2 (en) * | 2019-01-04 | 2022-05-31 | Gainspan Corporation | Intelligent video frame dropping for improved digital video flow control over a crowded wireless network |
US10785098B1 (en) * | 2019-04-30 | 2020-09-22 | Alibaba Group Holding Limited | Network configuration using multicast address modulation |
CN110838987B (zh) * | 2019-10-08 | 2022-07-05 | Fujian Tianquan Education Technology Co., Ltd. | Queue rate-limiting method and storage medium |
CN112737940B (zh) * | 2019-10-28 | 2023-12-08 | Huawei Technologies Co., Ltd. | Data transmission method and apparatus |
CN111328148B (zh) * | 2020-03-11 | 2023-04-07 | Spreadtrum Communications (Shanghai) Co., Ltd. | Data transmission method and apparatus |
US11943825B2 (en) * | 2020-07-09 | 2024-03-26 | Qualcomm Incorporated | Feedback-based broadcasting of network coded packets with sidelink |
CN112039803B (zh) * | 2020-09-10 | 2022-09-06 | China Ship Development and Design Center | Data transmission method in a time-triggered network |
TWI763261B (zh) * | 2021-01-19 | 2022-05-01 | Realtek Semiconductor Corp. | Data stream classification device |
CN115190080A (zh) * | 2021-04-02 | 2022-10-14 | Vivo Mobile Communication Co., Ltd. | Congestion control method, apparatus, and communication device |
US12056375B2 (en) * | 2022-09-06 | 2024-08-06 | Micron Technology, Inc. | Port arbitration |
CN115913461B (zh) * | 2022-10-28 | 2025-01-10 | Shanghai Jiao Tong University | Traffic scheduling method for a fountain-code-based multipath concurrent transmission system |
CN118138533A (zh) * | 2022-12-01 | 2024-06-04 | Huawei Technologies Co., Ltd. | Data transmission method and node |
CN116192341B (zh) * | 2023-02-27 | 2024-04-26 | Orienspace Technology (Shandong) Co., Ltd. | PCM/FM code stream transmission method for a launch vehicle telemetry system |
CN118041773B (zh) * | 2024-03-06 | 2024-11-22 | Shuhang Technology (Beijing) Co., Ltd. | Media data deployment method and apparatus, computer device, and storage medium |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101582842A (zh) * | 2008-05-16 | 2009-11-18 | Huawei Technologies Co., Ltd. | Congestion control method and congestion control apparatus |
CN102404077A (zh) * | 2011-11-30 | 2012-04-04 | Tsinghua University | Fountain-code-based multipath TCP protocol |
CN103229443A (zh) * | 2012-12-26 | 2013-07-31 | Huawei Technologies Co., Ltd. | Fountain-coding relay method and device |
CN104184670A (zh) * | 2013-05-23 | 2014-12-03 | Guangzhou Siweiqi Computer Technology Co., Ltd. | Method and apparatus for isolating abnormal packets in a smart substation |
US20150334712A1 (en) * | 2014-05-16 | 2015-11-19 | Huawei Technologies Co., Ltd. | System and Method for Joint Transmission over Licensed and Unlicensed Bands using Fountain Codes |
Family Cites Families (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101969668A (zh) * | 2010-10-24 | 2011-02-09 | Tianjin University | Data transmission method for a wireless cooperative relay system |
CN105100008B (zh) * | 2014-05-09 | 2018-06-05 | Huawei Technologies Co., Ltd. | Content distribution method in a content-centric network and related device |
2016
- 2016-06-07: CN application CN201610404532.0A filed (CN107483349A; status: Pending)
2017
- 2017-02-22: EP application EP17809537.8A filed (EP3457643B1; status: Active)
- 2017-02-22: PCT application PCT/CN2017/074329 filed (WO2017211096A1; status: unknown)
2018
- 2018-12-04: US application US16/209,699 filed (US20190109787A1; status: Abandoned)
Also Published As
Publication number | Publication date |
---|---|
EP3457643A1 (en) | 2019-03-20 |
US20190109787A1 (en) | 2019-04-11 |
EP3457643A4 (en) | 2019-03-20 |
EP3457643B1 (en) | 2020-09-09 |
CN107483349A (zh) | 2017-12-15 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2017211096A1 (zh) | Method and device for transmitting data stream | |
US11804940B2 (en) | Resources selection for feedback based NR-V2X communication | |
CN111740808B (zh) | 一种数据传输方法及装置 | |
US11018701B2 (en) | Reliable data transmission method based on reliable UDP and fountain code in aeronautical ad hoc networks | |
US8432848B2 (en) | Queued cooperative wireless networks configuration using rateless codes | |
WO2017161999A1 (zh) | Packet processing method and related device | |
KR102328615B1 (ko) | Apparatus and method for transmitting data over a multipath transmission control protocol connection |
US20160013857A9 (en) | Communication method for relay node and next node of the relay node for network coding | |
CN110391879B (zh) | 数据传输网络的丢包恢复方法、装置和计算机设备 | |
WO2020078448A1 (zh) | Packet processing method and apparatus | |
CN103906165B (zh) | Coding-aware online opportunistic routing method | |
US10461886B2 (en) | Transport layer identifying failure cause and mitigation for deterministic transport across multiple deterministic data links | |
Dong et al. | In-packet network coding for effective packet wash and packet enrichment | |
Luo et al. | FRUDP: A reliable data transport protocol for aeronautical ad hoc networks | |
CN118301078A (zh) | Flow rate control method and apparatus | |
US20230163875A1 (en) | Method and apparatus for packet wash in networks | |
CN114979839A (zh) | Transmission control protocol proxy method and communication apparatus | |
US10299167B2 (en) | System and method for managing data transfer between two different data stream protocols | |
KR102115401B1 (ko) | Method and apparatus for managing packets in a system supporting network coding |
WO2020163124A1 (en) | In-packet network coding | |
ES2735800T3 (es) | Method and apparatus for establishing a packet transmission mode |
US9525629B2 (en) | Method and apparatus for transmitting data packets | |
CN106358215A (zh) | Cooperation method in a relay network based on data caching | |
WO2020029697A1 (zh) | Service conflict handling method, user terminal, and computer-readable storage medium | |
Van Vu et al. | Adaptive redundancy control with network coding in multi-hop wireless networks |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application. Ref document number: 17809537; Country of ref document: EP; Kind code of ref document: A1 |
NENP | Non-entry into the national phase. Ref country code: DE |
ENP | Entry into the national phase. Ref document number: 2017809537; Country of ref document: EP; Effective date: 20181211 |