WO2020163124A1 - In-packet network coding - Google Patents

In-packet network coding

Info

Publication number
WO2020163124A1
Authority
WO
WIPO (PCT)
Prior art keywords
data packet
payload
packet
linear
coded
Prior art date
Application number
PCT/US2020/015538
Other languages
French (fr)
Inventor
Lijun Dong
Original Assignee
Futurewei Technologies, Inc.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Futurewei Technologies, Inc.
Publication of WO2020163124A1

Classifications

    • H04L 1/0076: Distributed coding, e.g. network coding, involving channel coding
    • H04L 1/0041: Arrangements at the transmitter end (forward error control)
    • H04L 1/0045: Arrangements at the receiver end (forward error control)
    • H04L 67/56: Provisioning of proxy services
    • H04L 1/0057: Block codes
    • H04L 1/18: Automatic repetition systems, e.g. Van Duuren systems
    • H04L 45/00: Routing or path finding of packets in data switching networks

Definitions

  • the present application relates to network communication, and more specifically to in-packet coding to enable effective packet wash and packet enrichment.
  • a first general aspect relates to a computer-implemented method that is performed by a source node for communicating data packets.
  • the method includes dividing a maximum payload size of a payload of a first data packet into a plurality of payload blocks having a same coded block size.
  • the method performs linear network coding on the plurality of payload blocks of the first data packet.
  • the method inserts metadata into the first data packet.
  • the metadata may include a coefficient for each independent linear coded payload block in the first data packet, a unique packet ID of the first data packet, an indication that the first data packet is a linear network coded packet type, and the coded block size.
  • the method then transmits the first data packet towards a destination node.
  • the method further includes receiving an acknowledgement packet corresponding to the first data packet that was transmitted towards the destination node.
  • the method determines from metadata in the acknowledgement packet whether the destination node received all of the payload blocks of the first data packet. If the destination node received all of the payload blocks of the first data packet, the method inserts a second plurality of independent linear coded payload blocks into a second data packet. If the destination node did not receive all of the payload blocks of the first data packet, the method inserts new linear coded blocks that have coefficients that are orthogonal to the linear coded blocks received by the destination node in the first data packet. The method inserts metadata in the second data packet.
  • the metadata includes a coefficient for each independent linear coded payload block in the second data packet, a unique packet identifier (ID) of the second data packet, an indication that the second data packet is the linear network coded packet type, and the coded block size.
  • the method then transmits the second data packet towards the destination node.
  • the method further includes inserting a portion of the second plurality of independent linear coded payload blocks along with the new linear coded blocks that have coefficients that are orthogonal to the linear coded blocks received by the destination node into the second data packet up to a maximum payload size of the second data packet.
  • the second data packet is transmitted only after receiving the acknowledgement packet corresponding to the first data packet.
  • the metadata is inserted into a BPP metadata header.
  • the new linear coded blocks are the linear coded blocks that were not received by the destination node in the first data packet.
  • a second general aspect relates to a computer-implemented method that is performed by a network node for communicating data packets.
  • the method includes receiving a data packet that is to be forwarded towards a destination node.
  • the method determines whether a network condition exists that requires dropping the data packet. If the network condition exists that requires dropping the data packet, the method determines whether the data packet is a linear network coded packet type. If the data packet is not a linear network coded packet type, the method drops the data packet. If the data packet is a linear network coded packet type, the method drops only a portion of the payload blocks of the data packet, and removes coefficients in a BPP metadata header of the data packet that correspond to the portion of the plurality of payload blocks dropped from the data packet. The method then forwards the data packet towards the destination node.
  • the method further includes caching the portion of the plurality of payload blocks dropped from the data packet, the coefficients corresponding to the portion of the plurality of payload blocks dropped from the data packet, and a packet ID of the data packet.
  • the method further includes determining whether a payload of the data packet is full based on a rank of the data packet if the network condition does not require dropping the data packet. The method determines whether there are cached payload blocks belonging to a same flow as the data packet if the payload of the data packet is not full.
  • If there are cached payload blocks belonging to the same flow as the data packet, the method inserts the cached payload blocks into the payload of the data packet up to a maximum payload size of the data packet, adds coefficients corresponding to the cached payload blocks inserted into the data packet, and increases the rank of the data packet to account for the inserted cached payload blocks.
  • a third general aspect relates to a computer-implemented method that is performed by a network node for communicating data packets.
  • the method includes receiving an acknowledgement packet to be forwarded towards a source node.
  • the method determines whether the acknowledgement packet indicates that a first data packet received by a destination node of the data packet was missing a portion of a payload of the first data packet. If the acknowledgement packet indicates that the first data packet received by the destination node was missing a portion of the payload of the first data packet, the method determines whether the network node has cached data that contributes to decoding of the payload of the first data packet.
  • the method transmits a second data packet towards the destination node comprising the cached data that contributes to decoding of the payload of the first data packet.
  • the method updates the acknowledgement packet to account for the cached data transmitted by the network node.
  • the method forwards the acknowledgement packet towards the source node.
  • the cached data contains linear coded blocks that were not received by the destination node in the first data packet.
  • the cached data contains new linear coded blocks that have coefficients that are orthogonal to linear coded blocks received by the destination node in the first data packet.
  • a fourth general aspect relates to a computer-implemented method that is performed by a destination node for communicating data packets.
  • the method includes receiving a data packet intended for the destination node.
  • the method determines whether the data packet is a linear network coded packet type. If the data packet is a linear network coded packet type, the method determines whether a full payload of the data packet is received by the destination node based on a current rank of the data packet. If the full payload of the data packet is received by the destination node, the method decodes the full payload of the data packet, and sends an acknowledgement packet for a next packet in the flow towards a source node.
  • If the full payload of the data packet is not received by the destination node, the method sends an acknowledgement packet towards the source node that includes a packet ID, coefficients of received payload blocks in the data packet, the current rank, and a full rank; and waits for additional payload blocks that contribute to decoding the payload of the data packet.
  • any one of the foregoing embodiments may be combined with any one or more of the other foregoing embodiments to create a new embodiment within the scope of the present disclosure.
  • FIG. 1 is a schematic diagram illustrating a data packet transmission being dropped due to a packet error.
  • FIG. 2 is a schematic diagram illustrating a data packet transmission being dropped due to a network condition.
  • FIG. 3 is a schematic diagram illustrating a data packet transmission in accordance with an embodiment of the present disclosure.
  • FIG. 4 is a schematic diagram that illustrates linear network coding inside a data packet in accordance with an embodiment of the present disclosure.
  • FIG. 5 is a schematic diagram that illustrates a Big Packet Protocol (BPP) metadata block inside a data packet in accordance with an embodiment of the present disclosure.
  • BPP Big Packet Protocol
  • FIG. 6 is a schematic diagram that illustrates linear network coding inside a data packet in accordance with an alternative embodiment of the present disclosure.
  • FIG. 7 is a schematic diagram that illustrates a BPP metadata block inside a data packet in accordance with an alternative embodiment of the present disclosure.
  • FIG. 8 is a schematic diagram that illustrates BPP metadata in an acknowledgement packet in accordance with an alternative embodiment of the present disclosure.
  • FIG. 9 is a flowchart illustrating a process for performing an in-network packet wash procedure in accordance with an embodiment of the present disclosure.
  • FIG. 10 is a flowchart illustrating a process for processing a data packet in accordance with an embodiment of the present disclosure.
  • FIG. 11 is a flowchart illustrating a process for performing in-network acknowledgement processing in accordance with an embodiment of the present disclosure.
  • FIG. 12 is a flowchart illustrating a process for performing acknowledgement processing at a source node in accordance with an embodiment of the present disclosure.
  • FIG. 13 is a flowchart illustrating a process for performing acknowledgement processing at a source node in accordance with an embodiment of the present disclosure.
  • FIG. 14 is a schematic diagram illustrating a network element according to an embodiment of the present disclosure.
  • linear network coding is performed to create data packets having a plurality of linear coded blocks in its payload.
  • an intermediate router can simply remove one or more coded blocks from the packet and forward the remainder of the packet.
  • any coded blocks that are retained in the packet can be cached by the receiver.
  • the retained coded blocks are useful for future decoding of the original payload after enough degrees of freedom are received.
  • the receiver can request that the sender send more coded blocks to compensate for the missing degrees of freedom. Moreover, the sender does not need to know which coded blocks are lost in transit.
  • the network avoids dropping a whole packet, avoids transport layer time-outs, and avoids interrupting the transmission session to re-transmit a packet.
  • FIG. 1 is a schematic diagram illustrating a data packet transmission being dropped due to a packet error.
  • a data packet 102 is being transmitted from a source node 104 to a destination node 106 over a network.
  • the network can comprise one or more wired or wireless/mobile networks.
  • the data packet 102 is transmitted along a path that includes one or more network routers 110 that forward the data packet 102 towards the destination node 106.
  • the destination node 106 can be any type of device that is capable of requesting the data packet 102.
  • the data packet 102 will be completely dropped at a network router 110 when an error occurs during the packet transmission or propagation, which is not correctable by using any means (e.g., cyclic redundancy check (CRC)).
  • CRC cyclic redundancy check
  • the network router 110 drops the data packet 102 and sends a request to the source node 104 to retransmit the data packet 102.
  • a policy may be adopted at a network router 110 under which existing packets or newly arriving packets are buffered and wait in a queue. However, if the buffer of the network router 110 is full, one or more of the data packets 102 are dropped. The network router 110 then sends a request to the source node 104 to retransmit the data packet 102. Other conditions may also cause the data packet 102 to be dropped including, but not limited to, when the size of the data packet 102 is bigger than the maximum transmission unit (MTU) of the network router 110.
  • MTU maximum transmission unit
  • FIG. 3 is a schematic diagram illustrating a data packet transmission in accordance with an embodiment of the present disclosure.
  • the source node 104 applies linear network coding to the packet payload inside of the data packet 102 to enable packet wash to be performed, if needed, by a network router 110 inside the network.
  • a packet wash is an operation that reduces the size of a payload of the data packet 102 while attempting to maintain as much of the content as possible. Packet washing may be necessary if a network is congested, an error occurs, or the packet exceeds an MTU of a network node.
  • the network node is able to forward some portion of a data packet 102 without having to drop the entire data packet 102.
  • Linear network coding is a networking technique in which transmitted data is encoded and decoded to increase network throughput, reduce delays, and make the network more robust.
  • Linear network coding generates new packets which are linear combinations of earlier received packets.
  • the bits in the information flow do not have to be delivered as complete data packets 102 to a receiving host. Instead, the bits in the information flow can be mixed. The receiving host only requires sufficient information to reconstruct the original packets from the source node 104.
  • the linear network coding is formulated as y_i = g_{i,1} x_1 + g_{i,2} x_2 + ... + g_{i,h} x_h (EQ. 1), where x_1, x_2, x_3, ..., x_h are the original source packets (blocks) and the coefficients g_{i,k} are drawn from a finite field, so that each coded block y_i is a linear combination of the source blocks.
  • Node t can recover the source symbols x_1, ..., x_h as long as the matrix G, formed by the global encoding vectors, has (full) rank h, in which case [x_1, ..., x_h]^T = G^{-1} [y_1, ..., y_h]^T (EQ. 2).
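  • As a concrete illustration of EQ. 1 and EQ. 2, the sketch below (not part of the patent text) encodes payload blocks with random coefficients and recovers them by Gaussian elimination once full rank is available; the prime field GF(257), the list-based block representation, and the function names are illustrative assumptions.

```python
# Minimal sketch of random linear network coding over GF(257) (an assumed field choice).
# encode_blocks implements EQ. 1; decode_blocks implements EQ. 2 via Gaussian elimination.
import random

P = 257  # prime modulus for the illustrative finite field

def encode_blocks(source_blocks, num_coded):
    """EQ. 1: each coded block y_i = sum_k g_{i,k} * x_k (mod P)."""
    h, size = len(source_blocks), len(source_blocks[0])
    coefficients, coded = [], []
    for _ in range(num_coded):
        g = [random.randrange(P) for _ in range(h)]            # encoding vector g_i
        y = [sum(g[k] * source_blocks[k][j] for k in range(h)) % P for j in range(size)]
        coefficients.append(g)
        coded.append(y)
    return coefficients, coded

def decode_blocks(coefficients, coded, h):
    """EQ. 2: solve G x = y; returns None on a rank shortage (fewer than h independent rows)."""
    size = len(coded[0])
    rows = [coefficients[i] + coded[i] for i in range(len(coded))]  # augmented matrix [G | Y]
    rank = 0
    for col in range(h):
        pivot = next((r for r in range(rank, len(rows)) if rows[r][col]), None)
        if pivot is None:
            return None                                        # missing degree of freedom
        rows[rank], rows[pivot] = rows[pivot], rows[rank]
        inv = pow(rows[rank][col], P - 2, P)                   # modular inverse of the pivot
        rows[rank] = [v * inv % P for v in rows[rank]]
        for r in range(len(rows)):
            if r != rank and rows[r][col]:
                f = rows[r][col]
                rows[r] = [(rows[r][j] - f * rows[rank][j]) % P for j in range(h + size)]
        rank += 1
    return [rows[i][h:] for i in range(h)]                     # recovered x_1 .. x_h

# Example: four source blocks of four symbols each; any four independent coded blocks decode them.
x = [[1, 2, 3, 4], [5, 6, 7, 8], [9, 10, 11, 12], [13, 14, 15, 16]]
G, Y = encode_blocks(x, num_coded=4)
print(decode_blocks(G, Y, h=4))  # equals x whenever the four encoding vectors are independent
```

  • In the sketch above, a None result corresponds to a rank shortage, the case in which a receiver must cache what it has and wait for additional degrees of freedom, as described below.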
  • a network router 110 performs a packet wash procedure on the data packet 102 to reduce the payload of the data packet 102
  • the receiving node can send an acknowledgement packet back to the source node 104.
  • the acknowledgement packet can include metadata specifying the coded blocks that were missing from the payload of the data packet 102.
  • the metadata can specify the coded blocks in the payload of the data packet 102 that were received at the receiving node.
  • the retransmission may not be needed if the receiving node has the capability or intelligence to deduce the entire information from what is left in the data packet 102 after a packet wash procedure is performed on the original packet by the network routers 110.
  • Advantages of the disclosed embodiments include reducing network resource usage, better prioritization of network resources, and reducing latency of transmitting the packet due to no retransmission or due to retransmission of a smaller packet size after a packet wash procedure.
  • FIG. 4 is a schematic diagram that illustrates linear network coding inside a data packet 400 in accordance with an embodiment of the present disclosure.
  • the data packet 400 can be an example of the data packet 102 that is originated from a source node, such as the source node 104 in FIG. 1.
  • the data packet 400 includes a header 402, a BPP block 404, and packet payload 406.
  • the header 402 can contain data that specifies the type of data packet, a source Internet protocol (IP) address, and a destination IP address.
  • IP Internet protocol
  • the BPP block 404 can contain directives that provide guidance for how the packet should be processed or what resources must be allocated for a flow, as well as metadata about the packet and the flow that the packet is a part of.
  • the BPP block 404 can include a BPP command block and a BPP metadata block.
  • Commands can include, but are not limited to, determining conditions when to drop a packet, which queue to use, what resources to allocate, or when and how to perform a packet wash procedure.
  • the BPP metadata block can be used to carry information about the packet, such as, but not limited to, information regarding the packet payload 406.
  • the packet payload 406 is linear network coded into a number (h) of coded blocks with the same block size.
  • the source node 104 will include the same number of independent coded blocks 408 in the data packet 400, which will have a full rank equal to h.
  • FIG. 5 is a schematic diagram that illustrates a BPP metadata block 500 inside the BPP block 404 of the data packet 400 in accordance with an embodiment of the present disclosure.
  • the BPP metadata block 500 includes a packet identifier (ID) and block number 502, an indication of linear coding inside of payload 504, coded block size 506, coefficients 508, current rank 510, and full rank 512.
  • the coded block size 506 is the size of each of the coded blocks 408 in the payload 406 of the data packet 400.
  • the coded block size 506 can be used by a network node to find the chunk/coded block delineator when packet wash is performed.
  • the coded blocks 408 in the payload 406 of the data packet 400 are equal in significance. In other embodiments, the coded blocks 408 in the payload 406 of the data packet 400 may be prioritized, and lower priority coded blocks are removed before higher priority coded blocks when packet wash is performed.
  • the coefficients 508 are data that is used to identify and decode the coded blocks 408 in the payload 406.
  • the current rank 510 is the number of coded blocks 408 currently in the payload 406 of the data packet 400.
  • the full rank 512 is the number of coded blocks 408 that can be contained in a full payload 406 of the data packet 400. The current rank 510 is adjusted when packet wash is performed on the data packet 400.
  • a packet wash procedure is performed to drop some coded chunks (based on the chunk size) from the payload 406 of the data packet 400 as necessary.
  • the current rank 510 is decreased by one for each coded block 408 that is removed from the payload of the data packet 400.
  • the coefficients 508 and the full rank 512 are not changed.
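  • Purely for illustration, the metadata fields of FIG. 5 and the effect of a packet wash on them could be modeled as follows; the field names, Python types, and the packet_wash helper are assumptions rather than the actual BPP wire format.

```python
# Illustrative model of the BPP metadata block 500 of FIG. 5; names and types are assumptions.
from dataclasses import dataclass
from typing import List

@dataclass
class BppMetadata:
    packet_id: int                    # packet ID and block number 502
    linear_coded: bool                # indication of linear coding inside of payload 504
    coded_block_size: int             # coded block size 506 (bytes per coded block)
    coefficients: List[List[int]]     # coefficients 508, one encoding vector per coded block
    current_rank: int                 # current rank 510: coded blocks currently in the payload
    full_rank: int                    # full rank 512: coded blocks in a full payload

def packet_wash(meta: BppMetadata, coded_blocks: list, blocks_to_drop: int):
    """Drop blocks_to_drop coded blocks from the end of the payload (an assumed policy).
    Per the FIG. 5 embodiment, only the current rank changes; the coefficients 508 and
    the full rank 512 are left untouched."""
    kept = coded_blocks[: len(coded_blocks) - blocks_to_drop]
    meta.current_rank -= blocks_to_drop
    return kept
```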
  • the receiver determines that packet wash occurred in the network.
  • the receiver will cache the coefficients 508 related to the remaining coded chunks, as well as the coded chunks for future decoding.
  • the receiver will include the rank shortage (the number of missing coded blocks) in an acknowledgement packet and request that the sender send additional coded chunks based on this information.
  • FIG. 6 is a schematic diagram that illustrates linear network coding inside a data packet 600 in accordance with an alternative embodiment of the present disclosure.
  • the data packet 600 is similar to the data packet 400, except that the coefficients 508 that are used to identify and decode the coded blocks 408 in the payload 406 are part of the payload 406 of the data packet 600 instead of being in the BPP metadata block 500 as shown in FIG. 5.
  • FIG. 7 is a schematic diagram that illustrates a BPP metadata block 700 inside the data packet 400 in accordance with an alternative embodiment of the present disclosure.
  • the BPP metadata block 700 includes the indication of linear coding inside of payload 504, the coded block size 506, and the current rank 510 as described in FIG. 5.
  • the receiver can determine that packet wash occurred in the network based on the coefficients 508 included in the payload 406 as described in FIG. 6.
  • the alternative embodiments of FIG. 6 and FIG. 7 can reduce the overhead of the data packet 600 and increase the overall transmission speed of the data packet 600.
  • FIG. 8 is a schematic diagram that illustrates BPP metadata 800 in an acknowledgement packet in accordance with an alternative embodiment of the present disclosure.
  • the receiver will cache the coefficients related to the remaining coded chunks, as well as the coded chunks for future decoding.
  • the receiver will send an acknowledgement packet that includes the BPP metadata 800 and request that the sender send additional coded chunks based on this information.
  • the BPP metadata 800 includes a packet ID 802, a coded block size 804, coefficients 806, and rank short 808.
  • the packet ID 802 is an identifier that identifies the data packet that was received by the destination device.
  • the coded block size 804 indicates the coded block size received by the receiver in the data packet.
  • the coefficients 806 specify the coefficients in the data packet received by the destination device.
  • the rank short 808 indicates the number of coded blocks that were removed from the data packet received by the destination device.
  • FIG. 9 is a flowchart illustrating a process 900 for performing an in-network packet wash procedure in accordance with an embodiment of the present disclosure.
  • the process 900 can be performed by any in-network node such as a router, switch, or any other network device.
  • the process 900 can be performed by the network routers 110 in FIGS. 1-3.
  • a coded packet is sent from a source (e.g., the source node 104 in FIG. 1)
  • the process 900 receives the coded packet at an in-network node, such as a BPP router, for forwarding towards a destination node.
  • the process 900 determines whether any condition occurs that requires dropping the packet.
  • Non-limiting conditions that may require that the data packet be dropped include packet error, network congestion, buffer full, and a packet size that exceeds an MTU of the in-network node. If there are any conditions that require dropping the data packet, then the process 900, at step 906, determines whether the data packet includes a linear coded payload. This determination may be made based on the BPP metadata as described in FIG. 5 and FIG. 7. In an embodiment, if the process 900 determines that the data packet does not include a linear coded payload, the process 900, at step 916, drops the entire data packet, with process 900 terminating thereafter.
  • If, at step 906, the process 900 determines that the data packet includes a linear coded payload, the process 900, at step 908, drops one or more of the coded blocks from the payload of the data packet as necessary to be able to forward the remaining payload towards the destination node.
  • the process 900 removes the corresponding coefficients of the dropped blocks.
  • the coefficients can be either part of the payload of the data packet or part of the BPP metadata included in the data packet.
  • the process 900 at step 912, caches the coded blocks that are removed from the data packet, the corresponding coefficients of the removed coded blocks, and the packet ID of the data packet that the coded blocks are removed from.
  • the process 900 forwards the data packet with the remaining payload towards a destination node, with the process 900 terminating thereafter.
  • If no condition requires dropping the data packet, the process 900 determines whether the payload of the data packet is full by determining whether a current rank specified in the BPP metadata of the data packet is less than a full rank specified in the BPP metadata of the data packet. If the payload of the data packet is full (i.e., the current rank equals the full rank), then the process 900, at step 914, forwards the data packet with the remaining payload towards the destination node, with the process 900 terminating thereafter.
  • If the payload of the data packet is not full, the process 900 determines whether the network condition allows more coded blocks to be added to the payload without exceeding an MTU size. If the network condition does not allow more coded blocks to be added to the payload or adding to the payload would exceed the MTU size, the process 900, at step 914, forwards the data packet with the remaining payload towards the destination node, with the process 900 terminating thereafter.
  • If more coded blocks can be added, the process 900 determines whether there are cached coded blocks that belong to the same packet ID and whether there are corresponding coefficients that contribute to the rank. If there are no cached coded blocks that belong to the same packet ID whose coefficients contribute to the rank, the process 900, at step 914, forwards the data packet with the remaining payload towards the destination node, with the process 900 terminating thereafter.
  • If such cached coded blocks exist, the process 900 inserts additional cached coded blocks at the end of the payload, while ensuring that the current rank remains less than or equal to the full rank.
  • the process 900 adds the coefficients corresponding to the additional cached coded blocks that were added to the end of the payload to the coefficients metadata in the data packet.
  • the process 900 at step 928, increases the current rank to account for the added coded blocks.
  • the process 900 at step 914, forwards the data packet with the updated payload towards the destination node, with the process 900 terminating thereafter.
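  • The packet wash and packet enrichment logic of process 900 could be sketched as follows; the dictionary-based packet layout, the must_drop_blocks and room_for_blocks inputs, and the cache structure are assumptions, and only step numbers named above are referenced.

```python
# Hedged sketch of in-network packet wash / enrichment (FIG. 9, process 900).
# Packet layout, the drop/room inputs, and the cache structure are assumptions.
cache = {}  # packet_id -> list of (coefficient_vector, coded_block) removed by packet wash

def process_900(pkt, must_drop_blocks=0, room_for_blocks=0):
    meta = pkt["bpp_metadata"]
    if must_drop_blocks > 0:                                  # a drop condition exists
        if not meta["linear_coded"]:
            return None                                       # step 916: drop the entire packet
        # Step 908: drop coded blocks (here, from the tail) and remove their coefficients.
        dropped_blocks = pkt["payload"][-must_drop_blocks:]
        dropped_coeffs = meta["coefficients"][-must_drop_blocks:]
        pkt["payload"] = pkt["payload"][:-must_drop_blocks]
        meta["coefficients"] = meta["coefficients"][:-must_drop_blocks]
        meta["current_rank"] -= must_drop_blocks
        # Step 912: cache the removed blocks, their coefficients, and the packet ID.
        cache.setdefault(meta["packet_id"], []).extend(zip(dropped_coeffs, dropped_blocks))
        return pkt                                            # step 914: forward what remains
    # No drop condition: packet enrichment if the payload is not full and there is room
    # (cache housekeeping is omitted in this sketch).
    if meta["current_rank"] < meta["full_rank"] and room_for_blocks > 0:
        for coeff, block in cache.get(meta["packet_id"], [])[:room_for_blocks]:
            if meta["current_rank"] >= meta["full_rank"]:
                break
            pkt["payload"].append(block)                      # append a cached coded block
            meta["coefficients"].append(coeff)                # add its coefficient
            meta["current_rank"] += 1                         # step 928: raise the current rank
    return pkt                                                # step 914: forward
```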
  • FIG. 10 is a flowchart illustrating a process 1000 for processing a data packet in accordance with an embodiment of the present disclosure.
  • the process 1000 can be performed by any destination node such as the destination node 106 in FIGS. 1-3.
  • the process 1000 begins, at step 1002, by receiving a data packet whose destination is the current/receiving node as specified in a header of the data packet.
  • the process 1000 determines whether BPP metadata contained in the data packet indicates the data packet includes a linear coded payload. If the BPP metadata contained in the data packet does not indicate that the data packet includes a linear coded payload (i.e., the data packet has a regular packet payload), the process 1000, at step 1014, can successfully decode the data packet.
  • the process 1000 requests the next packet in the flow from the source by sending an acknowledgement packet to the source indicating that the data packet was successfully decoded, with the process 1000 terminating thereafter. If, at step 1004, the process 1000 determines that the BPP metadata contained in the data packet indicates that the data packet includes a linear coded payload, then the process 1000, at step 1006, determines whether the current rank specified in the BPP metadata of the data packet is equal to the full rank specified in the BPP metadata of the data packet.
  • the process 1000 determines that the current rank specified in the BPP metadata of the data packet is equal to the full rank specified in the BPP metadata of the data packet, the process 1000, at step 1014, can successfully decode the data packet with the coefficients contained in the data packet.
  • the process 1000 requests the next packet in the flow from the source by sending an acknowledgement packet to the source indicating that the data packet was successfully decoded, with the process 1000 terminating thereafter.
  • If the process 1000 determines that the current rank specified in the BPP metadata of the data packet is not equal to the full rank specified in the BPP metadata of the data packet (i.e., some of the coded blocks in the payload were removed in the network by a packet wash procedure), the data packet cannot be decoded.
  • the process 1000 at step 1008, records the received coded data packet, a packet ID, the coefficients of the coded blocks received in the payload of the data packet, and the current rank and full rank specified in the BPP metadata of the data packet.
  • the process 1000 sends an acknowledgment packet with some or all of the recorded information included in the metadata to the source.
  • the process 1000 waits for the additional coded blocks to decode the entire data packet, with the process 1000 terminating thereafter.
  • the additional coded blocks can be one or more of the coded blocks that were missing/removed from the original data packet and/or coded blocks that have coefficients that are orthogonal to the already received coded blocks.
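  • The destination-side handling of process 1000 could be sketched as follows; the packet and acknowledgement layouts are assumptions, and solve_eq2 stands in for an EQ. 2 decoder such as the one sketched earlier.

```python
# Hedged sketch of destination-side processing (FIG. 10, process 1000).
# solve_eq2(coefficients, coded_blocks) is an assumed callable that applies EQ. 2.
received_state = {}  # packet_id -> coefficients and coded blocks cached while waiting

def process_1000(pkt, solve_eq2):
    meta = pkt["bpp_metadata"]
    if not meta["linear_coded"]:
        return {"ack_next_packet": True}                   # regular payload, handled as usual
    if meta["current_rank"] == meta["full_rank"]:          # step 1006: full payload received
        solve_eq2(meta["coefficients"], pkt["payload"])    # step 1014: decode succeeds
        return {"ack_next_packet": True}                   # request the next packet in the flow
    # Packet wash occurred in the network: step 1008, record what arrived for future decoding.
    state = received_state.setdefault(meta["packet_id"], {"coeffs": [], "blocks": []})
    state["coeffs"].extend(meta["coefficients"])
    state["blocks"].extend(pkt["payload"])
    # Acknowledge with FIG. 8 style metadata and wait for more coded blocks.
    return {"packet_id": meta["packet_id"],
            "coded_block_size": meta["coded_block_size"],
            "coefficients": state["coeffs"],
            "rank_short": meta["full_rank"] - meta["current_rank"]}
```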
  • FIG. 11 is a flowchart illustrating a process 1100 for performing in-network acknowledgement processing in accordance with an embodiment of the present disclosure.
  • the process 1100 can be performed by any network node such as the network routers 110 in FIGS. 1- 3.
  • the process 1100 begins, at step 1102, by receiving an acknowledgement packet en route to a source node that includes a rank short that indicates that the data packet received by the destination node was missing one or more linear coded blocks in the payload of the data packet.
  • the acknowledgement packet could include the packet ID, coded block size, the coefficients of the coded blocks received in the payload of the data packet, and a rank short as shown in FIG. 8.
  • the process 1100 determines whether there are cached coded blocks that have the same coded block size that belong to the same packet ID (i.e., same packet flow) that are stored at the receiving network node. If the process 1100 determines that there are no cached coded blocks that have the same coded block size that belong to the same packet ID that are stored at the receiving network node, the process 1100, at step 1118, forwards the acknowledgement packet towards the source node, with the process 1100 terminating thereafter.
  • the process 1100 determines whether the coefficients of the cached coded blocks contribute to the rank.
  • the coefficients of the cached coded blocks contribute to the rank only if they are orthogonal to the coefficients of the coded blocks already received at the destination node, because the destination node can only decode the already received coded blocks together with coded blocks that have orthogonal coefficients (e.g., using EQ. 2).
  • If the process 1100 determines that there are no cached coded blocks that have a coefficient that contributes to the rank, the process 1100, at step 1118, forwards the acknowledgement packet towards the source node, with the process 1100 terminating thereafter. If the process 1100 determines that there are cached coded blocks that have coefficients that contribute to the rank, the process 1100, at step 1108, transmits a data packet that includes the one or more cached coded blocks and the corresponding coefficients that contribute to the rank to the destination node.
  • the process 1100 determines whether the updated rank short equals zero. If the rank short can be reduced to zero, then, at step 1116, the process 1100 modifies the acknowledgement packet to remove all metadata information and request the next packet in the flow from the source node. The process 1100, at step 1118, forwards the acknowledgement packet that requests the next packet towards the source node, with the process 1100 terminating thereafter.
  • If the process 1100 determines that the updated rank short does not equal zero, the process 1100, at step 1114, modifies the acknowledgement packet by adding the coefficients corresponding to the cached coded blocks returned to the destination node. The process 1100, at step 1118, forwards the modified acknowledgement packet towards the source node, with the process 1100 terminating thereafter.
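  • The in-network acknowledgement handling of process 1100 could be sketched as follows; the cache mirrors the process 900 sketch above, and contributes_to_rank and send_to_destination are assumed callables (a linear-independence test and a transmit primitive, respectively).

```python
# Hedged sketch of in-network acknowledgement processing (FIG. 11, process 1100).
# cache mirrors the process 900 sketch; the two callables are assumed stand-ins.
cache = {}  # packet_id -> list of (coefficient_vector, coded_block) saved during packet wash

def process_1100(ack, contributes_to_rank, send_to_destination):
    if ack.get("rank_short", 0) == 0:
        return ack                                            # nothing missing: step 1118, forward
    # Keep only cached blocks of the same packet ID whose coefficients contribute to the rank.
    useful = [(g, b) for g, b in cache.get(ack["packet_id"], [])
              if contributes_to_rank(g, ack["coefficients"])][: ack["rank_short"]]
    if not useful:
        return ack                                            # step 1118: forward unchanged
    # Step 1108: send the contributing cached blocks directly towards the destination.
    send_to_destination({"packet_id": ack["packet_id"],
                         "coefficients": [g for g, _ in useful],
                         "payload": [b for _, b in useful]})
    ack["rank_short"] -= len(useful)
    if ack["rank_short"] == 0:
        # Step 1116: the shortage is covered; ask the source for the next packet instead.
        return {"packet_id": ack["packet_id"], "ack_next_packet": True}
    # Step 1114: record the coefficients that were supplied from the cache.
    ack["coefficients"] = ack["coefficients"] + [g for g, _ in useful]
    return ack                                                # step 1118: forward towards the source
```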
  • FIG. 12 is a flowchart illustrating a process 1200 for performing acknowledgement processing at a source node in accordance with an embodiment of the present disclosure.
  • the process 1200 can be performed by any source node such as the source node 104 in FIGS. 1-3.
  • the process 1200 begins, at step 1202, by receiving at a source node an acknowledgement packet that may include metadata information. If the acknowledgement packet does not include metadata information, then the previous packet was successfully received by the destination. If the acknowledgement packet includes metadata information, the process 1200, at step 1204, determines whether the acknowledgement packet indicates that there is a rank short (i.e., rank short greater than zero) meaning that the destination node received less than the original number of linear coded blocks in the previous data packet.
  • the process 1200 determines whether there is a next data packet in the flow. If there is no next data packet in the flow, the process 1200, at step 1214, ends the flow, with the process 1200 terminating thereafter. If there is a next data packet in the flow, the process 1200, at step 1208, linearly codes the blocks in the next data packet. The process 1200, at step 1216, then adds the metadata information in the BPP block of the data packet, and forwards the data packet towards the destination node, with the process 1200 terminating thereafter.
  • If the process 1200 determines that the acknowledgement packet indicates that there is a rank short (i.e., rank short greater than zero), the process 1200, at step 1210, linearly codes a data packet with coded blocks whose coefficients are orthogonal to the coefficients of the coded blocks already received at the destination node.
  • These coded blocks could be the missing coded blocks from the previous data packet and/or any coded blocks whose coefficients are orthogonal to the coefficients of the coded blocks already received at the destination node, enabling the destination node to decode the previously received coded blocks.
  • the data packet includes only these coded blocks for transmission to the destination node (i.e., the number of coded blocks in the data packet equals the rank short indicated in the acknowledgement packet).
  • the data packet could include these coded blocks and other coded blocks (i.e., the number of coded blocks in the data packet is larger than the rank short). For example, if the process 1200, at step 1212, determines that there is a next packet in the flow, the process 1200, at step 1214, can linearly code the additional blocks for the current packet and the one or more blocks of the next packet, and include them in a single data packet.
  • This embodiment can provide an increase in the rate of delivery of the data packets.
  • the process 1200 at step 1216, then adds the metadata information in the BPP block of the data packet, and forwards the data packet towards the destination node, with the process 1200 terminating thereafter.
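  • The source-side acknowledgement handling of process 1200 could be sketched as follows; code_blocks is an assumed helper that stands in for the linear coding steps (returning coefficient vectors orthogonal to a given set), and the flow and packet structures are illustrative.

```python
# Hedged sketch of source-side acknowledgement processing (FIG. 12, process 1200).
# code_blocks(source_blocks, avoid_coefficients, count) is an assumed helper returning
# (coefficients, coded_blocks) whose coefficients are orthogonal to avoid_coefficients.
def process_1200(ack, flow, sent_blocks_by_id, code_blocks, max_blocks_per_packet):
    rank_short = ack.get("rank_short", 0)
    if rank_short == 0:
        if not flow:
            return None                                       # step 1214: end of the flow
        # Step 1208: linearly code the blocks of the next data packet in the flow.
        coeffs, blocks = code_blocks(flow.pop(0), [], max_blocks_per_packet)
    else:
        # Step 1210: code blocks whose coefficients are orthogonal to those already received.
        coeffs, blocks = code_blocks(sent_blocks_by_id[ack["packet_id"]],
                                     ack["coefficients"], rank_short)
        if flow and len(blocks) < max_blocks_per_packet:
            # Step 1212: there is a next packet, so top up the payload with its coded blocks.
            extra_c, extra_b = code_blocks(flow[0], [], max_blocks_per_packet - len(blocks))
            coeffs, blocks = coeffs + extra_c, blocks + extra_b
    # Step 1216: add the metadata and forward the data packet towards the destination
    # (packet ID handling is simplified in this sketch).
    return {"bpp_metadata": {"packet_id": ack["packet_id"], "coefficients": coeffs,
                             "current_rank": len(blocks), "full_rank": len(blocks)},
            "payload": blocks}
```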
  • as an example, consider a first packet and a second packet where each of them can be divided into 4 blocks.
  • the process 1200 determines that two of the coded blocks were lost in the transmission. Thus, the process 1200 needs to send additional coded blocks of the first packet. In an embodiment, only 2 coded blocks of the first packet need to be sent in the next roundtrip. However, the payload may still have space for two more coded blocks.
  • the source node can code two blocks from the second packet. The chosen coefficient needs to be orthogonal to the following extended matrix in order to decode the 4 coded blocks of the first packet and 2 coded blocks of the second packet:
  • the packet IDs of packet 1 and packet 2 are included in the metadata. Additionally, the coded block numbers of the four coded blocks in the first packet and the coded block numbers of the first two coded blocks of the second packet are also indicated in the metadata. This embodiment can be adopted when the network condition is good enough to avoid in-network packet wash on the packet, because if some of the coded blocks are dropped in transmission, the destination would not be able to decode the first packet.
  • FIG. 13 is a flowchart illustrating a process 1300 for performing acknowledgement processing at a source node in accordance with an embodiment of the present disclosure.
  • the process 1300 can be performed by any source node such as the source node 104 in FIGS. 1-3.
  • the process 1300 begins, at step 1302, by dividing a maximum payload size of a data packet into a plurality of payload blocks having a same coded block size.
  • the process 1300 at step 1304, performs linear network coding on the plurality of payload blocks of the data packet. Each of the payload blocks inside the data packet is an independent linear coded payload block.
  • the process 1300, at step 1306, inserts metadata into the data packet.
  • the metadata is inserted in a BPP metadata header of the data packet.
  • the metadata includes a coefficient for each independent linear coded payload block in the data packet, a unique packet identifier (ID) of the data packet, an indication that the data packet is a linear network coded packet type, and the coded block size.
  • the process 1300 transmits the data packet towards a destination node.
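  • An end-to-end sketch of process 1300, tying the coding of EQ. 1 to the metadata of FIG. 5, could look as follows; the zero-padding, the GF(257) field, and the packet dictionary layout are illustrative assumptions.

```python
# Hedged sketch of source-side packet construction (FIG. 13, process 1300).
# Padding, the GF(257) field, and the packet dictionary layout are assumptions.
import random

def build_coded_packet(payload: bytes, packet_id: int, coded_block_size: int) -> dict:
    # Step 1302: divide the payload into blocks of the same coded block size (zero-padded if needed).
    padded = payload + bytes(-len(payload) % coded_block_size)
    blocks = [list(padded[i:i + coded_block_size])
              for i in range(0, len(padded), coded_block_size)]
    h = len(blocks)
    # Step 1304: linear network coding (EQ. 1) with random coefficients over GF(257).
    coeffs = [[random.randrange(257) for _ in range(h)] for _ in range(h)]
    coded = [[sum(g[k] * blocks[k][j] for k in range(h)) % 257
              for j in range(coded_block_size)] for g in coeffs]
    # Step 1306: insert the metadata (packet ID, linear-coded indication, block size, coefficients).
    return {"bpp_metadata": {"packet_id": packet_id, "linear_coded": True,
                             "coded_block_size": coded_block_size, "coefficients": coeffs,
                             "current_rank": h, "full_rank": h},
            "payload": coded}

# Usage: the returned packet can then be transmitted towards the destination node.
pkt = build_coded_packet(b"example application payload", packet_id=1, coded_block_size=8)
```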
  • FIG. 14 is a schematic diagram illustrating a network element 1400 according to an embodiment of the present disclosure.
  • the network element 1400 can be any type of network node such as, but not limited to, source node 104, destination node 106, and network router 110 in FIG. 1.
  • the network element 1400 includes receiver units (RX) or receiving means 1420 for receiving data via ingress ports 1410.
  • the network element 1400 also includes transmitter units (TX) or transmitting means 1440 for transmitting data via egress ports 1450.
  • RX receiver units
  • TX transmitter units
  • the network element 1400 includes a memory or data storing means 1460 for storing the instructions and various data.
  • the memory/data storing means 1460 can be any type of or combination of memory components capable of storing data and/or instructions.
  • the memory/data storing means 1460 can include volatile and/or non-volatile memory such as read-only memory (ROM), random access memory (RAM), ternary content-addressable memory (TCAM), and/or static random-access memory (SRAM).
  • the memory/data storing means 1460 can also include one or more disks, tape drives, and solid-state drives.
  • the memory/data storing means 1460 can be used as an overflow data storage device to store programs when such programs are selected for execution, and to store instructions and data that are read during program execution.
  • the network element 1400 has one or more processors or processing means 1430 (e.g., central processing unit (CPU)) to process instructions.
  • the processor/processing means 1430 may be implemented as one or more CPU chips, cores (e.g., as a multi-core processor), field-programmable gate arrays (FPGAs), application-specific integrated circuits (ASICs), and digital signal processors (DSPs).
  • the processor/processing means 1430 is communicatively coupled via a system bus with the ingress ports 1410, RX 1420, TX 1440, egress ports 1450, and memory/data storing means 1460.
  • the memory/data storing means 1460 can be memory that is integrated with the processor/processing means 1430.
  • the processor/processing means 1430 can be configured to execute instructions stored in the memory/data storing means 1460. Thus, the processor/processing means 1430 is able to perform any computational, comparison, determination, coding, configuration, or any other action corresponding to the claims when the appropriate instruction is executed by the processor.
  • the memory/data storing means 1460 can store an in-packet network linear coding module 1470.
  • the in-packet network linear coding module 1470 includes data and executable instructions for implementing the disclosed embodiments. For instance, the in-packet network linear coding module 1470 can include instructions for implementing the processes described in FIGS. 9-13.
  • the inclusion of the in-packet network linear coding module 1470 substantially improves the functionality of the network element 1400 to perform in-packet network coding to enable effective packet wash and packet enrichment. For instance, in accordance with the disclosed embodiments, when an error occurs to one or more coded blocks/chunks, an intermediate router can simply remove the corrupted blocks from the packet. When the packet eventually reaches the receiver, any coded blocks that are retained in the packet can be cached by the receiver and are useful for future decoding of the original payload after enough degrees of freedom are received. The receiver can request the sender to send more coded blocks to compensate for the missing degrees of freedom. Moreover, the sender does not need to know which coded blocks are lost in transit.
  • the receiver can acknowledge the number of degrees of freedom it has received, and the sender can keep generating packets with new combinations until enough number of coded blocks have been received to decode the original data.
  • in-network caching is enabled by the intermediate routers. If there is a cached coded block belonging to the same Packet ID and it is orthogonal to the already received coded blocks, the cached coded block can be sent to the receiver immediately to reduce the number of missing degrees of freedom. In an embodiment, if random linear network coding is not applied to the coded blocks, only the exact same block matched in the caches can be used.
  • any linearly independent coded blocks could be utilized to drastically increase the degree of freedom at the receiver side.
  • the probability that such a chunk might be cached on the path of the acknowledgement packet forwarding is much larger than the corresponding probability for a particular un-coded block.
  • the probability that such a chunk might be cached on the path of the acknowledgement packet forwarding is increased by designing the sender to break down the same data content that it hosts in the same way for different receivers.
  • the network avoids dropping a whole packet, avoids transport layer time-outs, and avoids interrupting the transmission session to re-transmit a packet.

Abstract

One general aspect includes a method performed by a source node for communicating data packets. The method includes dividing a maximum payload size of a payload of a first data packet into a plurality of payload blocks having a same coded block size; performing linear network coding on the plurality of payload blocks of the first data packet; inserting metadata in the first data packet, where the metadata may include a coefficient for each independent linear coded payload block in the first data packet, a unique packet identifier (ID) of the first data packet, an indication that the first data packet is a linear network coded packet type, and the coded block size; and transmitting the first data packet towards a destination node.

Description

In-Packet Network Coding
CROSS-REFERENCE TO RELATED APPLICATIONS
[0001] This application claims priority to United States provisional patent application number 62/801,471 filed February 5, 2019 by Lijun Dong, and titled “In-Packet Coding to Enable Effective Packet Wash and Packet Enrichment,” which is incorporated by reference.
TECHNICAL FIELD
[0002] The present application relates to network communication, and more specifically to in-packet coding to enable effective packet wash and packet enrichment.
BACKGROUND
[0003] In traditional routing networks, packets are cached and forwarded downstream. Therefore, if a routing node receives two packets from two sources, it forwards them one after another and queues the others in the meantime, even if both are headed for the same destination. This requires separate transmissions for each and every message delivered, which decreases network efficiency. Network coding exploits the characteristics of the broadcast communication channel in order to increase the capacity or the throughput of the network. For example, in network coding, algorithms can be used to merge the two messages received at the routing node, and the accumulated result is forwarded to the destination. After the accumulated message is received, it is decoded at the destination using the same algorithm.
SUMMARY
[0004] A first general aspect relates to a computer-implemented method that is performed by a source node for communicating data packets. The method includes dividing a maximum payload size of a payload of a first data packet into a plurality of payload blocks having a same coded block size. The method performs linear network coding on the plurality of payload blocks of the first data packet. The method inserts metadata into the first data packet. The metadata may include a coefficient for each independent linear coded payload block in the first data packet, a unique packet ID of the first data packet, an indication that the first data packet is a linear network coded packet type, and the coded block size. The method then transmits the first data packet towards a destination node. By utilizing the first aspect and other aspects disclosed herein, the network avoids dropping a whole packet, avoids transport layer time-outs, and avoids interrupting the transmission session to re-transmit a packet.
[0005] In a first implementation form of the computer-implemented method according to the first aspect, the method further includes receiving an acknowledgement packet corresponding to the first data packet that was transmitted towards the destination node. The method determines from metadata in the acknowledgement packet whether the destination node received all of the payload blocks of the first data packet. If the destination node received all of the payload blocks of the first data packet, the method inserts a second plurality of independent linear coded payload blocks into a second data packet. If the destination node did not receive all of the payload blocks of the first data packet, the method inserts new linear coded blocks that have coefficients that are orthogonal to the linear coded blocks received by the destination node in the first data packet. The method inserts metadata in the second data packet. The metadata includes a coefficient for each independent linear coded payload block in the second data packet, a unique packet identifier (ID) of the second data packet, an indication that the second data packet is the linear network coded packet type, and the coded block size. The method then transmits the second data packet towards the destination node.
[0006] In a second implementation form of the computer-implemented method according to the first aspect or any preceding implementation of the first aspect, the method further includes inserting a portion of the second plurality of independent linear coded payload blocks along with the new linear coded blocks that have coefficients that are orthogonal to the linear coded blocks received by the destination node into the second data packet up to a maximum payload size of the second data packet.
[0007] In a third implementation form of the computer-implemented method according to the first aspect or any preceding implementation of the first aspect, the second data packet is transmitted only after receiving the acknowledgement packet corresponding to the first data packet.
[0008] In a fourth implementation form of the computer-implemented method according to the first aspect or any preceding implementation of the first aspect, the metadata is inserted into a BPP metadata header.
[0009] In a fifth implementation form of the computer-implemented method according to the first aspect or any preceding implementation of the first aspect, the new linear coded blocks are the linear coded blocks that were not received by the destination node in the first data packet.
[0010] A second general aspect relates to a computer-implemented method that is performed by a network node for communicating data packets. The method includes receiving a data packet that is to be forwarded towards a destination node. The method determines whether a network condition exists that requires dropping the data packet. If the network condition exists that requires dropping the data packet, the method determines whether the data packet is a linear network coded packet type. If the data packet is not a linear network coded packet type, the method drops the data packet. If the data packet is a linear network coded packet type, the method drops only a portion of the payload blocks of the data packet, and removes coefficients in a BPP metadata header of the data packet that correspond to the portion of the plurality of payload blocks dropped from the data packet. The method then forwards the data packet towards the destination node.
[0011] In a first implementation form of the computer-implemented method according to the second aspect, the method further includes caching the portion of the plurality of payload blocks dropped from the data packet, the coefficients corresponding to the portion of the plurality of payload blocks dropped from the data packet, and a packet ID of the data packet.
[0012] In a second implementation form of the computer-implemented method according to the second aspect or any preceding implementation of the second aspect, the method further includes determining whether a payload of the data packet is full based on a rank of the data packet if the network condition does not require dropping the data packet. The method determines whether there are cached payload blocks belonging to a same flow as the data packet if the payload of the data packet is not full. If there are cached payload blocks belonging to the same flow as the data packet, the method inserts the cached payload blocks into the payload of the data packet up to a maximum payload size of the data packet, adds coefficients corresponding to the cached payload blocks inserted into the data packet, and increases the rank of the data packet to account for the inserted cached payload blocks.
[0013] A third general aspect relates to a computer-implemented method that is performed by a network node for communicating data packets. The method includes receiving an acknowledgement packet to be forwarded towards a source node. The method determines whether the acknowledgement packet indicates that a first data packet received by a destination node of the data packet was missing a portion of a payload of the first data packet. If the acknowledgement packet indicates that the first data packet received by the destination node was missing a portion of the payload of the first data packet, the method determines whether the network node has cached data that contributes to decoding of the payload of the first data packet. If the network node has cached data that contributes to decoding of the payload of the first data packet, the method transmits a second data packet towards the destination node comprising the cached data that contributes to decoding of the payload of the first data packet. The method updates the acknowledgement packet to account for the cached data transmitted by the network node. The method forwards the acknowledgement packet towards the source node.
[0014] In a first implementation form of the computer-implemented method according to the third aspect, the cached data contains linear coded blocks that were not received by the destination node in the first data packet.
[0015] In a second implementation form of the computer-implemented method according to the third aspect or any preceding implementation of the third aspect, the cached data contains new linear coded blocks that have coefficients that are orthogonal to linear coded blocks received by the destination node in the first data packet.
[0016] A fourth general aspect relates to a computer-implemented method that is performed by a destination node for communicating data packets. The method includes receiving a data packet intended for the destination node. The method determines whether the data packet is a linear network coded packet type. If the data packet is a linear network coded packet type, the method determines whether a full payload of the data packet is received by the destination node based on a current rank of the data packet. If the full payload of the data packet is received by the destination node, the method decodes the full payload of the data packet, and sends an acknowledgement packet for a next packet in the flow towards a source node. If the full payload of the data packet is not received by the destination node, the method sends an acknowledgement packet towards the source node that includes a packet ID, coefficients of received payload blocks in the data packet, the current rank, and a full rank; and waits for additional payload blocks that contribute to decoding the payload of the data packet.
[0017] Other embodiments of the above aspects include corresponding systems, apparatus, network nodes, and computer programs recorded on one or more computer storage devices, each configured to perform the actions of the preceding aspects and implementations thereof.
[0018] Additionally, for the purpose of clarity, any one of the foregoing embodiments may be combined with any one or more of the other foregoing embodiments to create a new embodiment within the scope of the present disclosure.
[0019] These and other features, and the advantages thereof, will be more clearly understood from the following detailed description taken in conjunction with the accompanying drawings and claims.
BRIEF DESCRIPTION OF THE DRAWINGS
[0020] For a more complete understanding of this disclosure, reference is now made to the following brief description, taken in connection with the accompanying drawings and detailed description, wherein like reference numerals represent like parts.
[0021] FIG. 1 is a schematic diagram illustrating a data packet transmission being dropped due to a packet error.
[0022] FIG. 2 is a schematic diagram illustrating a data packet transmission being dropped due to a network condition.
[0023] FIG. 3 is a schematic diagram illustrating a data packet transmission in accordance with an embodiment of the present disclosure.
[0024] FIG. 4 is a schematic diagram that illustrates linear network coding inside a data packet in accordance with an embodiment of the present disclosure.
[0025] FIG. 5 is a schematic diagram that illustrates a Big Packet Protocol (BPP) metadata block inside a data packet in accordance with an embodiment of the present disclosure.
[0026] FIG. 6 is a schematic diagram that illustrates linear network coding inside a data packet in accordance with an alternative embodiment of the present disclosure.
[0027] FIG. 7 is a schematic diagram that illustrates a BPP metadata block inside a data packet in accordance with an alternative embodiment of the present disclosure.
[0028] FIG. 8 is a schematic diagram that illustrates BPP metadata in an acknowledgement packet in accordance with an alternative embodiment of the present disclosure.
[0029] FIG. 9 is a flowchart illustrating a process for performing an in-network packet wash procedure in accordance with an embodiment of the present disclosure.
[0030] FIG. 10 is a flowchart illustrating a process for processing a data packet in accordance with an embodiment of the present disclosure.
[0031] FIG. 11 is a flowchart illustrating a process for performing in-network acknowledgement processing in accordance with an embodiment of the present disclosure.
[0032] FIG. 12 is a flowchart illustrating a process for performing acknowledgement processing at a source node in accordance with an embodiment of the present disclosure.
[0033] FIG. 13 is a flowchart illustrating a process for performing acknowledgement processing at a source node in accordance with an embodiment of the present disclosure.
[0034] FIG. 14 is a schematic diagram illustrating a network element according to an embodiment of the present disclosure.
DETAILED DESCRIPTION
[0035] It should be understood at the outset that, although illustrative implementations of one or more embodiments are provided below, the disclosed systems and/or methods may be implemented using any number of techniques, whether currently known or in existence. The disclosure should in no way be limited to the illustrative implementations, drawings, and techniques illustrated below, including the exemplary designs and implementations illustrated and described herein, but may be modified within the scope of the appended claims along with their full scope of equivalents.
[0036] The disclosed embodiments provide several improvements to data communications. In general, linear network coding is performed to create data packets having a plurality of linear coded blocks in the payload. When an error affects one or more coded blocks/chunks, or when a network condition arises, an intermediate router can simply remove one or more coded blocks from the packet and forward the remainder of the packet. When the packet reaches the receiver, any coded blocks that are retained in the packet can be cached by the receiver. The retained coded blocks are useful for future decoding of the original payload after enough degrees of freedom are received. The receiver can request that the sender send more coded blocks to compensate for the missing degrees of freedom. Moreover, the sender does not need to know which coded blocks were lost in transit. It only needs to send more (linearly independent) coded blocks of newly coded packets, in a number equal to or greater than the missing degrees of freedom. Additionally, in-network caching is enabled by the intermediate routers. If there is a cached coded block belonging to the same Packet ID and it is orthogonal to the already received coded blocks, the cached coded block can be sent to the receiver immediately to reduce the number of missing degrees of freedom. With the proposed mechanism of applying random linear network coding to the packet coded blocks, there is no need to match on a particular original chunk; any linearly independent coded block can be utilized to drastically increase the degrees of freedom at the receiver side. By utilizing the disclosed embodiments, the network avoids dropping a whole packet, avoids transport layer time-outs, and avoids interrupting the transmission session to re-transmit a packet.
[0037] FIG. 1 is a schematic diagram illustrating a data packet transmission being dropped due to a packet error. In FIG. 1, a data packet 102 is being transmitted from a source node 104 to a destination node 106 over a network. The network can comprise one or more wired or wireless/mobile networks. The data packet 102 is transmitted along a path that includes one or more network routers 110 that forward the data packet 102 towards the destination node 106. The destination node 106 can be any type of device that is capable of requesting the data packet 102. In the current Internet, the data packet 102 will be completely dropped at a network router 110 when an error occurs during the packet transmission or propagation that is not correctable by any means (e.g., cyclic redundancy check (CRC)). As shown in FIG. 1, when this happens, the network router 110 drops the data packet 102 and sends a request to the source node 104 to retransmit the data packet 102.
[0038] Similarly, as shown in FIG. 2, when the network is congested, a policy may be adopted for the existing packets or the newly arriving packets at a network router 110 to be buffered and wait in the queue. However, if the buffer of the network router 110 is full, one or more of the data packets 102 are dropped. The network router 110 then sends a request to the source node 104 to retransmit the data packet 102. Other conditions may also cause the data packet 102 to be dropped, including, but not limited to, when the size of the data packet 102 is bigger than the maximum transmission unit (MTU) of the network router 110. As shown in FIG. 1 and FIG. 2, when the data packet 102 is dropped by a network router 110, retransmission of the data packet 102 is needed until the data packet 102 is received and acknowledged by the destination node 106 or the maximum retransmission time is reached. This process wastes network resources and increases latency. With emerging use cases, such as holographic telepresence and the tactile Internet, that require extremely low latency, the present disclosure seeks to address the above issues to improve data packet communications.
[0039] FIG. 3 is a schematic diagram illustrating a data packet transmission in accordance with an embodiment of the present disclosure. In the depicted embodiment, the source node 104 applies linear network coding to the packet payload inside of the data packet 102 to enable packet wash to be performed, if needed, by a network router 110 inside the network. A packet wash is an operation that reduces the size of a payload of the data packet 102 while attempting to maintain as much of the content as possible. Packet washing may be necessary if a network is congested, an error occurs, or the packet exceeds an MTU of a network node. By performing a packet wash procedure, the network node is able to forward some portion of a data packet 102 without having to drop the entire data packet 102.
[0040] Linear network coding is a networking technique in which transmitted data is encoded and decoded to increase network throughput, reduce delays, and make the network more robust. Linear network coding generates new packets which are linear combinations of earlier received packets. With linear network coding, the bits in the information flow do not have to be delivered as complete data packets 102 to a receiving host. Instead, the bits in the information flow can be mixed. The receiving host only requires sufficient information to reconstruct the original packets from the source node 104.
[0041] In an embodiment, the linear network coding is formulated as:
[0042] $y_j = \sum_{i=1}^{h} g_{j,i}\, x_i, \quad j = 1, \ldots, h$ (EQ. 1)
[0043] where $x_1, x_2, x_3, \ldots, x_h$ are the original source packets and $g_{j,i}$ are the encoding coefficients.
[0044] The decoding is shown as follows:
[0045] $\begin{pmatrix} x_1 \\ \vdots \\ x_h \end{pmatrix} = G^{-1} \begin{pmatrix} y_1 \\ \vdots \\ y_h \end{pmatrix}$ (EQ. 2)
[0046] Node t can recover the source symbols $x_1, \ldots, x_h$ as long as the matrix $G$, formed by the global encoding vectors, has (full) rank $h$ as shown below:
[0047] $G = \begin{pmatrix} g_{1,1} & \cdots & g_{1,h} \\ \vdots & \ddots & \vdots \\ g_{h,1} & \cdots & g_{h,h} \end{pmatrix}, \quad \operatorname{rank}(G) = h$ (EQ. 3)
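For illustration only, the minimal Python sketch below exercises EQ. 1 through EQ. 3 over the binary field GF(2), where a coded block is the bitwise XOR of the source blocks selected by its coefficient vector and decoding is Gaussian elimination on the coefficient matrix G. The field choice and all function and variable names are assumptions of the sketch; practical random linear network coding commonly uses a larger field such as GF(2^8).

```python
import random

def encode(source_blocks, coeffs):
    """EQ. 1: each coded block y_j is the GF(2) combination (XOR) of the
    source blocks x_i selected by its coefficient vector g_j."""
    block_len = len(source_blocks[0])
    coded = []
    for g in coeffs:
        y = bytearray(block_len)
        for g_i, x in zip(g, source_blocks):
            if g_i:                                   # GF(2): coefficients are 0 or 1
                y = bytearray(a ^ b for a, b in zip(y, x))
        coded.append(bytes(y))
    return coded

def decode(coeffs, coded_blocks):
    """EQ. 2 / EQ. 3: Gaussian elimination over GF(2) on [G | Y]; the source
    blocks are recoverable only when G has full rank h."""
    G = [list(row) for row in coeffs]
    Y = [bytearray(b) for b in coded_blocks]
    h = len(G[0])
    row = 0
    for col in range(h):
        pivot = next((r for r in range(row, len(G)) if G[r][col]), None)
        if pivot is None:
            raise ValueError("rank deficient: more coded blocks are needed")
        G[row], G[pivot] = G[pivot], G[row]
        Y[row], Y[pivot] = Y[pivot], Y[row]
        for r in range(len(G)):
            if r != row and G[r][col]:
                G[r] = [a ^ b for a, b in zip(G[r], G[row])]
                Y[r] = bytearray(a ^ b for a, b in zip(Y[r], Y[row]))
        row += 1
    return [bytes(Y[i]) for i in range(h)]

# Example: h = 4 source blocks of 8 bytes each, coded with a random GF(2) matrix.
h, block_size = 4, 8
x = [bytes(random.getrandbits(8) for _ in range(block_size)) for _ in range(h)]
G = [[random.getrandbits(1) for _ in range(h)] for _ in range(h)]
try:
    assert decode(G, encode(x, G)) == x      # succeeds whenever rank(G) == h
except ValueError:
    pass                                     # the random matrix happened to be singular
```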
[0048] In FIG. 3, if a network router 110 performs a packet wash procedure on the data packet 102 to reduce the payload of the data packet 102, then when the data packet 102 reaches a receiving node, the receiving node can send an acknowledgement packet back to the source node 104. The acknowledgement packet can include metadata specifying the coded blocks that were missing from the payload of the data packet 102. Alternatively, the metadata can specify the coded blocks in the payload of the data packet 102 that were received at the receiving node. In some embodiments, retransmission may not be needed if the receiving node has the capability or intelligence to deduce the entire information from what is left in the data packet 102 after a packet wash procedure is performed on the original packet by the network routers 110. Advantages of the disclosed embodiments include reduced network resource usage, better prioritization of network resources, and lower latency, because either no retransmission is needed or a smaller packet is retransmitted after a packet wash procedure.
[0049] FIG. 4 is a schematic diagram that illustrates linear network coding inside a data packet 400 in accordance with an embodiment of the present disclosure. The data packet 400 can be an example of the data packet 102 that is originated from a source node, such as the source node 104 in FIG. 1. The data packet 400 includes a header 402, a BPP block 404, and packet payload 406. The header 402 can contain data that specifies the type of data packet, a source Internet protocol (IP) address, and a destination IP address. The BPP block 404 can contain directives that provide guidance for how the packet should be processed or what resources must be allocated for a flow, as well as metadata about the packet and the flow that the packet is a part of. BPP provides the extensions to the current IP packet to carry instructions and metadata information. For example, the BPP block 404 can include a BPP command block and a BPP metadata block. Commands can include, but are not limited to, determining conditions when to drop a packet, which queue to use, what resources to allocate, or when and how to perform a packet wash procedure. The BPP metadata block can be used to carry information about the packet, such as, but not limited to, information regarding the packet payload 406.
[0050] In an embodiment, the packet payload 406 is linear network coded into (h) number of coded blocks with the same block size. A constraint is that the number of blocks ( h ) times the block size is less than or equal to the payload size of the data packet 400 (i.e., h* block size <=payload size). After linear network coding on the blocks inside the data packet 400, the source node 104 will include the same number of independent coded blocks 408 in the data packet 400, which will have a full rank equal to h.
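As a brief illustration of the constraint above, the following sketch divides a maximum payload size into h equal-size blocks; the function name and the zero-padding of the final block are assumptions made for the example.

```python
def split_into_blocks(content: bytes, max_payload_size: int, block_size: int):
    """Divide a payload into h equal-size blocks, where h * block_size is no
    larger than the maximum payload size. Zero-padding the final block is an
    assumption of this sketch rather than something specified above."""
    h = max_payload_size // block_size            # number of coded blocks, h
    if len(content) > h * block_size:
        raise ValueError("content does not fit in a single coded packet")
    padded = content.ljust(h * block_size, b"\x00")
    return [padded[i * block_size:(i + 1) * block_size] for i in range(h)]

blocks = split_into_blocks(b"example payload", max_payload_size=32, block_size=8)
full_rank = len(blocks)    # h = 4: the packet carries h independent coded blocks
```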
[0051] FIG. 5 is a schematic diagram that illustrates a BPP metadata block 500 inside the BPP block 404 of the data packet 400 in accordance with an embodiment of the present disclosure. In the depicted embodiment, the BPP metadata block 500 includes a packet identifier (ID) and block number 502, an indication of linear coding inside of payload 504, coded block size 506, coefficients 508, current rank 510, and full rank 512. The coded block size 506 is the size of each of the coded blocks 408 in the payload 406 of the data packet 400. The coded block size 506 can be used by a network node to find the chunk/coded block delineator when packet wash is performed. In an embodiment, the coded blocks 408 in the payload 406 of the data packet 400 are equal in significance. In other embodiments, the coded blocks 408 in the payload 406 of the data packet 400 may be prioritized, and lower priority coded blocks are removed before higher priority coded blocks when packet wash is performed. The coefficients 508 are data that is used to identify and decode the coded blocks 408 in the payload 406. The current rank 510 is the number of coded blocks 408 currently in the payload 406 of the data packet 400. The full rank 512 is the number of coded blocks 408 that can be contained in a full payload 406 of the data packet 400. The current rank 510 is adjusted when packet wash is performed on the data packet 400. For example, if a condition occurs that requires dropping the data packet 400, then a packet wash procedure is performed to drop some coded chunks (based on the chunk size) from the payload 406 of the data packet 400 as necessary. The current rank 510 is decreased by one for each coded block 408 that is removed from the payload of the data packet 400. The coefficients 508 and the full rank 512 are not changed.
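The following sketch models the metadata fields of FIG. 5 as a simple data structure; the field names and types are illustrative assumptions, and the washed() helper only restates the rank comparison described above.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class BppMetadata:
    """Per-packet metadata carried in the BPP metadata block (FIG. 5)."""
    packet_id: int                    # packet identifier and block number
    linear_coded: bool                # indication of linear coding inside the payload
    coded_block_size: int             # size of each coded block, in bytes
    coefficients: List[List[int]]     # one coefficient vector per coded block still present
    current_rank: int                 # number of coded blocks currently in the payload
    full_rank: int                    # number of coded blocks in a full payload

    def washed(self) -> bool:
        """True if an in-network packet wash has removed at least one coded block."""
        return self.current_rank < self.full_rank
```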
[0052] When the data packet 400 reaches its destination, if the receiver finds that the number of coded chunks retained in the payload (the current rank) is smaller than the full rank, then the receiver determines that a packet wash occurred in the network. In an embodiment, the receiver will cache the coefficients 508 related to the remaining coded chunks, as well as the coded chunks themselves, for future decoding. The receiver will include the rank shortage in an acknowledgement packet and request that the sender send additional coded chunks based on this information.
[0053] FIG. 6 is a schematic diagram that illustrates linear network coding inside a data packet 600 in accordance with an alternative embodiment of the present disclosure. The data packet 600 is similar to the data packet 400, except that the coefficients 508 that are used to identify and decode the coded blocks 408 in the payload 406 are part of the payload 406 of the data packet 600 instead of being in the BPP metadata block 500 as shown in FIG. 5.
[0054] FIG. 7 is a schematic diagram that illustrates a BPP metadata block 700 inside the data packet 400 in accordance with an alternative embodiment of the present disclosure. In the depicted embodiment, the BPP metadata block 700 includes the indication of linear coding inside of payload 504, the coded block size 506, and the current rank 510 as described in FIG. 5. In this embodiment, the receiver can determine that packet wash occurred in the network based on the coefficients 508 included in the payload 406 as described in FIG. 6. The alternative embodiments of FIG. 6 and FIG. 7 can reduce the overhead of the data packet 600 and increase the overall transmission speed of the data packet 600.
[0055] FIG. 8 is a schematic diagram that illustrates BPP metadata 800 in an acknowledgement packet in accordance with an alternative embodiment of the present disclosure. As stated above, in an embodiment, if a receiver/destination device determines that packet wash occurred in the network, the receiver will cache the coefficients related to the remaining coded chunks, as well as the coded chunks for future decoding. The receiver will send an acknowledgement packet that includes the BPP metadata 800 and request that the sender send additional coded chunks based on this information.
[0056] In the depicted embodiment, the BPP metadata 800 includes a packet ID 802, a coded block size 804, coefficients 806, and rank short 808. The packet ID 802 is an identifier that identifies the data packet that was received by the destination device. The coded block size 804 indicates the coded block size received by the receiver in the data packet. The coefficients 806 specify the coefficients in the data packet received by the destination device. The rank short 808 indicates the number of coded blocks that were removed from the data packet received by the destination device.
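A corresponding sketch of the acknowledgement metadata of FIG. 8 is shown below; again the field names are assumptions, and the example values mirror a packet from which two of four coded blocks were removed.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class AckMetadata:
    """BPP metadata carried back to the source in an acknowledgement (FIG. 8)."""
    packet_id: int                  # identifies the data packet that was received
    coded_block_size: int           # size of the coded blocks that arrived
    coefficients: List[List[int]]   # coefficient vectors of the received coded blocks
    rank_short: int                 # full rank minus current rank: blocks still missing

# Example: two of four coded blocks of packet 7 reached the destination.
ack = AckMetadata(packet_id=7, coded_block_size=8,
                  coefficients=[[1, 0, 0, 0], [0, 1, 0, 0]], rank_short=2)
```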
[0057] FIG. 9 is a flowchart illustrating a process 900 for performing an in-network packet wash procedure in accordance with an embodiment of the present disclosure. The process 900 can be performed by any in-network node such as a router, switch, or any other network device. For example, the process 900 can be performed by the network routers 110 in FIGS. 1-3. After a coded packet is sent from a source (e.g., the source node 104 in FIG. 1), the process 900, at step 902, receives the coded packet at an in-network node, such as a BPP router, for forwarding towards a destination node. At step 904, the process 900 determines whether any condition occurs that requires dropping the packet. Non-limiting conditions that may require that the data packet be dropped include packet error, network congestion, buffer full, and a packet size that exceeds an MTU of the in-network node. If there are any conditions that require dropping the data packet, then the process 900, at step 906, determines whether the data packet includes a linear coded payload. This determination may be made based on the BPP metadata as described in FIG. 5 and FIG. 7. In an embodiment, if the process 900 determines that the data packet does not include a linear coded payload, the process 900, at step 916, drops the entire data packet, with the process 900 terminating thereafter. If the process 900 determines that the data packet includes a linear coded payload, the process 900, at step 908, drops one or more of the coded blocks from the payload of the data packet as necessary to be able to forward the remaining payload towards the destination node. The process 900, at step 910, removes the corresponding coefficients of the dropped blocks. As described above, the coefficients can be either part of the payload of the data packet or part of the BPP metadata included in the data packet. In an embodiment, the process 900, at step 912, caches the coded blocks that are removed from the data packet, the corresponding coefficients of the removed coded blocks, and the packet ID of the data packet that the coded blocks are removed from. At step 914, the process 900 forwards the data packet with the remaining payload towards a destination node, with the process 900 terminating thereafter.
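A minimal sketch of the drop path of process 900 (steps 906 through 916) is shown below; the CodedPacket structure, the cache layout, and the choice to remove trailing blocks are assumptions of the sketch.

```python
from dataclasses import dataclass
from typing import Dict, List, Tuple

@dataclass
class CodedPacket:
    """Assumed in-memory view of a linear-coded BPP data packet for this sketch."""
    packet_id: int
    linear_coded: bool
    coefficients: List[List[int]]    # one coefficient vector per coded block in the payload
    coded_blocks: List[bytes]
    current_rank: int
    full_rank: int

Cache = Dict[int, List[Tuple[List[int], bytes]]]   # packet ID -> cached (coefficient, block)

def packet_wash(pkt: CodedPacket, cache: Cache, blocks_to_drop: int):
    """Steps 906-916 of process 900: when a condition would otherwise force the
    packet to be dropped, remove trailing coded blocks instead, cache them, and
    forward whatever remains."""
    if not pkt.linear_coded:
        return None                                     # step 916: drop the whole packet
    kept = max(pkt.current_rank - blocks_to_drop, 0)
    cache.setdefault(pkt.packet_id, []).extend(         # step 912: cache removed material
        zip(pkt.coefficients[kept:], pkt.coded_blocks[kept:]))
    pkt.coded_blocks = pkt.coded_blocks[:kept]          # step 908: drop coded blocks
    pkt.coefficients = pkt.coefficients[:kept]          # step 910: drop their coefficients
    pkt.current_rank = kept                             # the full rank is left unchanged
    return pkt                                          # step 914: forward the remainder

cache: Cache = {}
pkt = CodedPacket(packet_id=7, linear_coded=True,
                  coefficients=[[1, 0, 0, 0], [0, 1, 0, 0], [1, 1, 0, 0], [0, 0, 1, 1]],
                  coded_blocks=[b"A" * 8, b"B" * 8, b"C" * 8, b"D" * 8],
                  current_rank=4, full_rank=4)
packet_wash(pkt, cache, blocks_to_drop=2)   # pkt now carries 2 blocks; 2 are cached
```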
[0058] Back at step 904, if the process 900 determines that the network condition is good (i.e., the data packet does not have to be dropped), the in-network node attempts to find cached coded blocks that belong to the same packet ID in order to contribute to the rank. In an embodiment, the process 900, at step 918, determines whether the payload of the data packet is full by determining whether a current rank specified in the BPP metadata of the data packet is less than a full rank specified in the BPP metadata of the data packet. If the payload of the data packet is full (i.e., current rank equals full rank), then the process 900, at step 914, forwards the data packet with the remaining payload towards the destination node, with the process 900 terminating thereafter. If the current rank is less than the full rank, then the data packet is not full. This scenario may occur if the source node sent a partial packet or if the data packet was washed at a previous network node along the communication path. If the payload of the data packet is not full, then the process 900, at step 920, determines whether the network condition allows more coded blocks to be added to the payload without exceeding an MTU size. If the network condition does not allow more coded blocks to be added to the payload or adding to the payload would exceed the MTU size, the process 900, at step 914, forwards the data packet with the remaining payload towards the destination node, with the process 900 terminating thereafter.
[0059] If the network condition allows more coded blocks to be added to the payload without exceeding an MTU size, the process 900, at step 922, determines whether there are cached coded blocks that belong to the same packet ID and whether their corresponding coefficients contribute to the rank. If there are no cached coded blocks that belong to the same packet ID whose coefficients contribute to the rank, the process 900, at step 914, forwards the data packet with the remaining payload towards the destination node, with the process 900 terminating thereafter.
[0060] If there are one or more cached coded blocks that belong to the same packet ID whose coefficients contribute to the rank, the process 900, at step 924, inserts additional cached coded blocks at the end of the payload, while ensuring that the current rank remains less than or equal to the full rank. The process 900, at step 926, then adds the coefficients corresponding to the additional cached coded blocks that were added to the end of the payload to the coefficient metadata in the data packet. The process 900, at step 928, increases the current rank to account for the added coded blocks. The process 900, at step 914, forwards the data packet with the updated payload towards the destination node, with the process 900 terminating thereafter.
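The enrichment path (steps 918 through 928) can be sketched as follows; the rank test over GF(2) is a simplifying assumption, as are the function and parameter names.

```python
def gf2_rank(vectors):
    """Rank over GF(2) of a list of 0/1 coefficient vectors (a simplifying field assumption)."""
    pivots = {}                                   # highest set bit -> reduced basis vector
    for vec in vectors:
        v = int("".join(map(str, vec)), 2)
        while v:
            hi = v.bit_length() - 1
            if hi not in pivots:
                pivots[hi] = v
                break
            v ^= pivots[hi]
    return len(pivots)

def enrich(coefficients, coded_blocks, full_rank, cached, room_for_blocks):
    """Steps 918-928 of process 900: while the payload is not full and the MTU
    budget allows, append cached coded blocks whose coefficients raise the rank,
    and record their coefficients in the metadata."""
    for coeff, block in cached:                   # cached entries share the packet ID
        if len(coded_blocks) >= full_rank or room_for_blocks == 0:
            break
        if gf2_rank(coefficients + [coeff]) > gf2_rank(coefficients):
            coefficients.append(coeff)            # step 926: add the coefficient
            coded_blocks.append(block)            # step 924: append the coded block
            room_for_blocks -= 1
    return len(coded_blocks)                      # step 928: the updated current rank

# Example: a washed packet with rank 2 out of 4 is topped up from the cache.
coeffs = [[1, 0, 0, 0], [0, 1, 0, 0]]
blocks = [b"A" * 8, b"B" * 8]
cached = [([1, 1, 0, 0], b"C" * 8), ([0, 0, 1, 1], b"D" * 8)]
new_rank = enrich(coeffs, blocks, full_rank=4, cached=cached, room_for_blocks=2)
# new_rank == 3: [1,1,0,0] depends on the first two vectors, while [0,0,1,1] does not.
```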
[0061] FIG. 10 is a flowchart illustrating a process 1000 for processing a data packet in accordance with an embodiment of the present disclosure. The process 1000 can be performed by any destination node such as the destination node 106 in FIGS. 1-3. The process 1000 begins, at step 1002, by receiving a data packet whose destination is the current/receiving node as specified in a header of the data packet. The process 1000, at step 1004, determines whether BPP metadata contained in the data packet indicates that the data packet includes a linear coded payload. If the BPP metadata contained in the data packet does not indicate that the data packet includes a linear coded payload (i.e., the data packet has a regular packet payload), the process 1000, at step 1014, can successfully decode the data packet. The process 1000, at step 1016, requests the next packet in the flow from the source by sending an acknowledgement packet to the source indicating that the data packet was successfully decoded at the destination, with the process 1000 terminating thereafter.
[0062] If, at step 1004, the process 1000 determines that the BPP metadata contained in the data packet indicates that the data packet includes a linear coded payload, then the process 1000, at step 1006, determines whether the current rank specified in the BPP metadata of the data packet is equal to the full rank specified in the BPP metadata of the data packet. If the process 1000 determines that the current rank specified in the BPP metadata of the data packet is equal to the full rank specified in the BPP metadata of the data packet, the process 1000, at step 1014, can successfully decode the data packet with the coefficients contained in the data packet. The process 1000, at step 1016, requests the next packet in the flow from the source by sending an acknowledgement packet to the source indicating that the data packet was successfully decoded at the destination, with the process 1000 terminating thereafter.
[0063] If, at step 1006, the process 1000 determines that the current rank specified in the BPP metadata of the data packet is not equal to the full rank specified in the BPP metadata of the data packet (i.e., some of the coded blocks in the payload were removed in network by a packet wash procedure), the data packet cannot be decoded. The process 1000, at step 1008, records the received coded data packet, a packet ID, the coefficients of the coded blocks received in the payload of the data packet, and the current rank and full rank specified in the BPP metadata of the data packet. At step 1010, the process 1000 sends an acknowledgment packet with some or all of the recorded information included in the metadata to the source. For example, the process 1000 can send an acknowledgment packet containing the packet ID, coded block size, the coefficients of the coded blocks received in the payload of the data packet, and a rank short (i.e., rank short = full rank minus current rank) as shown in FIG. 8. At step 1012, the process 1000 waits for the additional coded blocks to decode the entire data packet, with the process 1000 terminating thereafter. In an embodiment, the additional coded blocks can be one or more of the coded blocks that were missing/removed from the original data packet and/or coded blocks that have coefficients that are orthogonal to the already received coded blocks.
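A condensed sketch of process 1000 at the destination is shown below; the dictionary-based acknowledgement and the parameter names are assumptions of the example.

```python
def receive_at_destination(packet_id, linear_coded, coded_block_size,
                           coefficients, coded_blocks, current_rank, full_rank, cache):
    """Process 1000 at the destination: decode when the current rank equals the
    full rank; otherwise record what arrived and acknowledge the rank shortage.
    The dictionary-shaped acknowledgement mirrors the FIG. 8 fields."""
    if not linear_coded or current_rank == full_rank:
        # Steps 1014-1016: the payload is decodable; ask for the next packet in the flow.
        return {"ack": "next-packet", "packet_id": packet_id}
    # Step 1008: cache the received blocks and coefficients for later decoding.
    cache[packet_id] = (coefficients, coded_blocks)
    # Step 1010: report the shortage so missing degrees of freedom can be supplied.
    return {"ack": "rank-short",
            "packet_id": packet_id,
            "coded_block_size": coded_block_size,
            "coefficients": coefficients,
            "rank_short": full_rank - current_rank}

# Example: a packet arrives with 2 of its 4 coded blocks after an in-network wash.
cache = {}
ack = receive_at_destination(packet_id=7, linear_coded=True, coded_block_size=8,
                             coefficients=[[1, 0, 0, 0], [0, 1, 0, 0]],
                             coded_blocks=[b"A" * 8, b"B" * 8],
                             current_rank=2, full_rank=4, cache=cache)
# ack["rank_short"] == 2; the destination now waits for two more useful coded blocks.
```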
[0064] FIG. 11 is a flowchart illustrating a process 1100 for performing in-network acknowledgement processing in accordance with an embodiment of the present disclosure. The process 1100 can be performed by any network node such as the network routers 110 in FIGS. 1-3. The process 1100 begins, at step 1102, by receiving an acknowledgement packet en route to a source node that includes a rank short that indicates that the data packet received by the destination node was missing one or more linear coded blocks in the payload of the data packet. For example, the acknowledgement packet could include the packet ID, coded block size, the coefficients of the coded blocks received in the payload of the data packet, and a rank short as shown in FIG. 8. In order to contribute to the rank, the process 1100, at step 1104, determines whether there are cached coded blocks that have the same coded block size and belong to the same packet ID (i.e., same packet flow) that are stored at the receiving network node. If the process 1100 determines that there are no cached coded blocks that have the same coded block size and belong to the same packet ID that are stored at the receiving network node, the process 1100, at step 1118, forwards the acknowledgement packet towards the source node, with the process 1100 terminating thereafter.
[0065] If the process 1100 determines that there are cached coded blocks that have the same coded block size and belong to the same packet ID that are stored at the receiving network node, the process 1100, at step 1106, determines whether the coefficient of the cached coded block contributes to the rank. In an embodiment, the coefficient of the cached coded block contributes to the rank only if it is orthogonal to the coefficients of the already received coded blocks at the destination node, because the destination node can only decode the already received coded blocks with coded blocks that have orthogonal coefficients (e.g., using EQ. 2). If the process 1100 determines that there are no cached coded blocks that have a coefficient that contributes to the rank, the process 1100, at step 1118, forwards the acknowledgement packet towards the source node, with the process 1100 terminating thereafter. If the process 1100 determines that there are cached coded blocks that have coefficients that contribute to the rank, the process 1100, at step 1108, transmits a data packet that includes the one or more cached coded blocks and the corresponding coefficients that contribute to the rank to the destination node.
[0066] At step 1110, the process 1100 updates the rank short in the acknowledgement packet accordingly (e.g., rank short = rank short - number of cached coded blocks returned to the destination node). At step 1112, the process 1100 determines whether the updated rank short equals zero. If the rank short can be reduced to zero, then, at step 1116, the process 1100 modifies the acknowledgement packet to remove all metadata information and request the next packet in the flow from the source node. The process 1100, at step 1118, forwards the acknowledgement packet that requests the next packet towards the source node, with the process 1100 terminating thereafter.
[0067] If, at step 1112, the process 1100 determines that the updated rank short does not equal zero, the process 1100, at step 1114, modifies the acknowledgement packet by adding the coefficients corresponding to the cached coded blocks returned to the destination node to the acknowledgement packet. The process 1100, at step 1118, forwards the modified acknowledgement packet towards the source node, with the process 1100 terminating thereafter.
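The in-network acknowledgement handling of process 1100 can be sketched as follows, using a GF(2) rank test as a stand-in for the orthogonality check; the ack and cache layouts are assumptions of the example.

```python
def gf2_rank(vectors):
    """Rank over GF(2) of 0/1 coefficient vectors (a simplifying field assumption)."""
    pivots = {}
    for vec in vectors:
        v = int("".join(map(str, vec)), 2)
        while v:
            hi = v.bit_length() - 1
            if hi not in pivots:
                pivots[hi] = v
                break
            v ^= pivots[hi]
    return len(pivots)

def process_ack_at_router(ack, cache):
    """Process 1100 at an in-network node: if cached coded blocks for the
    acknowledged packet ID add degrees of freedom, send them towards the
    destination and shrink the reported rank shortage before forwarding the ack."""
    to_send = []                                     # (coefficient, block) pairs, step 1108
    received = list(ack["coefficients"])
    for coeff, block in cache.get(ack["packet_id"], []):
        if len(to_send) == ack["rank_short"]:
            break
        if gf2_rank(received + [coeff]) > gf2_rank(received):    # step 1106
            to_send.append((coeff, block))
            received.append(coeff)
    ack["rank_short"] -= len(to_send)                # step 1110
    if ack["rank_short"] == 0:                       # step 1116: request the next packet
        ack = {"packet_id": ack["packet_id"], "request": "next-packet"}
    else:                                            # step 1114: report what was supplied
        ack["coefficients"] = received
    return ack, to_send                              # step 1118: forward the (modified) ack

# Example: the router holds the two blocks washed out of packet 7 earlier.
cache = {7: [([1, 1, 0, 0], b"C" * 8), ([0, 0, 1, 1], b"D" * 8)]}
ack = {"packet_id": 7, "coded_block_size": 8, "rank_short": 2,
       "coefficients": [[1, 0, 0, 0], [0, 1, 0, 0]]}
ack, sent = process_ack_at_router(ack, cache)
# Only [0,0,1,1] contributes to the rank, so one block is sent and rank_short becomes 1.
```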
[0068] FIG. 12 is a flowchart illustrating a process 1200 for performing acknowledgement processing at a source node in accordance with an embodiment of the present disclosure. The process 1200 can be performed by any source node such as the source node 104 in FIGS. 1-3. The process 1200 begins, at step 1202, by receiving at a source node an acknowledgement packet that may include metadata information. If the acknowledgement packet does not include metadata information, then the previous packet was successfully received by the destination. If the acknowledgement packet includes metadata information, the process 1200, at step 1204, determines whether the acknowledgement packet indicates that there is a rank short (i.e., rank short greater than zero), meaning that the destination node received fewer than the original number of linear coded blocks in the previous data packet. If the process 1200 determines that the rank short is not greater than zero, the process 1200, at step 1206, determines whether there is a next data packet in the flow. If there are no more data packets in the flow, the process 1200, at step 1214, ends the flow, with the process 1200 terminating thereafter. If there is a next data packet in the flow, the process 1200, at step 1208, linearly codes the blocks in the next data packet. The process 1200, at step 1216, then adds the metadata information in the BPP block of the data packet, and forwards the data packet towards the destination node, with the process 1200 terminating thereafter.
[0069] Back at step 1204, if the process 1200 determines that the acknowledgement packet indicates that there is a rank short (i.e., rank short greater than zero), the process 1200, at step 1210, linearly codes a data packet with coded blocks whose coefficients are orthogonal to the coefficients of the already received coded blocks at the destination node. These coded blocks could be the missing coded blocks from the previous data packet and/or could be any coded blocks whose coefficients are orthogonal to the coefficients of the already received coded blocks at the destination node to enable the destination node to decode the previously received coded blocks. In an embodiment, the data packet includes only these coded blocks for transmission to the destination node (i.e., the number of coded blocks in the data packet equals the rank short indicated in the acknowledgement packet). In another embodiment, if the network condition may result in a packet wash, the data packet could include these coded blocks and other coded blocks (i.e., the number of coded blocks in the data packet is larger than the rank short). For example, if the process 1200, at step 1212, determines that there is a next packet in the flow, the process 1200, at step 1214, can linearly code the additional blocks for the current packet and the one or more blocks of the next packet, and include them in a single data packet. This embodiment can provide an increase in the rate of delivery of the data packets. The process 1200, at step 1216, then adds the metadata information in the BPP block of the data packet, and forwards the data packet towards the destination node, with the process 1200 terminating thereafter.
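The coefficient selection of step 1210 can be sketched as below, where "orthogonal" is treated as linear independence over GF(2) and new coefficient vectors are drawn at random until the rank shortage is covered; the names and the random search are assumptions of the sketch.

```python
import random

def gf2_rank(vectors):
    """Rank over GF(2) of 0/1 coefficient vectors (a simplifying field assumption)."""
    pivots = {}
    for vec in vectors:
        v = int("".join(map(str, vec)), 2)
        while v:
            hi = v.bit_length() - 1
            if hi not in pivots:
                pivots[hi] = v
                break
            v ^= pivots[hi]
    return len(pivots)

def extra_coefficients(received_coeffs, full_rank, rank_short):
    """Step 1210 (a sketch): draw coefficient vectors for additional coded blocks,
    keeping only those that add a new degree of freedom with respect to the
    coefficients the destination already holds. Terminates because rank_short > 0
    implies the received vectors do not yet span the whole space."""
    chosen, current = [], list(received_coeffs)
    while len(chosen) < rank_short:
        candidate = [random.getrandbits(1) for _ in range(full_rank)]
        if gf2_rank(current + [candidate]) > gf2_rank(current):
            chosen.append(candidate)
            current.append(candidate)
    return chosen

# Example matching the FIG. 8 acknowledgement: two of four coded blocks arrived.
new_vectors = extra_coefficients([[1, 0, 0, 0], [0, 1, 0, 0]], full_rank=4, rank_short=2)
```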
[0070] As an example, assume that the source node has multiple packets to send to the destination in the flow, each of which can be divided into 4 blocks.
[0071] $\begin{pmatrix} x_{1,1} & x_{1,2} & x_{1,3} & x_{1,4} \\ x_{2,1} & x_{2,2} & x_{2,3} & x_{2,4} \\ x_{3,1} & x_{3,2} & x_{3,3} & x_{3,4} \end{pmatrix}$
[0072] Assume the full rank of the coded blocks in the first packet is 4, and that the destination node received the coded blocks with rank 2, which are returned in the metadata of the acknowledgement packet as shown below:
[0073] $\begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \end{pmatrix}$
[0074] After the source receives the acknowledgement packet, the process 1200 determines that two of the coded blocks were lost in the transmission. Thus, the process 1200 needs to send additional coded blocks of the first packet. In an embodiment, only 2 coded blocks of the first packet need to be sent in the next roundtrip. However, the payload may still have space for two more coded blocks. In order to reduce the number of roundtrips and improve the efficiency of the transmission, the source node can code two blocks from the second packet. The chosen coefficients need to be orthogonal to the following extended matrix in order to decode the 4 coded blocks of the first packet and 2 coded blocks of the second packet:
[0075] $\begin{pmatrix} 1 & 0 & 0 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 & 0 & 0 \end{pmatrix}$
[0076] The packet IDs of packet 1 and packet 2 are included in the metadata. Additionally, the coded block numbers of the four coded blocks in the first packet and the coded block numbers of the first two coded blocks of the second packet are also indicated in the metadata. This embodiment can be adopted when the network condition is good enough to avoid an in-network packet wash on the packet, because if some of the coded blocks are dropped in transmission, the destination would not be able to decode the first packet.
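The rank arithmetic of this example can be checked with the short sketch below; the particular choice of the four new coefficient vectors is an assumption made for illustration (any four vectors that are linearly independent of the extended matrix would do), and the GF(2) rank helper repeats the one used in the earlier sketches.

```python
def gf2_rank(vectors):
    """Rank over GF(2) of 0/1 coefficient vectors (same helper as in the earlier sketches)."""
    pivots = {}
    for vec in vectors:
        v = int("".join(map(str, vec)), 2)
        while v:
            hi = v.bit_length() - 1
            if hi not in pivots:
                pivots[hi] = v
                break
            v ^= pivots[hi]
    return len(pivots)

# The two coefficient vectors the destination already holds, widened with zero
# columns for the two blocks taken from the second packet (the extended matrix above).
extended = [[1, 0, 0, 0, 0, 0],
            [0, 1, 0, 0, 0, 0]]

# One possible choice of four new coefficient vectors: two more combinations over
# packet 1's blocks and two over packet 2's blocks (an illustrative assumption).
new_vectors = [[0, 0, 1, 0, 0, 0],
               [0, 0, 0, 1, 0, 0],
               [0, 0, 0, 0, 1, 0],
               [0, 0, 0, 0, 0, 1]]

# Together they reach full rank 6, so the destination can decode all four blocks of
# the first packet and the first two blocks of the second packet.
assert gf2_rank(extended + new_vectors) == 6
```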
[0077] FIG. 13 is a flowchart illustrating a process 1300 for performing acknowledgement processing at a source node in accordance with an embodiment of the present disclosure. The process 1300 can be performed by any source node such as the source node 104 in FIGS. 1-3. The process 1300 begins, at step 1302, by dividing a maximum payload size of a data packet into a plurality of payload blocks having a same coded block size. The process 1300, at step 1304, performs linear network coding on the plurality of payload blocks of the data packet. Each of the payload blocks inside the data packet is an independent linear coded payload block. The process 1300, at step 1306, inserts metadata into the data packet. In an embodiment, the metadata is inserted in a BPP metadata header of the data packet. In an embodiment, the metadata includes a coefficient for each independent linear coded payload block in the data packet, a unique packet identifier (ID) of the data packet, an indication that the data packet is a linear network coded packet type, and the coded block size. At step 1308, the process 1300 transmits the data packet towards a destination node.
[0078] FIG. 14 is a schematic diagram illustrating a network element 1400 according to an embodiment of the present disclosure. The network element 1400 can be any type of network node such as, but not limited to, source node 104, destination node 106, and network router 110 in FIG. 1. The network element 1400 includes receiver units (RX) or receiving means 1420 for receiving data via ingress ports 1410. The network element 1400 also includes transmitter units (TX) or transmitting means 1440 for transmitting data via egress ports 1450.
[0079] The network element 1400 includes a memory or data storing means 1460 for storing the instructions and various data. The memory/data storing means 1460 can be any type of or combination of memory components capable of storing data and/or instructions. For example, the memory/data storing means 1460 can include volatile and/or non-volatile memory such as read-only memory (ROM), random access memory (RAM), ternary content-addressable memory (TCAM), and/or static random-access memory (SRAM). The memory/data storing means 1460 can also include one or more disks, tape drives, and solid-state drives. In some embodiments, the memory/data storing means 1460 can be used as an overflow data storage device to store programs when such programs are selected for execution, and to store instructions and data that are read during program execution.
[0080] The network element 1400 has one or more processor or processing means 1430 (e.g., central processing unit (CPU)) to process instructions. The processor/processing means 1430 may be implemented as one or more CPU chips, cores (e.g., as a multi-core processor), field-programmable gate arrays (FPGAs), application specific integrated circuits (ASICs), and digital signal processors (DSPs). The processor/processing means 1430 is communicatively coupled via a system bus with the ingress ports 1410, RX 1420, TX 1440, egress ports 1450, and memory/data storing means 1460. In some embodiments, the memory/data storing means 1460 can be memory that is integrated with the processor/processing means 1430.
[0081] The processor/processing means 1430 can be configured to execute instructions stored in the memory/data storing means 1460. Thus, the processor/processing means 1430 is able to perform any computational, comparison, determination, coding, configuration, or any other action corresponding to the claims when the appropriate instruction is executed by the processor. As an example, the memory/data storing means 1460 can store an in-packet network linear coding module 1470. The in-packet network linear coding module 1470 includes data and executable instructions for implementing the disclosed embodiments. For instance, the in-packet network linear coding module 1470 can include instructions for implementing the processes described in FIGS. 9-13.
[0082] The inclusion of the in-packet network linear coding module 1470 substantially improves the functionality of the network element 1400 to perform in-packet network coding to enable effective packet wash and packet enrichment. For instance, in accordance with the disclosed embodiments, when an error occurs to one or more coded blocks/chunks, an intermediate router can simply remove the corrupted blocks from the packet. When the packet eventually reaches the receiver, any coded blocks that are retained in the packet can be cached by the receiver and are useful for future decoding of the original payload after enough degrees of freedom are received. The receiver can request the sender to send more coded blocks to compensate for the missing degrees of freedom. Moreover, the sender does not need to know which coded blocks were lost in transit. It only needs to send more (linearly independent) coded blocks of newly coded packets, in a number equal to or greater than the missing degrees of freedom. Alternatively, the receiver can acknowledge the number of degrees of freedom it has received, and the sender can keep generating packets with new combinations until enough coded blocks have been received to decode the original data. Additionally, in-network caching is enabled by the intermediate routers. If there is a cached coded block belonging to the same Packet ID and it is orthogonal to the already received coded blocks, the cached coded block can be sent to the receiver immediately to reduce the number of missing degrees of freedom. In an embodiment, if random linear network coding is not applied to the coded blocks, only the exact same block matched in the caches can be used. However, with the proposed mechanism of applying random linear network coding to the packet coded blocks, there is no need to match on a particular original chunk; any linearly independent coded block can be utilized to drastically increase the degrees of freedom at the receiver side. The probability that such a chunk might be cached on the path of the acknowledgement packet forwarding is much larger than that of a particular un-coded block. In an embodiment, the probability that such a chunk might be cached on the path of the acknowledgement packet forwarding is increased by designing the sender to break down the same data content that it hosts in the same way for different receivers. By utilizing the disclosed embodiments, the network avoids dropping a whole packet, avoids transport layer time-outs, and avoids interrupting the transmission session to re-transmit a packet.
[0083] While several embodiments have been provided in the present disclosure, it may be understood that the disclosed systems and methods might be embodied in many other specific forms without departing from the spirit or scope of the present disclosure. The present examples are to be considered as illustrative and not restrictive, and the disclosure is not to be limited to the details given herein. For example, the various elements or components may be combined or integrated in another system or certain features may be omitted, or not implemented.
[0084] In addition, techniques, systems, subsystems, and methods described and illustrated in the various embodiments as discrete or separate may be combined or integrated with other systems, modules, techniques, or methods without departing from the scope of the present disclosure. Other items shown or discussed as coupled or directly coupled or communicating with each other may be indirectly coupled or communicating through some interface, device, or intermediate component whether electrically, mechanically, or otherwise. Other examples of changes, substitutions, and alterations are ascertainable by one skilled in the art and may be made without departing from the spirit and scope disclosed herein.
[0085] Following the claims below is a document that may be submitted to a standards body and which embodies the present disclosure.

Claims

1. A method performed by a source node for communicating data packets, the method comprising:
dividing a maximum payload size of a payload of a first data packet into a plurality of payload blocks having a same coded block size;
performing linear network coding on the plurality of payload blocks of the first data packet, wherein each of the payload blocks of the payload inside the first data packet is an independent linear coded payload block;
inserting metadata in the first data packet, the metadata comprising a coefficient for each independent linear coded payload block in the first data packet, a unique packet identifier (ID) of the first data packet, an indication that the first data packet is a linear network coded packet type, and the coded block size; and
transmitting the first data packet towards a destination node.
2. The method of claim 1, wherein the coefficient for each independent linear coded payload block is inserted into the payload of the first data packet.
3. The method according to any of claims 1-2, further comprising:
receiving an acknowledgement packet corresponding to the first data packet that was transmitted towards the destination node;
determining from metadata in the acknowledgement packet that the destination node received all of the payload blocks of the first data packet;
responsive to the determination that the destination node received all of the payload blocks of the first data packet, inserting, in a payload of a second data packet, a second plurality of independent linear coded payload blocks;
inserting metadata in the second data packet, the metadata comprising a coefficient for each independent linear coded payload block in the second data packet, a unique packet identifier (ID) of the second data packet, an indication that the second data packet is the linear network coded packet type, and the coded block size; and
transmitting the second data packet towards the destination node.
4. The method according to any of claims 1-2, further comprising:
receiving an acknowledgement packet corresponding to the first data packet that was transmitted towards the destination node;
determining from metadata in the acknowledgement packet that the destination node did not receive all of the payload blocks of the first data packet;
responsive to the determination that the destination node did not receive all of the payload blocks of the first data packet, inserting, in a payload of a second data packet, new linear coded blocks that have coefficients that are orthogonal to the linear coded blocks received by the destination node in the first data packet;
inserting metadata in the second data packet, the metadata comprising a coefficient for each independent linear coded payload block in the second data packet, a unique packet identifier (ID) of the second data packet, an indication that the second data packet is the linear network coded packet type, and the coded block size; and
transmitting the second data packet towards the destination node.
5. The method according to claim 4, further comprising: inserting additional linear coded payload blocks in the payload of the second data packet up to a maximum payload size of the second data packet.
6. The method according to any of claims 3-5, wherein the second data packet is transmitted only after receiving the acknowledgement packet corresponding to the first data packet.
7. The method according to any of claims 3-6, wherein the metadata is inserted into a Big Packet Protocol (BPP) metadata header.
8. The method according to any of claims 4-7, wherein the new linear coded blocks are the linear coded blocks that were not received by the destination node in the first data packet.
9. A method performed by a network node for communicating data packets, the method comprising:
receiving a data packet that is to be forwarded towards a destination node;
determining that a network condition exists that requires dropping the data packet;
responsive to the determination that the network condition exists that requires dropping the data packet, determining that the data packet is a linear network coded packet type;
responsive to the determination that the data packet is the linear network coded packet type, dropping a portion of a plurality of payload linear coded blocks from a payload of the data packet, and removing coefficients in a Big Packet Protocol (BPP) metadata header of the data packet that correspond to the portion of the plurality of payload linear coded blocks dropped from the payload of the data packet; and
forwarding the data packet towards the destination node.
10. The method of claim 9, further comprising: caching the portion of the plurality of payload linear coded blocks dropped from the payload of the data packet, the coefficients corresponding to the portion of the plurality of payload linear coded blocks dropped from the payload of the data packet, and a packet identifier (ID) of the data packet.
11. A method performed by a network node for communicating data packets, the method comprising:
receiving a data packet that is to be forwarded towards a destination node;
determining that a network condition does not require dropping the data packet;
responsive to the determination that the network condition does not require dropping the data packet, determining that the payload of the data packet is not full based on a rank of the data packet;
responsive to the determination that the payload of the data packet is not full, determining that there are cached payload linear coded blocks belonging to a same flow as the data packet and able to increase the rank of the linear coded blocks in the payload; and
responsive to the determination that there are cached payload linear coded blocks belonging to the same flow as the data packet and able to increase the rank of the linear coded blocks in the payload, inserting the cached payload linear coded blocks into the payload of the data packet up to a maximum payload size of the data packet, adding coefficients corresponding to the cached payload linear coded blocks inserted into the data packet, and increasing the rank of the data packet to account for the inserted cached payload linear coded blocks.
12. A method performed by a network node for communicating data packets, the method comprising:
receiving an acknowledgement packet to be forwarded towards a source node;
determining that the acknowledgement packet indicates that a first data packet received by a destination node was missing a portion of a payload of the first data packet;
responsive to the determination that the acknowledgement packet indicates that the first data packet received by the destination node was missing a portion of the payload of the first data packet, determining that the network node has cached data that contributes to decoding of the payload of the first data packet; and
responsive to the determination that the network node has cached data that contributes to decoding of the payload of the first data packet, transmitting a second data packet towards the destination node comprising the cached data that contributes to decoding of the payload of the first data packet, updating the acknowledgement packet to account for the cached data transmitted by the network node, and forwarding the acknowledgement packet towards the source node.
13. The method of claim 12, wherein the cached data contains linear coded blocks that were not received by the destination node in the first data packet.
14. The method of claim 12, wherein the cached data contains new linear coded blocks that have coefficients that are orthogonal to linear coded blocks received by the destination node in the first data packet.
15. A method performed by a destination node for communicating data packets, the method comprising:
receiving a data packet intended for the destination node;
determining that the data packet is a linear network coded packet type;
responsive to the determination that the data packet is a linear network coded packet type, determining that a full payload of the data packet is received by the destination node based on a current rank of the data packet;
responsive to the determination that the full payload of the data packet is received by the destination node, decoding the full payload of the data packet, and sending an acknowledgement packet for a next packet in the flow towards a source node.
16. A method performed by a destination node for communicating data packets, the method comprising:
receiving a data packet intended for the destination node;
determining that the data packet is a linear network coded packet type;
responsive to the determination that the data packet is a linear network coded packet type, determining that a full payload of the data packet is not received by the destination node based on a current rank of the data packet;
responsive to a determination that the full payload of the data packet is not received by the destination node, sending an acknowledgement packet towards a source node that includes a packet identifier (ID), coefficients of received payload linear coded blocks in the data packet, the current rank, and a full rank; and
waiting for additional payload linear coded blocks that contribute to decoding of the payload of the data packet.
17. A source node comprising:
a memory storing instructions;
a processor coupled to the memory, the processor configured to execute the instructions to cause the source node to:
divide a maximum payload size of a payload of a first data packet into a plurality of payload blocks having a same coded block size;
perform linear network coding on the plurality of payload blocks of the first data packet, wherein each of the payload blocks of the payload inside the first data packet is an independent linear coded payload block;
insert metadata in the first data packet, the metadata comprising a coefficient for each independent linear coded payload block in the first data packet, a unique packet identifier (ID) of the first data packet, an indication that the first data packet is a linear network coded packet type, and the coded block size; and
transmit the first data packet towards a destination node.
18. The source node of claim 17, wherein the coefficient for each independent linear coded payload block is inserted into the payload of the first data packet.
19. The source node according to any of claims 17-18, wherein the processor is further configured to execute the instructions to cause the source node to:
receive an acknowledgement packet corresponding to the first data packet that was transmitted towards the destination node;
determine from metadata in the acknowledgement packet that the destination node received all of the payload blocks of the first data packet;
responsive to the determination that the destination node received all of the payload blocks of the first data packet, insert, in a payload of a second data packet, a second plurality of independent linear coded payload blocks;
insert metadata in the second data packet, the metadata comprising a coefficient for each independent linear coded payload block in the second data packet, a unique packet identifier (ID) of the second data packet, an indication that the second data packet is the linear network coded packet type, and the coded block size; and
transmit the second data packet towards the destination node.
20. The source node according to any of claims 17-18, wherein the processor is further configured to execute the instructions to cause the source node to:
receive an acknowledgement packet corresponding to the first data packet that was transmitted towards the destination node;
determine from metadata in the acknowledgement packet that the destination node did not receive all of the payload blocks of the first data packet;
responsive to the determination that the destination node did not receive all of the payload blocks of the first data packet, insert, in a payload of a second data packet, new linear coded blocks that have coefficients that are orthogonal to the linear coded blocks received by the destination node in the first data packet;
insert metadata in the second data packet, the metadata comprising a coefficient for each independent linear coded payload block in the second data packet, a unique packet identifier (ID) of the second data packet, an indication that the second data packet is the linear network coded packet type, and the coded block size; and
transmit the second data packet towards the destination node.
21. The source node according to claim 20, wherein the processor is further configured to execute the instructions to cause the source node to:
insert, in the payload of the second data packet, additional linear coded payload blocks up to a maximum payload size of the second data packet.
22. The source node according to any of claims 19-21, wherein the second data packet is transmitted only after receiving the acknowledgement packet corresponding to the first data packet.
23. The source node according to any of claims 19-22, wherein the metadata is inserted into a Big Packet Protocol (BPP) metadata header.
24. The source node according to any of claims 20-23, wherein the new linear coded blocks are the linear coded blocks that were not received by the destination node in the first data packet.
25. A network node comprising:
a memory storing instructions;
a processor coupled to the memory, the processor configured to execute the instructions to cause the network node to:
receive a data packet that is to be forwarded towards a destination node;
determine that a network condition exists that requires dropping the data packet;
responsive to the determination that the network condition exists that requires dropping the data packet, determine that the data packet is a linear network coded packet type;
responsive to the determination that the data packet is the linear network coded packet type, drop a portion of a plurality of payload linear coded blocks from a payload of the data packet, and remove coefficients in a Big Packet Protocol (BPP) metadata header of the data packet that correspond to the portion of the plurality of payload linear coded blocks dropped from the payload of the data packet; and
forward the data packet towards the destination node.
26. The network node of claim 25, wherein the processor is further configured to execute the instructions to cause the network node to:
cache the portion of the plurality of payload linear coded blocks dropped from the payload of the data packet, the coefficients corresponding to the portion of the plurality of payload linear coded blocks dropped from the payload of the data packet, and a packet identifier (ID) of the data packet.
27. A network node comprising:
a memory storing instructions;
a processor coupled to the memory, the processor configured to execute the instructions to cause the network node to:
receive a data packet that is to be forwarded towards a destination node;
determine that a network condition does not require dropping the data packet;
responsive to the determination that the network condition does not require dropping the data packet, determine that the payload of the data packet is not full based on a rank of the data packet;
responsive to the determination that the payload of the data packet is not full, determine that there are cached payload linear coded blocks that belong to a same flow as the data packet and that can increase the rank of the linear coded blocks in the payload; and
responsive to the determination that there are cached payload linear coded blocks that belong to the same flow as the data packet and that can increase the rank of the linear coded blocks in the payload, insert the cached payload linear coded blocks into the payload of the data packet up to a maximum payload size of the data packet, add coefficients corresponding to the cached payload linear coded blocks inserted into the data packet, and increase the rank of the data packet to account for the inserted cached payload linear coded blocks.
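A minimal sketch of the payload enrichment of claim 27, again for illustration only: a node that is not congested tops up a previously washed packet with cached coded blocks of the same flow, inserting only blocks that actually raise the packet's rank and only up to an assumed maximum number of blocks. Matching the cached entry to the flow is assumed to be done by the caller, and the rank field and GF(2) arithmetic are illustrative assumptions.

def gf2_rank(coeff_vectors):
    """Rank of binary coefficient vectors (Gaussian elimination over GF(2))."""
    pivots = {}
    for coeff in coeff_vectors:
        row = sum(bit << i for i, bit in enumerate(coeff))
        while row:
            lead = row.bit_length() - 1
            if lead not in pivots:
                pivots[lead] = row
                break
            row ^= pivots[lead]
    return len(pivots)

def enrich_packet(packet, cached_entry, max_blocks):
    """Insert cached coded blocks that increase the packet's rank, up to max_blocks."""
    for coeff, block in zip(cached_entry["coefficients"], cached_entry["blocks"]):
        if len(packet["payload"]) >= max_blocks:
            break                                         # maximum payload size reached
        if gf2_rank(packet["coefficients"] + [coeff]) > gf2_rank(packet["coefficients"]):
            packet["coefficients"].append(coeff)          # coefficient into the metadata
            packet["payload"].append(block)               # coded block into the payload
            packet["rank"] = gf2_rank(packet["coefficients"])  # account for the insertion
    return packet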
28. A network node comprising:
a memory storing instructions;
a processor coupled to the memory, the processor configured to execute the instructions to cause the network node to:
receive an acknowledgement packet to be forwarded towards a source node;
determine that the acknowledgement packet indicates that a first data packet received by a destination node of the first data packet was missing a portion of a payload of the first data packet;
responsive to the determination that the acknowledgement packet indicates that the first data packet received by the destination node was missing a portion of the payload of the first data packet, determine that the network node has cached data that contributes to decoding of the payload of the first data packet; and
responsive to the determination that the network node has cached data that contributes to decoding of the payload of the first data packet, transmit a second data packet towards the destination node comprising the cached data that contributes to decoding of the payload of the first data packet, update the acknowledgement packet to account for the cached data transmitted by the network node, and forward the acknowledgement packet towards the source node.
29. The network node of claim 28, wherein the cached data contains linear coded blocks that were not received by the destination node in the first data packet.
30. The network node of claim 28, wherein the cached data contains new linear coded blocks that have coefficients that are orthogonal to linear coded blocks received by the destination node in the first data packet.
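For illustration, a minimal sketch of the acknowledgement handling of claims 28-30: an intermediate node that sees an acknowledgement reporting less than full rank looks up its cache for that packet ID, sends the cached blocks to the destination in a new data packet, and rewrites the acknowledgement before forwarding it so that the source does not retransmit what the node already supplied. The acknowledgement fields and the two send callbacks are assumptions; the cached blocks here are the washed-out ones of claim 29, though a node could instead contribute freshly coded blocks with independent coefficients as in claim 30.

def handle_ack(ack, cache, send_to_destination, forward_to_source):
    """Repair a partially received coded packet from cache, then forward the ACK."""
    missing = ack["full_rank"] - ack["current_rank"]
    entry = cache.get(ack["packet_id"])
    if missing > 0 and entry:
        repair = {
            "packet_id": ack["packet_id"],
            "packet_type": "linear_network_coded",
            "coefficients": entry["coefficients"][:missing],
            "payload": entry["blocks"][:missing],
        }
        send_to_destination(repair)                    # second data packet of claim 28
        ack["current_rank"] += len(repair["payload"])  # update the ACK to account for
        ack["coefficients"] += repair["coefficients"]  # the blocks supplied here
    forward_to_source(ack)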
31. A destination node comprising:
a memory storing instructions;
a processor coupled to the memory, the processor configured to execute the instructions to cause the destination node to:
receive a data packet intended for the destination node;
determine that the data packet is a linear network coded packet type; responsive to the determination that the data packet is a linear network coded packet type, determine that a full payload of the data packet is received by the destination node based on a current rank of the data packet;
responsive to the determination that the full payload of the data packet is received by the destination node, decode the full payload of the data packet, and send an acknowledgement packet for a next packet in the flow towards a source node.
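For illustration, a minimal sketch of the full-rank case of claim 31 under the GF(2) model assumed in the earlier sketches: once the received coefficient vectors span all original blocks, the destination recovers them by Gaussian elimination, XOR-ing coded blocks whenever their coefficient rows are combined. Sending the acknowledgement for the next packet is omitted; this is an illustrative decoder, not the claimed implementation.

def gf2_decode(coeffs, coded_blocks):
    """Recover the original payload blocks from a full-rank GF(2) coded payload."""
    n = len(coeffs[0])
    rows = [[sum(bit << i for i, bit in enumerate(c)), bytearray(b)]
            for c, b in zip(coeffs, coded_blocks)]
    pivot_of = {}                                  # pivot bit position -> row index
    # Forward elimination: give every useful row a distinct leading bit.
    for r in range(len(rows)):
        while rows[r][0]:
            lead = rows[r][0].bit_length() - 1
            if lead not in pivot_of:
                pivot_of[lead] = r
                break
            p = pivot_of[lead]
            rows[r][0] ^= rows[p][0]
            rows[r][1] = bytearray(a ^ b for a, b in zip(rows[r][1], rows[p][1]))
    if len(pivot_of) < n:
        raise ValueError("payload not yet full rank; wait for more coded blocks")
    # Back substitution, lowest pivot first, so each pivot row becomes a unit vector.
    for lead in sorted(pivot_of):
        r = pivot_of[lead]
        for lead2, r2 in pivot_of.items():
            if lead2 > lead and (rows[r2][0] >> lead) & 1:
                rows[r2][0] ^= rows[r][0]
                rows[r2][1] = bytearray(a ^ b for a, b in zip(rows[r2][1], rows[r][1]))
    return [bytes(rows[pivot_of[i]][1]) for i in range(n)]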
32. A destination node comprising:
a memory storing instructions;
a processor coupled to the memory, the processor configured to execute the instructions to cause the destination node to:
receive a data packet intended for the destination node;
determine that the data packet is a linear network coded packet type; responsive to the determination that the data packet is a linear network coded packet type, determine that a full payload of the data packet is not received by the destination node based on a current rank of the data packet;
responsive to the determination that the full payload of the data packet is not received by the destination node, send an acknowledgement packet towards a source node that includes a packet identifier (ID), coefficients of received payload linear coded blocks in the data packet, the current rank, and a full rank; and
wait for additional payload linear coded blocks that contribute to decoding of the payload of the data packet.
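Finally, for illustration, a minimal sketch of the partial-reception path of claim 32: the destination computes the current rank of the coefficient vectors that actually arrived and, if it is below the full rank, returns an acknowledgement carrying the packet ID, the received coefficients, the current rank and the full rank, then keeps the received blocks while waiting for more contributing blocks. The field names and the GF(2) rank computation are the same illustrative assumptions used above.

def build_partial_ack(packet, full_rank):
    """Build the ACK for a partially received coded packet, or None if it is complete."""
    pivots = {}
    for coeff in packet["coefficients"]:           # rank of what was actually received
        row = sum(bit << i for i, bit in enumerate(coeff))
        while row:
            lead = row.bit_length() - 1
            if lead not in pivots:
                pivots[lead] = row
                break
            row ^= pivots[lead]
    current_rank = len(pivots)
    if current_rank >= full_rank:
        return None                                # full payload received; decode instead
    return {
        "packet_id": packet["packet_id"],
        "coefficients": packet["coefficients"],    # coefficients of the received blocks
        "current_rank": current_rank,
        "full_rank": full_rank,
    }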
33. An apparatus comprising:
a network communication means for communicating data over a network;
a data storage means for storing instructions; and
a processing means coupled to the data storage means, the processing means configured to execute the instructions to implement any of the methods according to claims 1-16.
PCT/US2020/015538 2019-02-05 2020-01-29 In-packet network coding WO2020163124A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US201962801471P 2019-02-05 2019-02-05
US62/801,471 2019-02-05

Publications (1)

Publication Number Publication Date
WO2020163124A1 true WO2020163124A1 (en) 2020-08-13

Family

ID=69740633

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2020/015538 WO2020163124A1 (en) 2019-02-05 2020-01-29 In-packet network coding

Country Status (1)

Country Link
WO (1) WO2020163124A1 (en)

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2013086276A1 (en) * 2011-12-09 2013-06-13 Huawei Technologies, Co., Ltd. Method for network coding packets in content-centric networking based networks
US20170012885A1 (en) * 2015-07-07 2017-01-12 Speedy Packets, Inc. Network communication recoding node
WO2020072132A1 (en) * 2018-10-01 2020-04-09 Futurewei Technologies, Inc. Method and apparatus for packet wash in networks

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
DONG LIJUN ET AL: "In-Packet Network Coding for Effective Packet Wash and Packet Enrichment", 2019 IEEE GLOBECOM WORKSHOPS (GC WKSHPS), IEEE, 9 December 2019 (2019-12-09), pages 1 - 6, XP033735228, DOI: 10.1109/GCWKSHPS45667.2019.9024623 *
JAY KUMAR SUNDARARAJAN ET AL: "Network coding meets TCP", ARXIV.ORG, CORNELL UNIVERSITY LIBRARY, 201 OLIN LIBRARY CORNELL UNIVERSITY ITHACA, NY 14853, 29 September 2008 (2008-09-29), XP080437914, DOI: 10.1109/INFCOM.2009.5061931 *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2022098377A1 (en) * 2020-12-17 2022-05-12 Futurewei Technologies, Inc. Qualitative communication using adaptive network coding with a sliding window
WO2022206649A1 (en) * 2021-04-02 2022-10-06 Vivo Mobile Communication Co., Ltd. Congestion control method and apparatus, and communication device
CN115361346A (zh) * 2022-08-08 2022-11-18 Tsinghua University Explicit packet loss notification mechanism
CN115361346B (zh) * 2022-08-08 2024-03-29 Tsinghua University Explicit packet loss notification mechanism

Similar Documents

Publication Publication Date Title
CN111740808B (en) Data transmission method and device
US9253608B2 (en) Wireless reliability architecture and methods using network coding
WO2020163124A1 (en) In-packet network coding
US11424861B2 (en) System and technique for sliding window network coding-based packet generation
RU2469482C2 (en) Method and system for data transfer in data transfer network
EP3035638A1 (en) Interest acknowledgements for information centric networking
WO2017211096A1 (en) Method and device for transmitting data stream
Dong et al. In-packet network coding for effective packet wash and packet enrichment
WO2020210779A2 (en) Coded data chunks for network qualitative services
US11888960B2 (en) Packet processing method and apparatus
Dong et al. Qualitative communication via network coding and New IP
US20230353284A1 (en) Qualitative Communication Using Adaptive Network Coding with a Sliding Window
US20230163875A1 (en) Method and apparatus for packet wash in networks
WO2019011219A1 (en) Media content-based adaptive method, device and system for fec coding and decoding of systematic code, and medium
US10110350B2 (en) Method and system for flow control
Gupta et al. Fast interest recovery in content centric networking under lossy environment
CN111385069A (en) Data transmission method and computer equipment
CN109792444B (en) Play-out buffering in a live content distribution system
US20080091841A1 (en) Communication method, communication system, communication apparatus, and recording medium
Dong et al. Adaptive Network Coding Based Qualitative Communication
CN117459460A (en) Method, device, equipment, network system and storage medium for processing network congestion
WO2024049442A1 (en) An efficient mechanism to process qualitative packets in a router
Zhang et al. A novel retransmission scheme for video services in hybrid wireline/wireless networks
Yu A new mechanism to enhance transfer performance over wired-cum-wireless networks

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20708836

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20708836

Country of ref document: EP

Kind code of ref document: A1