WO2024049442A1 - An efficient mechanism to process qualitative packets in a router - Google Patents

An efficient mechanism to process qualitative packets in a router

Info

Publication number
WO2024049442A1
Authority
WO
WIPO (PCT)
Prior art keywords
chunks
packet
network node
buffer queues
separate buffer
Prior art date
Application number
PCT/US2022/042453
Other languages
French (fr)
Inventor
Cedric Westphal
Renwei Li
Original Assignee
Futurewei Technologies, Inc.
Priority date
Filing date
Publication date
Application filed by Futurewei Technologies, Inc. filed Critical Futurewei Technologies, Inc.
Priority to PCT/US2022/042453 priority Critical patent/WO2024049442A1/en
Publication of WO2024049442A1 publication Critical patent/WO2024049442A1/en

Classifications

    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04L - TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 47/00 - Traffic control in data switching networks
    • H04L 47/50 - Queue scheduling
    • H04L 47/62 - Queue scheduling characterised by scheduling criteria
    • H04L 47/622 - Queue service order
    • H04L 47/6225 - Fixed service order, e.g. Round Robin
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04L - TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 47/00 - Traffic control in data switching networks
    • H04L 47/10 - Flow control; Congestion control
    • H04L 47/32 - Flow control; Congestion control by discarding or delaying data units, e.g. packets or frames
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04L - TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 47/00 - Traffic control in data switching networks
    • H04L 47/10 - Flow control; Congestion control
    • H04L 47/43 - Assembling or disassembling of packets, e.g. segmentation and reassembly [SAR]
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04L - TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 49/00 - Packet switching elements
    • H04L 49/90 - Buffering arrangements
    • H04L 49/9057 - Arrangements for supporting packet reassembly or resequencing
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04L - TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 47/00 - Traffic control in data switching networks
    • H04L 47/10 - Flow control; Congestion control
    • H04L 47/12 - Avoiding congestion; Recovering from congestion

Definitions

  • the present disclosure is generally related to network communications, and specifically to an efficient mechanism to process qualitative packets in a router.
  • packets may be dropped when there is not enough buffer space in the routers due to network congestion or when a packet error occurs during transmission.
  • This practice causes re-transmissions of the packets under reliable transmission protocols, which in turn produces unwanted delay, reduces throughput, and wastes network resources.
  • qualitative communication (QC) seeks to avoid dropping an entire packet by breaking the packet into smaller logical units (referred to as chunks).
  • a network node (e.g., a router) can then perform packet scrubbing or packet washing on the packet with the granularity of chunks (i.e., drop one or more chunks from the packet) depending on congestion level, policy, and chunk meta-data.
  • a first aspect relates to a computer-implemented method for processing packets.
  • the method includes receiving a packet having a packet payload comprising a plurality of chunks; inserting chunks from the plurality of chunks into separate buffer queues of an output port; pulling one or more of the chunks from the buffer queues to form an outgoing packet based on a congestion level of the network node; and transmitting the outgoing packet through the output port.
  • inserting chunks from the plurality of chunks into separate buffer queues of an output port includes determining a first number of chunks in the plurality of chunks; and inserting one chunk into each of the buffer queues when the first number of chunks equals a second number of queues.
  • inserting chunks from the plurality of chunks into separate buffer queues of an output port includes determining a first number of chunks in the plurality of chunks; and inserting one or more dummy chunks into one or more buffer queues when the first number of chunks is less than a second number of queues.
  • inserting chunks from the plurality of chunks into separate buffer queues of an output port includes determining a first number of chunks in the plurality of chunks; and inserting multiple chunks of the plurality of chunks into one or more buffer queues when the first number of chunks is greater than a second number of queues.
  • the method further includes determining a priority associated with each of the chunks; and inserting the chunks in separate buffer queues according to the priority associated with each of the chunks.
  • the method further includes pulling the chunks from the separate buffer queues using a round robin pulling policy.
  • the method further includes pulling the chunks from the separate buffer queues based on a priority level associated with a buffer queue.
  • the method further includes dropping a chunk from a buffer queue in response to the congestion level exceeding a first threshold.
  • the method further includes dropping multiple chunks from one or more of the separate buffer queues in response to the congestion level exceeding a second threshold.
  • the method further includes pulling all chunks having a same packet identifier from the separate buffer queues.
  • a second aspect relates to a network node comprising network communication means, a data storage means, and a processing means, the network node specially configured to perform the first aspect or any preceding implementation form of the first aspect.
  • a third aspect relates to a computer program product stored on a tangible medium, the computer program product comprising instructions that, when executed by a processor of an apparatus, cause the apparatus to perform the first aspect or any preceding implementation form of the first aspect.
  • any one of the foregoing embodiments may be combined with any one or more of the other foregoing embodiments to create a new embodiment within the scope of the present disclosure.
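The chunk-insertion implementation forms above (one chunk per queue when the counts match, dummy padding when there are fewer chunks than queues, wrap-around when there are more) can be sketched as follows. This is an illustrative Python sketch, not the claimed implementation; the `DUMMY` marker and the `insert_chunks` name are hypothetical.

```python
from collections import deque

DUMMY = object()  # hypothetical placeholder chunk, discarded when pulled

def insert_chunks(chunks, queues):
    """Distribute one packet's chunks across separate buffer queues.

    Covers the three cases: one chunk per queue when counts match,
    dummy chunks padding extra queues when there are fewer chunks,
    and wrap-around when there are more chunks than queues.
    """
    n = len(queues)
    for i in range(max(len(chunks), n)):
        chunk = chunks[i] if i < len(chunks) else DUMMY
        queues[i % n].append(chunk)

queues = [deque(), deque(), deque()]
insert_chunks(["c1", "c2"], queues)   # fewer chunks than queues: Q3 is padded
assert list(queues[0]) == ["c1"] and queues[2][0] is DUMMY
```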
  • FIG. 1 is a schematic diagram illustrating a communication network for transmitting a data packet.
  • FIG. 2 is a schematic diagram illustrating a data packet.
  • FIG. 3 is a schematic diagram illustrating a data packet that includes a plurality of chunks in accordance with an embodiment of the present disclosure.
  • FIG. 4 is a schematic diagram illustrating packet washing in accordance with an embodiment of the present disclosure.
  • FIG. 5 is a schematic diagram illustrating a router architecture in accordance with an embodiment of the present disclosure.
  • FIG. 6 is a schematic diagram illustrating a buffer queue mechanism in accordance with an embodiment of the present disclosure.
  • FIG. 7 is a schematic diagram illustrating a buffer queue mechanism in accordance with an embodiment of the present disclosure.
  • FIG. 8 is a schematic diagram illustrating a buffer queue mechanism in accordance with an embodiment of the present disclosure.
  • FIG. 9 is a flowchart illustrating a method for processing qualitative packets in accordance with an embodiment of the present disclosure.
  • FIG. 10 is a schematic diagram of a network node in accordance with an embodiment of the present disclosure.
  • a qualitative packet as referenced herein is a packet having a payload that is broken into a plurality of chunks.
  • the chunks can have different priorities or may all have the same priority.
  • a forwarding node (i.e., a network node such as a router or switch) may drop one or more chunks from a qualitative packet rather than the entire packet.
  • the chunks can also be disposable, especially low priority chunks, and not require retransmission upon a chunk being dropped or lost; or the chunks can require reliable transmission and require retransmission upon a loss.
  • the retransmission can be of a chunk identical to the chunk being dropped, of a chunk that is part of another packet, or of a chunk carrying similar information as the chunk being dropped, as in the case of encoded chunks, where the chunks do not have to be identical as long as they allow proper decoding of the information at the receiver.
  • FIG. 1 is a schematic diagram illustrating a process 100 for communicating data between a source node 110 and a destination node 120 over a communication network 130.
  • the source node 110 and destination node 120 can be any type of electronic device capable of communicating over the communication network 130 such as, but not limited to, a mobile communication device, an Internet of things (IoT) device, a personal computer, a server, a router, a mainframe, a database, or any other type of user or network device.
  • for example, the source node 110 can be a media server and the destination node 120 can be a mobile device that receives media content from the source node 110.
  • the source node 110 executes one or more programs/applications (APP) 102.
  • the application 102 can be any type of software application.
  • the application 102 produces or generates data 104.
  • Data 104 can be any type of data depending on the functions of the application 102.
  • the data 104 can be data that is automatically produced and pushed by the source node 110 to the destination node 120.
  • the data 104 can be data that is specifically requested from the source node 110 by the destination node 120.
  • the application 102 on the source node 110 uses an application programming interface (API) to communicate the data 104 to a transport layer 106, which delivers it toward the appropriate application 116 on the destination node 120.
  • the transport layer 106 bundles/organizes the data into data packets 112 according to a specific protocol (i.e., packetization).
  • the transport layer 106 may use various communication protocols such as, but not limited to, Transmission Control Protocol/Internet protocol (TCP/IP) for providing host-to-host communication services such as connection-oriented communication, reliability, flow control, and multiplexing.
  • the data packets 112 are transferred to a network layer 108 of the source node 110.
  • the network layer 108 is responsible for packet forwarding including routing of the data packets 112 through one or more network nodes 114 (e.g., routers or switches) of the communication network 130.
  • the communication network 130 can comprise multiple interconnected networks including a local area network (LAN), metropolitan area network (MAN), wide area network (WAN), a wireless or mobile network, and an inter-network (e.g., the Internet).
  • FIG. 2 is a schematic diagram illustrating an example of a data packet 200 that can be communicated over the communication network 130 of FIG. 1.
  • the data packet 200 is similar to the data packet 112 of FIG. 1.
  • the data packet 200 includes an IP header (IP HDR) 202 and a payload 204.
  • IP HDR 202 contains routing information and information (e.g., an identification tag) that enables the data packets 200 to be reassembled after transmission to produce the data 104.
  • IP networks, such as the Internet, are normally not reliable, so the data packet 200 can be lost, can be delayed, and can arrive in the wrong order.
  • the identification tag helps to identify the data packet 200 and to reassemble the data 104 back to its original form.
  • the IP HDR 202 can also contain a checksum and a time to live (TTL) value.
  • the checksum is used for error detection during packet transmission.
  • the TTL value is used to prevent the data packet 200 from circulating indefinitely in the communication network 130.
  • the payload 204 of the data packet 200 contains the actual data being carried by the data packet 200.
  • the network nodes 114 may implement a quality of service (QoS) function.
  • the QoS function ensures that data packets 200 that are marked with higher priority are scheduled earlier than data packets 200 that are marked with lower or normal priorities. As a consequence, if outgoing buffers or queues of a network node 114 are full, the lower priority data packets 200 get completely dropped. Any error, due to link congestion or intermittent packet loss in the communication network 130, can trigger re-transmission of the data packets 200. Re-transmission of the data packets 200 wastes network resources, reduces the overall throughput of the connection, and causes longer latency for the packet delivery.
  • a relatively new network service referred to as QC seeks to avoid dropping an entire packet by breaking the packet into smaller logical units (referred to as chunks).
  • a network node (e.g., a router) can then perform a packet wash operation on such a packet.
  • the packet wash operation is a function performed by a network node 114 to modify a size of a data packet by removing one or more chunks from a payload of a qualitative packet.
  • the packet wash operation can add or restore one or more chunks to a packet payload (e.g., a chunk that was previously removed from an earlier packet) while the data packet is en route from a source node to a destination node.
  • FIG. 3 is a schematic diagram illustrating a data packet 300 that supports a packet wash operation in accordance with an embodiment of the present disclosure.
  • the data packet 300 includes the IP HDR 202, a packet wash (PW) specification 206, and the payload 204.
  • the source node 110 creates the PW specification 206.
  • the PW specification 206 can identify the packet as being eligible for PW and implicitly or explicitly describe the significance of the bytes or data payload portions of the payload 204.
  • the source node 110 breaks the data into a plurality of data payload portions (i.e., chunks of data).
  • the data payload for the data packet 300 is broken into chunk 4 (C4) 208, chunk 3 (C3) 210, chunk 2 (C2) 212, and chunk 1 (C1) 214 based on the PW specification 206.
  • the number of chunks that the payload 204 has may vary depending on the level of granularity applied to the significance of the bytes.
  • Each chunk may be associated with particular attributes such as, but not limited to, a priority level or significance value of the chunk.
  • a binary value (e.g., 0 or 1) can be assigned to each chunk indicating whether the chunk is significant/required or insignificant/disposable.
  • each chunk can be assigned a value within a range (e.g., 0-9) to provide greater granularity of the significance or priority level of a chunk of data.
  • the chunks' priority can be indicated by their arrangement in the packet (e.g., earlier chunks have higher priority than later chunks, or the reverse).
  • the chunks of data may vary in size (i.e., one chunk may contain more information than another chunk).
  • the network node 114 performs the packet wash operation by dropping lower-priority chunks from the payload 204 of the data packet 300 according to the information in the PW specification 206 while retaining as much information as possible based on the current network condition.
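A packet wash along these lines can be sketched in Python. This is a hedged illustration only: the `packet_wash` name and the (data, significance) chunk representation are assumptions standing in for the PW specification, not the disclosed format.

```python
def packet_wash(chunks, max_size):
    """Drop the lowest-significance chunks until the payload fits max_size.

    chunks: list of (data, significance) pairs, as a stand-in for chunks
    annotated by a PW specification; higher significance is kept longer.
    """
    total = sum(len(data) for data, _ in chunks)
    # consider chunks for dropping in order of ascending significance
    drop = set()
    for i in sorted(range(len(chunks)), key=lambda i: chunks[i][1]):
        if total <= max_size:
            break
        drop.add(i)
        total -= len(chunks[i][0])
    # surviving chunks keep their original order within the packet
    return [c for i, c in enumerate(chunks) if i not in drop]

washed = packet_wash([("base", 9), ("enh1", 5), ("enh2", 1)], max_size=8)
assert washed == [("base", 9), ("enh1", 5)]  # lowest-significance chunk dropped
```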
  • the source node 110 could rearrange the bits in the payload 204 such that the first consecutive chunks contain the base layer that encodes the basic video quality, while the next consecutive chunks contain the enhancement layers (e.g., higher signal-to-noise ratio, higher resolution, and higher frame rate). If congestion or another qualifying network condition occurs, the network node 114 can intentionally remove as many of the chunks containing the enhancement layers as necessary without having to request that the data packet 300 be retransmitted by the source node 110. Additionally, the chunks in the packet payload 204 may have a certain relationship with each other.
  • a network coding scheme can be applied where the chunks are linearly coded from the original chunks in the payload and are linearly independent from each other.
  • dropping any of the linearly coded chunks and keeping the rest of the chunks would still enable the receiver to recover the original data contained in the packet payload.
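As a concrete (and deliberately simplified) illustration of such coding, a single XOR parity chunk lets the receiver recover the payload after any one chunk is washed away. Practical network coding would use random linear codes over a larger field, so this sketch is only an analogy; all names are hypothetical.

```python
def xor_bytes(a, b):
    # bytewise XOR of two equal-length chunks
    return bytes(x ^ y for x, y in zip(a, b))

def add_parity(chunks):
    """Append one XOR parity chunk so any single lost chunk is recoverable."""
    parity = chunks[0]
    for c in chunks[1:]:
        parity = xor_bytes(parity, c)
    return chunks + [parity]

def recover_missing(received):
    """XOR the surviving chunks to rebuild the single missing chunk."""
    survivors = [c for c in received if c is not None]
    rebuilt = survivors[0]
    for c in survivors[1:]:
        rebuilt = xor_bytes(rebuilt, c)
    return rebuilt

coded = add_parity([b"abcd", b"efgh", b"ijkl"])
lossy = [coded[0], None, coded[2], coded[3]]   # chunk 2 washed away in transit
assert recover_missing(lossy) == b"efgh"
```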
  • FIG. 4 is a schematic diagram illustrating a packet wash operation in accordance with an embodiment of the present disclosure.
  • the application 102 on the source node 110 creates the packet wash operation specification that specifies that the data for the packet can be split into four chunks of data (C4, C3, C2, and Cl).
  • the packet wash operation specification can also provide attributes and conditions associated with each of the chunks of data.
  • the attributes indicate the level of significance for each of the chunks of data.
  • the attributes can also indicate the type of information contained in each of the chunks of data.
  • the conditions specify when the packet wash operation can occur.
  • the conditions may also specify when packet retransmission should be requested.
  • the data and the packet wash operation specification are passed to the transport layer 106 for packetization.
  • the transport layer 106 creates a data packet based on the packet wash operation specification.
  • the data packet may include a flag or a packet wash operation field to indicate that the data packet supports the packet wash operation.
  • the inclusion of a packet wash operation specification in the data packet indicates that the data packet supports the packet wash operation.
  • the packet wash supported data packet (also referred to herein as a qualitative packet) is passed to the network layer 108, which transmits the qualitative packet to the destination node 120 over the communication network 130.
  • the intermediate routers (e.g., network node 114 in FIG. 1 and FIG. 4) process the qualitative packet en route to the destination node 120.
  • when the network node 114 on the communication network 130 receives the qualitative packet, if the network conditions are normal, the network node 114 will forward the qualitative packet just like a normal data packet (i.e., a non-qualitative packet). However, if network conditions at the network node 114 do not enable the qualitative packet to be forwarded without modification, the network node 114 will perform a packet wash operation based on the packet wash operation specification of the data packet if the conditions for performing the packet wash operation are met. Alternatively, in some embodiments, even when the network conditions are normal, the network node 114 may perform the packet wash operation based on one or more other conditions.
  • the network node 114 removes the chunk 4 (C4) 208 from the data packet and forwards the remaining data packet towards the destination node 120.
  • a new washed data packet may be generated with the remaining chunks of the data packet and the original data packet may be discarded.
  • one or more chunks of data are removed from the original data packet, and the remaining chunks of the original data packet are forwarded.
  • the network node 114 will drop the data packet and send a request to the source node 110 for retransmission of the data packet.
  • the data packet arrives at the destination node 120, the data packet is depacketized, and the packet wash operation specification and the data are passed to the application 116.
  • the application 116 can utilize the packet wash operation specification to determine if the data has been packet washed and the type of data that was removed. The application 116 may provide a user some indication or notification regarding the data that was not received.
  • the data that is received at the destination node 120 is not required to be exactly the same as what is sent by the source node 110.
  • the received partial or degraded data is still useful to the application 116.
  • the video can still be displayed in basic form.
  • the discarded data can be recovered from data received from prior data packets. For example, if the application 116 determines that the discarded data corresponds to a background color or other item (e.g., color of a car) or corresponds to an image that was previously received (e.g., a page of slide presentation that has not changed since the last packet), the application 116 can recover the discarded data by using the data from previous packets.
  • the data that is received may be repaired and recovered prior to being rendered.
  • the disclosed embodiments enable an efficient mechanism to process qualitative packets in a network node.
  • the network node allocates a number of buffer queues for each outgoing port of a network node.
  • the network node uses the buffer queues to separate the chunks of a packet into different queues.
  • the network node can pull and discard a chunk from a corresponding queue without having to copy and rewrite data to the buffer, thereby increasing the efficiency of the network node.
  • FIG. 5 is a schematic diagram illustrating a routing architecture of a network node 500 in accordance with an embodiment of the present disclosure.
  • the network node 500 may be an example of the network node 114 in FIG. 4.
  • the network node 500 includes a plurality of input ports 502, a switching fabric 504, and a plurality of output ports 506.
  • the plurality of input ports 502 perform the physical layer function of terminating an incoming physical link at the network node 500, and perform link-layer functions needed to interoperate with the link layer at the other side of the incoming link.
  • a lookup function is also performed at the input ports 502, where a forwarding table is consulted to determine an output port 506 to which an arriving packet will be forwarded via the switching fabric 504.
  • the switching fabric 504 may be a combination of hardware and software that controls traffic to and from the network node 500 with the use of multiple switches (i.e., data comes in one port and out on another port).
  • a routing processor (not depicted) may be coupled to the switching fabric 504 for executing routing protocols, computing the forwarding table, and performing other routing functions.
  • Each output port 506 has an output buffer 508. The plurality of output ports 506 store packets received from the switching fabric 504 in their corresponding output buffers 508 and transmit these packets from the output buffers 508.
  • the network node 500 allocates a number of buffer queues 510 for each of the output buffers 508.
  • FIG. 5 depicts three buffer queues 510 allocated in each of the output buffers 508.
  • the network node 500 can have input queueing (where the buffers are attached to the incoming ports/links), output queuing (where the buffers are attached to the outgoing ports/links), or combined Input/Output queueing (CIOQ) with both input and output buffers.
  • multiple output buffers 508 could be attached to a link or port to implement priority queueing, where the network node 500 is configured to pull chunks of a packet from the buffer with the highest priority first, or to implement round robin queuing, where the network node 500 is configured to pull chunks of a packet from each of the queues in a pre-specified order and independent of the number of packets or chunks in the queues (e.g., pulling chunks of a packet from buffer 1, buffer 2, up to buffer n, then back to buffer 1 again).
  • multiple steps may be performed in parallel. For example, chunks of a packet may be pulled from a high priority buffer queue for writing to an outgoing packet while chunks from a low priority buffer queue are pulled and dropped.
  • the disclosed embodiments are not limited by the number of buffer queues 510 allocated for an output buffer 508. Additionally, although FIG. 5 illustrates three buffer queues 510 allocated in each of the output buffers 508, in some embodiments, there may be a different number of buffer queues 510 allocated in different output buffers 508. As will be further described, the network node 500 uses the buffer queues 510 to separate the chunks of a packet so one or more chunks that have been written to an output buffer 508 can be efficiently dropped if needed during a packet wash operation.
  • the switching fabric 504 will hard-wire the bit stream to an output buffer 508.
  • a "distinguisher" function may be added to the hardware to distinguish chunks and hand-off the wire-connection between the switching fabric 504 and the output buffer 508 for writing/inserting chunks in a particular buffer queue 510 of the output buffer 508.
  • FIG. 6 is a schematic diagram illustrating a buffer queue mechanism 600 in accordance with an embodiment of the present disclosure.
  • the buffer queue mechanism 600 can be implemented by a network node such as network node 114 in FIG. 4.
  • the network node allocates three buffer queues Q1, Q2, and Q3.
  • the buffer queues Q1, Q2, and Q3 are configured as first-in/first-out (FIFO) queues.
  • the number of buffer queues may vary in different embodiments.
  • Buffer queues Q1, Q2, and Q3 may be buffer queues for an input port/link or an output port/link.
  • the network node receives a packet having a payload comprising three chunks (chunk 1, chunk 2, and chunk 3).
  • the number of chunks in a packet payload may vary. Additionally, the number of chunks received in a packet may be less than the original number of chunks in a packet payload if another network node previously performed packet washing on the packet (i.e., dropped one or more chunks prior to the network node receiving the packet).
  • all chunks within a packet are treated the same (i.e., all chunks have the same priority and any chunk can be dropped).
  • the network node inserts/writes chunks in the order as contained in the packet in a corresponding buffer queue (e.g., chunk 1 in Q1, chunk 2 in Q2, and chunk 3 in Q3). The new chunks are placed at the end or top of the buffer queue.
  • the network node pulls chunks of a packet using a round robin pulling policy (i.e., pull from buffer Q1 first, Q2 second, Q3 last, and then back to Q1, and so on).
  • in the disclosed embodiment, when writing a packet to the wire (i.e., output port/link), if there is no congestion, each chunk is pulled (in its original order) to be written and transmitted on the wire. By pulling the chunks in a round robin manner, the recomposed packet is identical to the original packet.
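The write/pull symmetry just described can be sketched as follows. This is an illustrative Python sketch assuming FIFO queues and uncongested operation, not the disclosed hardware path; the function names are hypothetical.

```python
from collections import deque

def write_packet(chunks, queues):
    # round robin insertion: chunk i goes to queue i mod n
    for i, chunk in enumerate(chunks):
        queues[i % len(queues)].append(chunk)

def read_packet(queues, n_chunks):
    # pulling in the same round robin order recomposes the original packet
    return [queues[i % len(queues)].popleft() for i in range(n_chunks)]

queues = [deque(), deque(), deque()]
write_packet(["c1", "c2", "c3"], queues)
write_packet(["d1", "d2", "d3"], queues)   # a second packet queued behind
assert read_packet(queues, 3) == ["c1", "c2", "c3"]   # identical to the original
assert read_packet(queues, 3) == ["d1", "d2", "d3"]
```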
  • chunk 1, chunk 2, and chunk 3 may be placed in any of the three buffer queues (e.g., chunk 3 in Q1, chunk 2 in Q2, and chunk 1 in Q3).
  • each chunk may be associated with a priority level or some other indicator that identifies if a chunk can be dropped.
  • the network node determines the priority of the chunks in the packet and inserts the chunks in the appropriate buffer queues.
  • if the network node is configured to always pull chunks from Q1 first, Q2 second, and Q3 last, then the highest priority chunk or chunks are placed in buffer queue Q1, the second highest priority chunks are placed in buffer queue Q2, and the lowest priority chunks are placed in buffer queue Q3.
  • the buffer queues are mapped to Differentiated Services Code Points (DSCP).
  • DSCP is a means of classifying and managing network traffic and of providing quality of service (QoS) in modern Layer 3 IP networks.
  • DSCP uses the DS field in IPv4 and IPv6 packet headers to carry one of 64 distinct DSCP values for the purpose of packet classification.
  • the disclosed embodiments can be combined with DSCP to ensure that packets are treated differently according to their DSCP. For instance, expedited forwarding (EF) packets (identified by DSCP value 46) can be buffered in one or more separate queues from Best Effort (BE) packets (identified by DSCP value 0).
  • the network node can then be configured to pull from the buffer queues for EF packets before pulling from the buffer queues for BE packets.
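Combining the buffer queues with DSCP classes could look like the following sketch. The mapping and names are hypothetical; only the DSCP values 46 (EF) and 0 (BE) are taken from the description above.

```python
from collections import deque

# hypothetical mapping from DSCP value to that class's buffer queues
queue_groups = {46: [deque(["ef-c1"])],   # Expedited Forwarding
                0:  [deque(["be-c1"])]}   # Best Effort
SERVICE_ORDER = [46, 0]                   # EF queues served strictly before BE

def pull_next_chunk():
    for dscp in SERVICE_ORDER:
        for q in queue_groups[dscp]:
            if q:
                return q.popleft()
    return None  # nothing buffered

assert pull_next_chunk() == "ef-c1"   # EF chunk pulled first
assert pull_next_chunk() == "be-c1"   # BE only once EF queues are empty
```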
  • the packet can also include packet header information and other non-payload data (e.g., PW specification 206 in FIG. 3).
  • packet header information and other non-payload data may be placed in the same buffer queue as the highest priority chunk or chunks and are pulled along with the corresponding chunks of the packet from the buffer queue when the network is ready to transmit the packet.
  • the packet header information and other non-payload data may be stored by the network node in another memory location or in a separate buffer.
  • the network node pulls all the chunks of a packet from the bottom/beginning of the buffer queues (i.e., the chunks of the oldest packet in the buffer queues), generates the packet having a payload of all the pulled chunks, and transmits the packet towards the intended destination of the packet.
  • the packet is forwarded in its entirety.
  • when there is network congestion that causes the data in the buffer queues to exceed a first threshold amount (e.g., threshold 1 in FIG. 6), the network node is configured to drop a chunk from the packet. Other conditions for dropping one or more chunks, as previously described, are possible.
  • the first threshold is not limited to any particular amount. For example, as shown in FIG. 6, because threshold 1 has been met, and assuming chunk 3 is a droppable chunk, the network node pulls the bottom chunks from each of the buffer queues (i.e., chunk 1, chunk 2, and chunk 3 of a packet) and drops chunk 3 from the packet. The packet is then forwarded with just chunk 1 and chunk 2.
  • by dropping a chunk, the network node increases its processing capacity, which helps alleviate congestion at the network node. For example, the effective processing capacity is increased by 50%, as 3 packets can be transmitted after the first threshold is met versus only 2 packets before the first threshold is reached. As a consequence, the congestion in the router is reduced faster, as the switch sends packets out (and empties the output buffers) at a rate that is increased by 50% (3 packets versus 2 packets). If the next congestion threshold is met, only the first chunk plus the header is transmitted, and the 2nd and 3rd chunks are dropped from their respective buffers. The transmission rate after that threshold is met is then 3 packets transmitted in the time in which only 1 packet was transmitted before the first threshold.
  • additional threshold levels may be configured at the network node. For example, when the congestion reaches a second threshold (e.g., threshold 2 in FIG. 6), the network node will drop two chunks from the packet (e.g., drop both chunk 2 and chunk 3, assuming that these chunks can be dropped) and only the first chunk and header/non-payload information is transmitted in a packet. More thresholds can be added, until the congestion reaches a threshold (e.g., when the buffer queue capacity is full) such that all chunks are dropped and only the header or other non-payload data is forwarded, or alternatively, the entire packet is dropped (chunks and header).
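The threshold ladder described above can be sketched as a simple occupancy-to-drop mapping. The fractional thresholds below are illustrative assumptions, not values from the disclosure.

```python
def chunks_to_drop(occupancy, capacity, thresholds=(0.5, 0.75, 1.0)):
    """Return how many droppable chunks to wash from the next packet.

    Each congestion threshold crossed by the buffer fill level causes
    one more low-priority chunk to be dropped.
    """
    fill = occupancy / capacity
    return sum(1 for t in thresholds if fill >= t)

assert chunks_to_drop(40, 100) == 0    # below threshold 1: forward intact
assert chunks_to_drop(60, 100) == 1    # threshold 1 met: drop one chunk
assert chunks_to_drop(100, 100) == 3   # queues full: drop all droppable chunks
```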
  • FIG. 7 is a schematic diagram illustrating a buffer queue mechanism 700 in accordance with an embodiment of the present disclosure.
  • the buffer queue mechanism 700 is similar to the buffer queue mechanism 600 described in FIG. 6, except that in this embodiment, there are more buffer queues (buffer queues QI, Q2, and Q3) than the number of chunks (chunk 1 and chunk 2) that are received in a packet. This situation may occur if the original packet had fewer chunks than the number of buffer queues allocated for a port at the receiving network node or if the packet was washed (e.g., a chunk was dropped) by another network node prior to the current network node receiving the packet.
  • the network node writes chunk 1 of the packet to buffer queue QI and writes chunk 2 to buffer queue Q2 (assuming chunk 1 has a higher or same priority as chunk 2).
  • the network node writes a dummy (D) chunk to buffer queue Q3 to maintain packet alignment of the chunks in the buffer queues.
  • the dummy chunk may contain metadata or some type of identifier that indicates to the network node that the chunk is to be discarded when the chunk is pulled from buffer queue Q3. Additional dummy chunks may be used if there are multiple buffer queues that do not contain a chunk for a particular packet.
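The dummy-chunk padding of FIG. 7 can be illustrated with a short sketch (a simplification under assumed data structures; the `DUMMY` marker stands in for whatever metadata a real implementation would carry):

```python
from collections import deque

DUMMY = ("DUMMY", None)  # placeholder discarded when pulled from a queue

def enqueue_packet(queues, chunks):
    """Write one chunk per buffer queue, highest priority first; pad any
    remaining queues with dummy chunks to preserve packet alignment."""
    for i, queue in enumerate(queues):
        queue.append(chunks[i] if i < len(chunks) else DUMMY)

queues = [deque(), deque(), deque()]          # buffer queues QI, Q2, Q3
enqueue_packet(queues, ["chunk1", "chunk2"])  # packet with only 2 chunks
# queues[2] now holds a dummy chunk that keeps the queues aligned
```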
  • a distinguisher function is built into the hardware to distinguish chunks of the same packet based on a label or packet identifier that is associated with each chunk of a packet.
  • a buffer queue may be able to store more than one chunk of a packet per queue.
  • the packet identifier associated with the chunks enables the network node to identify which packet a chunk belongs to so as to avoid asynchronicity (i.e., chunks of different packets being pulled) if a chunk is lost from one of the buffer queues.
  • a reset mechanism can be called at some periodic interval (to be specified) to ensure that the recomposed packets only include chunks from the same initial packet.
  • FIG. 8 is a schematic diagram illustrating a buffer queue mechanism 800 in accordance with an embodiment of the present disclosure.
  • the network node receives a packet having more chunks than the number of buffer queues allocated for the particular port/link. For example, as shown in FIG. 8, the network node receives a packet having a four chunk payload (chunk 1, chunk 2, chunk 3, and chunk 4), but only has three buffer queues (buffer queues QI, Q2, and Q3).
  • the chunks arrive arranged in order of priority (i.e., chunk 1 has the highest priority and chunk 4 has the lowest priority).
  • the network node can be configured to determine the priority of each of the chunks in a packet based on a priority assigned to each of the chunks.
  • the network node is configured to write the lowest priority chunk (in this example, chunk 4) to the last or lowest priority buffer queue Q3, the second lowest priority chunk (chunk 3) to buffer queue Q2, and both chunk 1 and chunk 2 of the packet are written to buffer queue QI.
  • the size allocated to the buffer queue QI may be larger than the other buffer queues to accommodate for more chunks.
  • the network node pulls all the chunks of a packet from buffer queue QI first, then buffer queue Q2, and finally buffer queue Q3. If there is congestion, the network node can drop the lowest priority chunk in buffer queue Q3, and if the congestion exceeds a second threshold, the network node can drop the chunk in buffer queue Q2 next, and so on.
  • the network node can configure each buffer queue to hold floor(number of chunks/number of queues) chunks and the first queue to hold the remainder of the packet.
  • as an example, if a packet includes 7 chunks and the network node has 3 buffer queues, then buffer queues Q2 and Q3 will each hold floor(7/3) = 2 chunks, and the first queue QI will hold the remainder (i.e., 3 total). As described above, the 2 lowest priority chunks are written to buffer queue Q3, the next 2 lowest priority chunks are written to buffer queue Q2, and the remaining 3 chunks of the packet are written to buffer queue QI.
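The floor-based assignment above can be written out as a short sketch (hypothetical helper, assuming the chunks arrive ordered highest priority first):

```python
def distribute(chunks, num_queues):
    """Assign chunks (highest priority first) to buffer queues: every
    queue holds floor(len(chunks) / num_queues) chunks, and the first
    queue additionally absorbs the remainder, so the lowest-priority
    chunks always land in the last queue."""
    base = len(chunks) // num_queues           # floor division
    first = base + len(chunks) % num_queues    # first queue takes remainder
    sizes = [first] + [base] * (num_queues - 1)
    assignment, start = [], 0
    for size in sizes:
        assignment.append(chunks[start:start + size])
        start += size
    return assignment

# 7 chunks over 3 queues: QI holds chunks 1-3, Q2 holds 4-5, Q3 holds 6-7
print(distribute([f"chunk{i}" for i in range(1, 8)], 3))
```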
  • FIG. 9 is a flowchart illustrating a method 900 for processing qualitative packets in accordance with an embodiment of the present disclosure.
  • the method 900 can be performed by a network node such as a router or switch (e.g., network node 114 in FIG. 4).
  • the method 900 includes the network node receiving, at step 902, an incoming packet having a packet payload comprising a plurality of chunks.
  • the network node inserts chunks from the plurality of chunks in separate buffer queues of an output port. As described above, the chunks are inserted at the end of a buffer queue.
  • the buffer queues hold the chunks until a line of the output port is available to transmit data.
  • an output port may have multiple lines and there may be a different set of buffer queues for each line of the output port. Alternatively, there may be a single set of buffer queues for all lines of an output port.
  • the network node determines a number of chunks in the plurality of chunks, and inserts one chunk into each of the buffer queues when the number of chunks equals the number of queues for the output port (e.g., 3 chunks and 3 buffer queues, each buffer queue holding one chunk of the packet). In an embodiment, when the number of chunks is less than the number of queues for the output port, the network node inserts one or more dummy chunks into one or more of the buffer queues. In another embodiment, when the number of chunks is greater than the number of queues for the output port, the network node inserts multiple chunks of a packet into one or more buffer queues (i.e., a single buffer queue can hold multiple chunks of a packet). Alternatively, the network node can determine a priority associated with each of the chunks and insert the chunks in separate buffer queues according to the priority associated with each of the chunks.
  • the network node pulls all chunks of a packet from a beginning of the buffer queues (referred to as a first set of chunks) when a line of the output port is available to transmit data.
  • the network node may employ a round robin pulling policy and pull one chunk from each of the buffer queues per cycle until all the chunks of a packet are removed from the separate buffer queues.
  • the network node may be configured to pull one or more of the chunks from the buffer queues based on a priority level associated with a buffer queue or a priority level associated with a chunk stored in the buffer queue.
  • the network node determination may be based on a congestion level of the network node. For instance, if the congestion level of the network node exceeds a first threshold, the network node may drop a chunk of the packet. Similarly, the network node may drop additional chunks pulled from the buffer queues in response to the congestion level exceeding a second threshold, and so on.
  • the network node determination may be based on other conditions besides a congestion level of the network node.
  • the network node may pull chunks from a buffer queue based on a label or other packet identifier associated with the chunk. For instance, in some embodiments, a buffer queue may hold multiple chunks of a packet. During the pulling process, the network node pulls all chunks having a same packet identifier from a buffer queue. The network node, at step 910, transmits an outgoing packet that includes the second set of chunks over the line of the output port.
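Steps 902 through 910 of method 900 can be condensed into one small model. This is a hedged sketch, not the claimed implementation: the class and parameter names are invented, chunks are tagged with a packet identifier so that only chunks of the same packet are pulled together, and congestion thresholds decide how many tail chunks are dropped.

```python
from collections import deque

class QualitativePort:
    """Toy model of an output port with per-chunk buffer queues."""

    def __init__(self, num_queues, drop_thresholds):
        self.queues = [deque() for _ in range(num_queues)]
        self.thresholds = sorted(drop_thresholds)

    def enqueue(self, pkt_id, chunks):
        # step 904: insert chunks at the end of separate buffer queues
        for i, chunk in enumerate(chunks):
            self.queues[i % len(self.queues)].append((pkt_id, chunk))

    def transmit(self, congestion):
        # step 906: pull every chunk sharing the packet identifier at the
        # head of the queues (the first set of chunks)
        pkt_id = next(q[0][0] for q in self.queues if q)
        pulled = []
        for q in self.queues:
            while q and q[0][0] == pkt_id:
                pulled.append(q.popleft()[1])
        # step 908: drop lowest-priority chunks per congestion thresholds
        drops = min(sum(congestion >= t for t in self.thresholds),
                    len(pulled) - 1)
        # step 910: the outgoing packet carries the surviving chunks
        return pulled[: len(pulled) - drops] if drops else pulled

port = QualitativePort(num_queues=3, drop_thresholds=[75, 90])
port.enqueue("p1", ["chunk1", "chunk2", "chunk3"])
print(port.transmit(congestion=80))  # first threshold met -> chunk3 dropped
```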
  • FIG. 10 is a schematic architecture diagram of an apparatus 1000 according to an embodiment of the disclosure.
  • the apparatus 1000 is suitable for implementing the disclosed embodiments as described herein.
  • the apparatus 1000 can be deployed as a router, a switch, and/or other network nodes within a network.
  • the apparatus 1000 comprises receiver units (Rx) 1020 or receiving means for receiving data via ingress/input ports 1010; a processor 1030, logic unit, central processing unit (CPU) or other processing means to process instructions; transmitter units (TX) 1040 or transmitting means for transmitting via data egress/output ports 1050; and a memory 1060 or data storing means for storing the instructions and various data.
  • the processor 1030 may be implemented as one or more CPU chips, cores (e.g., as a multi-core processor), field-programmable gate arrays (FPGAs), application specific integrated circuits (ASICs), and digital signal processors (DSPs).
  • the processor 1030 is communicatively coupled via a system bus with the ingress ports 1010, RX 1020, TX 1040, egress ports 1050, and memory 1060.
  • the processor 1030 can be configured to execute instructions stored in memory 1060.
  • the processor 1030 provides a means for determining, creating, indicating, performing, providing, or any other action corresponding to the claims when the appropriate instruction is executed by the processor 1030.
  • the memory 1060 can be any type of memory or component capable of storing data and/or instructions.
  • the memory 1060 may be volatile and/or non-volatile memory such as read-only memory (ROM), random access memory (RAM), ternary content-addressable memory (TCAM), and/or static random-access memory (SRAM).
  • the memory 1060 can also include one or more disks, tape drives, and solid-state drives and may be used as an over-flow data storage device, to store programs when such programs are selected for execution, and to store instructions and data that are read during program execution.
  • the memory 1060 can be memory that is integrated with the processor 1030.
  • the memory 1060 stores a qualitative packet processing module 1070.
  • the qualitative packet processing module 1070 includes data and executable instructions for implementing the disclosed embodiments.
  • the qualitative packet processing module 1070 can include instructions for implementing the method 900 in FIG. 9 as described herein.
  • the inclusion of the qualitative packet processing module 1070 substantially improves the functionality of the apparatus 1000 by enabling QC and New IP within the existing router architecture.
  • the disclosed embodiments enable an efficient mechanism to process qualitative packets in a network node and provide several improvements over existing technology such as, but not limited to, reducing the number of read/write operations (e.g., reading each chunk once and passing it on to the link layer, as opposed to having to read an entire packet), improving efficiency by performing certain steps in parallel (e.g., dropping a chunk from a low priority queue while at the same time pulling chunks from the high priority queue), and eliminating the need to rebuffer chunks of a packet (e.g., currently, if the chunk that is dropped is not at the tail of the packet, then the network node has to write part of the packet to a buffer to remove a chunk in the middle).
  • the disclosed embodiments may be a system, an apparatus, a method, and/or a computer program product at any possible technical detail level of integration
  • the computer program product may include a computer readable storage medium (or media) having computer readable program instructions thereon for causing a processor to carry out aspects of the present disclosure.
  • the computer readable storage medium may be a tangible device that can retain and store instructions for use by an instruction execution device.

Abstract

A method, performed by a network node, for efficiently processing qualitative packets. The method includes the network node receiving an incoming packet having a packet payload comprising a plurality of chunks. The network node inserts chunks from the plurality of chunks in separate buffer queues of an output port at the end of the buffer queues. The network node pulls a first set of chunks from a beginning of the separate buffer queues when a line of the output port is available to transmit data, wherein the first set of chunks comprises all chunks of a packet. The network node drops, based on a network node determination, one or more of the chunks from the first set of chunks to form a second set of chunks. The network node transmits an outgoing packet comprising the second set of chunks over the line of the output port.

Description

An Efficient Mechanism to Process Qualitative Packets in A Router
TECHNICAL FIELD
[0001] The present disclosure is generally related to network communications, and specifically to an efficient mechanism to process qualitative packets in a router.
BACKGROUND
[0002] In the current Internet, packets may be dropped when there is not enough buffer space in the routers due to network congestion or when a packet error occurs during transmission. This practice causes re-transmissions of the packets under reliable transmission protocols, which in turn produces unwanted delay, reduces throughput, and wastes network resources. Recently, a new network service referred to as qualitative communication (QC) seeks to avoid dropping an entire packet by breaking the packet into smaller logical units (referred to as chunks). For instance, using QC, when there is network congestion, a network node (e.g., a router) could perform packet scrubbing or packet washing on the packet with the granularity of chunks (i.e., drop one or more chunks from the packet) depending on congestion level, policy, and chunk meta-data.
SUMMARY
[0003] A first aspect relates to a computer-implemented method for processing packets. The method includes receiving a packet having a packet payload comprising a plurality of chunks; inserting chunks from the plurality of chunks into separate buffer queues of an output port; pulling one or more of the chunks from the buffer queues to form an outgoing packet based on a congestion level of the network node; and transmitting the outgoing packet through the output port.
[0004] In a first implementation form of the computer-implemented method according to the first aspect, inserting chunks from the plurality of chunks into separate buffer queues of an output port includes determining a first number of chunks in the plurality of chunks; and inserting one chunk into each of the buffer queues when the first number of chunks equals a second number of queues.
[0005] In a second implementation form of the computer-implemented method according to the first aspect or any preceding implementation form of the first aspect, inserting chunks from the plurality of chunks into separate buffer queues of an output port includes determining a first number of chunks in the plurality of chunks; and inserting one or more dummy chunks into one or more buffer queues when the first number of chunks is less than a second number of queues.
[0006] In a third implementation form of the computer-implemented method according to the first aspect or any preceding implementation form of the first aspect, inserting chunks from the plurality of chunks into separate buffer queues of an output port includes determining a first number of chunks in the plurality of chunks; and inserting multiple chunks of the plurality of chunks into one or more buffer queues when the first number of chunks is greater than a second number of queues.
[0007] In a fourth implementation form of the computer-implemented method according to the first aspect or any preceding implementation form of the first aspect, the method further includes determining a priority associated with each of the chunks; and inserting the chunks in separate buffer queues according to the priority associated with each of the chunks.
[0008] In a fifth implementation form of the computer-implemented method according to the first aspect or any preceding implementation form of the first aspect, the method further includes pulling the chunks from the separate buffer queues using a round robin pulling policy.
[0009] In a sixth implementation form of the computer-implemented method according to the first aspect or any preceding implementation form of the first aspect, the method further includes pulling the chunks from the separate buffer queues based on a priority level associated with a buffer queue.
[0010] In a seventh implementation form of the computer-implemented method according to the first aspect or any preceding implementation form of the first aspect, the method further includes dropping a chunk from a buffer queue in response to the congestion level exceeding a first threshold.
[0011] In an eighth implementation form of the computer-implemented method according to the first aspect or any preceding implementation form of the first aspect, the method further includes dropping multiple chunks from one or more of the separate buffer queues in response to the congestion level exceeding a second threshold.
[0012] In a ninth implementation form of the computer-implemented method according to the first aspect or any preceding implementation form of the first aspect, the method further includes pulling all chunks having a same packet identifier from the separate buffer queues.
[0013] A second aspect relates to a network node comprising network communication means, a data storage means, and a processing means, the network node specially configured to perform the first aspect or any preceding implementation form of the first aspect.
[0014] A third aspect relates to a computer program product stored on a tangible medium, the computer program product comprising instructions that when executed by a processor of an apparatus causes the apparatus to perform the first aspect or any preceding implementation form of the first aspect.
[0015] For the purpose of clarity, any one of the foregoing embodiments may be combined with any one or more of the other foregoing embodiments to create a new embodiment within the scope of the present disclosure.
[0016] These and other features, and the advantages thereof, will be more clearly understood from the following detailed description taken in conjunction with the accompanying drawings and claims.
BRIEF DESCRIPTION OF THE DRAWINGS
[0017] For a more complete understanding of this disclosure, reference is now made to the following brief description, taken in connection with the accompanying drawings and detailed description, wherein like reference numerals represent like parts.
[0018] FIG. 1 is a schematic diagram illustrating a communication network for transmitting a data packet.
[0019] FIG. 2 is a schematic diagram illustrating a data packet.
[0020] FIG. 3 is a schematic diagram illustrating a data packet that includes a plurality of chunks in accordance with an embodiment of the present disclosure.
[0021] FIG. 4 is a schematic diagram illustrating packet washing in accordance with an embodiment of the present disclosure.
[0022] FIG. 5 is a schematic diagram illustrating a router architecture in accordance with an embodiment of the present disclosure.
[0023] FIG. 6 is a schematic diagram illustrating a buffer queue mechanism in accordance with an embodiment of the present disclosure.
[0024] FIG. 7 is a schematic diagram illustrating a buffer queue mechanism in accordance with an embodiment of the present disclosure.
[0025] FIG. 8 is a schematic diagram illustrating a buffer queue mechanism in accordance with an embodiment of the present disclosure.
[0026] FIG. 9 is a flowchart illustrating a method for processing qualitative packets in accordance with an embodiment of the present disclosure.
[0027] FIG. 10 is a schematic diagram of a network node in accordance with an embodiment of the present disclosure.
DETAILED DESCRIPTION
[0028] It should be understood at the outset that although illustrative implementations of one or more embodiments are provided below, the disclosed systems, computer program products, and/or methods may be implemented using any number of techniques, whether currently known or in existence. The disclosure should in no way be limited to the illustrative implementations, drawings, and techniques illustrated below, including the exemplary designs and implementations illustrated and described herein, but may be modified within the scope of the appended claims along with their full scope of equivalents.
[0029] The present disclosure provides various embodiments for efficiently processing qualitative packets in a router. A qualitative packet as referenced herein is a packet having a payload that is broken into a plurality of chunks. The chunks can have different priorities or may all have the same priority. Upon packet error or network congestion (e.g., when the rate of ingress traffic becomes larger than the amounts that can be forwarded on the output interface, congestion is observed), a forwarding node (i.e., a network node such as a router or switch) selectively removes one or more chunks from the payload based on the relationship among the chunks or the individual significance level of each chunk. The chunks can be also disposable, especially low priority chunks, and not require retransmission upon a chunk being dropped or lost; or the chunks can require reliable transmission and require retransmission upon a loss. In the latter case, the retransmission can be of a chunk identical to the chunk being dropped or of a chunk being part of another packet, or of a chunk carrying similar information as the chunk being dropped, as in the case of encoded chunks where the chunks do not have to be identical as long as they allow proper decoding of the information at the receiver.
[0030] In order for QC to succeed, network nodes need to support some type of chunk-dropping mechanism. One issue with current routers is that it is difficult to remove chunks from a packet that is written into a buffer (e.g., a buffer of an output port). For instance, once a packet is written into a buffer, to remove a particular chunk, all the data in the buffer would need to be copied, the particular chunk would then be removed, and the remaining data written back into the buffer. This process is impractical to implement. Thus, the disclosed embodiments seek to address the above issue by implementing an efficient mechanism to process qualitative packets in a router that achieves QC without modifying a typical router architecture.
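The cost described above can be made concrete with a small sketch (illustrative offsets only): once a packet sits serialized in a single output buffer, removing a middle chunk forces the entire tail of the buffer to be copied and rewritten, which is what the per-chunk queue design avoids.

```python
buffer = bytearray(b"HDR|chunk1|chunk2|chunk3")

def remove_span(buf, start, end):
    """Splice buf[start:end] out of the buffer; every byte after `end`
    must be copied down, i.e., the tail of the buffer is rewritten."""
    tail = bytes(buf[end:])  # copy out all data after the chunk
    del buf[start:]          # truncate the buffer
    buf.extend(tail)         # write the tail back at the new offset
    return buf

remove_span(buffer, 10, 17)  # drop "|chunk2" from the middle
print(buffer)                # bytearray(b'HDR|chunk1|chunk3')
```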
[0031] FIG. 1 is a schematic diagram illustrating a process 100 for communicating data between a source node 110 and a destination node 120 over a communication network 130. The source node 110 and destination node 120 can be any type of electronic device capable of communicating over the communication network 130 such as, but not limited to, a mobile communication device, an Internet of things (loT) device, a personal computer, a server, a router, a mainframe, a database, or any other type of user or network device. For example, the source node 110 can be a media server, and the destination node 120 can be a mobile device that receives media content from the source node 110.
[0032] In the depicted embodiment, the source node 110 executes one or more programs/applications (APP) 102. The application 102 can be any type of software application. The application 102 produces or generates data 104. Data 104 can be any type of data depending on the functions of the application 102. The data 104 can be data that is automatically produced and pushed by the source node 110 to the destination node 120. Alternatively, the data 104 can be data that is specifically requested from the source node 110 by the destination node 120. To communicate the data 104 to the destination node 120, the application 102 on the source node 110 uses an application programming interface (API) to communicate the data 104 to a transport layer 106 of the source node 110 for delivery to the appropriate application 116 on the destination node 120. The transport layer 106 bundles/organizes the data into data packets 112 according to a specific protocol (i.e., packetization). For instance, the transport layer 106 may use various communication protocols such as, but not limited to, Transmission Control Protocol/Internet protocol (TCP/IP) for providing host-to-host communication services such as connection-oriented communication, reliability, flow control, and multiplexing.
[0033] The data packets 112 are transferred to a network layer 108 of the source node 110. The network layer 108 is responsible for packet forwarding including routing of the data packets 112 through one or more network nodes 114 (e.g., routers or switches) of the communication network 130. The communication network 130 can comprise multiple interconnected networks including a local area network (LAN), metropolitan area network (MAN), wide area network (WAN), a wireless or mobile network, and an inter-network (e.g., the Internet). When the data packets 112 reach the destination node 120, data 104 is extracted from the data packets 112 (i.e., depacketized) and passed to the application 116 on the destination node 120.
[0034] FIG. 2 is a schematic diagram illustrating an example of a data packet 200 that can be communicated over the communication network 130 of FIG. 1. The data packet 200 is similar to the data packet 112 of FIG. 1. The data packet 200 includes an IP header (IP HDR) 202 and a payload 204. The IP HDR 202 contains routing information and information (e.g., an identification tag) that enables the data packets 200 to be reassembled after transmission to produce the data 104. For instance, IP networks, such as the Internet, are normally not secure, so the data packet 200 can be lost, can be delayed, and can arrive in the wrong order. The identification tag helps to identify the data packet 200 and to reassemble the data 104 back to its original form. The IP HDR 202 can also contain a checksum and a time to live (TTL) value. The checksum is used for error detection and correction during packet transmission. The TTL value is used to reduce redundant packets in the communication network 130. The payload 204 of the data packet 200 contains the actual data being carried by the data packet 200.
[0035] Currently, within the communication network 130 (e.g., the Internet), packet forwarding is performed based on quality of service (QoS) techniques. The QoS function ensures that data packets 200 that are marked with higher priority are scheduled earlier than data packets 200 that are marked with lower or normal priorities. As a consequence, if outgoing buffers or queues of a network node 114 are full, the lower priority data packets 200 get completely dropped. Any error, due to link congestion or intermittent packet loss in the communication network 130, can trigger re-transmission of the data packets 200. Re-transmission of the data packets 200 wastes network resources, reduces the overall throughput of the connection, and causes longer latency for the packet delivery. The result is that there can be unpredictable delays in the destination node 120 receiving the data packets 200, a significant increase in the network load of the communication network 130, and wasted network resources/capacity. Emerging network applications, such as holographic telepresence, tactile Internet, etc., require extremely low latency. Thus, the current way of handling packet error or network congestion by discarding the data packet 200 entirely is not optimal.
[0036] As previously described, to address the above problem, a recently introduced network service referred to as QC seeks to avoid dropping an entire packet by breaking the packet into smaller logical units (referred to as chunks). For instance, using QC, when there is network congestion, a network node (e.g., a router) could perform packet scrubbing or packet washing on the packet with the granularity of chunks depending on congestion level, policy, and chunk meta-data. The packet wash operation is a function performed by a network node 114 to modify a size of a data packet by removing one or more chunks from a payload of a qualitative packet. In some embodiments, the packet wash operation can add or restore one or more chunks to a packet payload (e.g., a chunk that was previously removed from an earlier packet) while the data packet is en route from a source node to a destination node.
[0037] FIG. 3 is a schematic diagram illustrating a data packet 300 that supports a packet wash operation in accordance with an embodiment of the present disclosure. The data packet 300 includes the IP HDR 202, a packet wash (PW) specification 206, and the payload 204. In an embodiment, the source node 110 creates the PW specification 206. The PW specification 206 can identify the packet as being eligible for PW and implicitly or explicitly describe the significance of the bytes or data payload portions of the payload 204. During the packetization process, the source node 110 breaks the data into a plurality of data payload portions (i.e., chunks of data). For example, in the depicted embodiment, the data payload for the data packet 300 is broken into chunk 4 (C4) 208, chunk 3 (C3) 210, chunk 2 (C2) 212, and chunk 1 (Cl) 214 based on the PW specification 206. The number of chunks that the payload 204 has may vary depending on the level of granularity applied to the significance of the bytes. Each chunk may be associated with particular attributes such as, but not limited to, a priority level or significance value of the chunk. In some embodiments, a binary value (e.g., 0 or 1) can be assigned to each chunk indicating whether the chunk is significant/required or insignificant/disposable. Alternatively, each chunk can be assigned a value within a range (e.g., 0-9) to provide greater granularity of the significance or priority level of a chunk of data. Alternatively, the chunks' priority can be indicated by their arrangement in the packet (e.g., earlier chunks have higher priority than later chunks, or the reverse). The chunks of data may vary in size (i.e., one chunk may contain more information than another chunk).
In an embodiment, the network node 114 performs the packet wash operation by dropping lower-priority chunks from the payload 204 of the data packet 300 according to the information in the PW specification 206 while retaining as much information as possible based on the current network condition. As a non-limiting example for video streaming, the source node 110 could rearrange the bits in the payload 204 such that the first consecutive chunks contain the base layer that encodes the basic video quality, while the next consecutive chunks contain the enhancement layers (e.g., higher signal-to-noise ratio, higher resolution, and higher frame rate). If congestion or another qualifying network condition occurs, the network node 114 can intentionally remove as many of the chunks containing the enhancement layers as necessary without having to request that the data packet 300 be retransmitted by the source node 110. Additionally, the chunks in the packet payload 204 may have a certain relationship among each other. For example, a network coding scheme can be applied where the chunks are linearly coded from the original chunks in the payload and are linearly independent from each other. In this embodiment, dropping any of the linearly coded chunks and keeping the rest of the chunks would still enable the receiver to recover the original data contained in the packet payload.
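As a hedged illustration of the wash decision (the helper name and the byte-budget trigger are assumptions, not the disclosed wire format): given chunks annotated with the significance values described above, a node can drop the least-significant chunks until the payload fits whatever the current network condition allows.

```python
def packet_wash(chunks, max_bytes):
    """chunks: list of (data, significance) pairs; returns the surviving
    chunks in original order, discarding low-significance chunks first
    until the payload fits within max_bytes."""
    keep = list(chunks)
    for candidate in sorted(chunks, key=lambda c: c[1]):
        if sum(len(data) for data, _ in keep) <= max_bytes:
            break
        keep.remove(candidate)
    return keep

base = (b"base-layer", 9)     # basic video quality, most significant
enh1 = (b"enhancement-1", 5)  # higher resolution
enh2 = (b"enhancement-2", 1)  # highest frame rate, least significant
print(packet_wash([base, enh1, enh2], max_bytes=24))
# the least significant enhancement chunk is washed first
```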
[0038] FIG. 4 is a schematic diagram illustrating a packet wash operation in accordance with an embodiment of the present disclosure. In the depicted embodiment, the application 102 on the source node 110 creates the packet wash operation specification, which specifies that the data for the packet can be split into four chunks of data (C4, C3, C2, and C1). The packet wash operation specification can also provide attributes and conditions associated with each of the chunks of data. The attributes indicate the level of significance for each of the chunks of data. The attributes can also indicate the type of information contained in each of the chunks of data. The conditions specify when the packet wash operation can occur. The conditions may also specify when packet retransmission should be requested. The data and the packet wash operation specification are passed to the transport layer 106 for packetization. The transport layer 106 creates a data packet based on the packet wash operation specification. In an embodiment, the data packet may include a flag or a packet wash operation field to indicate that the data packet supports the packet wash operation. Alternatively, the inclusion of a packet wash operation specification in the data packet indicates that the data packet supports the packet wash operation. The packet wash supported data packet (also referred to herein as a qualitative packet) is passed to the network layer 106, which transmits the qualitative packet to the destination node 120 over the communication network 130. In an embodiment, when an intermediate router (e.g., network node 114 in FIG. 1 and FIG. 4) on the communication network 130 receives the qualitative packet and the network conditions are normal, the network node 114 forwards the qualitative packet just like a normal data packet (i.e., a non-qualitative packet).
However, if network conditions at the network node 114 do not enable the qualitative packet to be forwarded without modification, the network node 114 will perform a packet wash operation based on the packet wash operation specification of the data packet if the conditions for performing the packet wash operation are met. Alternatively, in some embodiments, even when the network conditions are normal, the network node 114 may perform the packet wash operation based on one or more other conditions. For instance, if there is a bottleneck further down in the network, and there is a mechanism to notify nodes upstream of this congestion at the bottleneck, some chunks could be dropped earlier. This would avoid transmitting chunks that would need to be dropped at the bottleneck. Other conditions may be some form of policy enforcement. For example, a user may have a Service Level Agreement (SLA) for a certain amount of bandwidth, and chunks can be dropped if the user exceeds the bandwidth of the SLA.
[0039] As a non-limiting example, in the depicted embodiment, based on the network condition and the packet wash operation specification, the network node 114 removes the chunk 4 (C4) 208 from the data packet and forwards the remaining data packet towards the destination node 120. In some embodiments, a new washed data packet may be generated with the remaining chunks of the data packet, and the original data packet may be discarded. Alternatively, in some embodiments, one or more chunks of data are removed from the original data packet, and the remaining chunks of the original data packet are forwarded. However, if the conditions for performing the packet wash operation are not met and the network conditions do not support forwarding the data packet, the network node 114 will drop the data packet and send a request to the source node 110 for retransmission of the data packet. When the data packet arrives at the destination node 120, the data packet is depacketized, and the packet wash operation specification and the data are passed to the application 116. In some embodiments, the application 116 can utilize the packet wash operation specification to determine whether the data has been packet washed and the type of data that was removed. The application 116 may provide a user some indication or notification regarding the data that was not received. Thus, the data that is received at the destination node 120 is not required to be exactly the same as what was sent by the source node 110. However, the received partial or degraded data is still useful to the application 116. For example, if the dropped data consists of enhancement layers, the video can still be displayed in basic form. In some embodiments, the discarded data can be recovered from data received in prior data packets.
For example, if the application 116 determines that the discarded data corresponds to a background color or other item (e.g., the color of a car) or corresponds to an image that was previously received (e.g., a page of a slide presentation that has not changed since the last packet), the application 116 can recover the discarded data by using the data from previous packets. Thus, in some embodiments, the data that is received may be repaired and recovered prior to being rendered.
[0040] As stated above, one issue with current routers/switches is that it is difficult to remove chunks from a packet once the packet has been written into a buffer. For example, when multiple packets have been written into a buffer, there is no easy way to remove a chunk from a packet in the middle of the buffer. Instead, all data from the buffer must be copied, the chunk data removed, and the data rewritten back into the buffer, which is not feasible because input/output (I/O) operations on a buffer are costly and time consuming. To address the above technical issue, the disclosed embodiments enable an efficient mechanism to process qualitative packets in a network node. In an embodiment, the network node allocates a number of buffer queues for each outgoing port of the network node. The network node uses the buffer queues to separate the chunks of a packet into different queues. Thus, to remove a chunk, the network node can pull and discard the chunk from the corresponding queue without having to copy and rewrite data to the buffer, thereby increasing the efficiency of the network node.
[0041] FIG. 5 is a schematic diagram illustrating a routing architecture of a network node 500 in accordance with an embodiment of the present disclosure. The network node 500 may be an example of the network node 114 in FIG. 4. The network node 500 includes a plurality of input ports 502, a switching fabric 504, and a plurality of output ports 506. The plurality of input ports 502 perform the physical layer function of terminating an incoming physical link at the network node 500, and perform link-layer functions needed to interoperate with the link layer at the other side of the incoming link. In some embodiments, a lookup function is also performed at the input ports 502, where a forwarding table is consulted to determine the output port 506 to which an arriving packet will be forwarded via the switching fabric 504. The switching fabric 504 may be a combination of hardware and software that controls traffic to and from the network node 500 with the use of multiple switches (i.e., data comes in on one port and out on another port). In some embodiments, a routing processor (not depicted) may be coupled to the switching fabric 504 for executing routing protocols, computing the forwarding table, and performing other routing functions. Each output port 506 has an output buffer 508. The plurality of output ports 506 store packets received from the switching fabric 504 in their corresponding output buffers 508 and transmit these packets from the output buffers 508.
[0042] In the depicted embodiment, the network node 500 allocates a number of buffer queues 510 for each of the output buffers 508. For example, FIG. 5 depicts three buffer queues 510 allocated in each of the output buffers 508. In other embodiments, the network node 500 can have input queueing (where the buffers are attached to the incoming ports/links), output queueing (where the buffers are attached to the outgoing ports/links), or combined input/output queueing (CIOQ) with both input and output buffers. In some embodiments, multiple output buffers 508 could be attached to a link or port to implement priority queueing, where the network node 500 is configured to pull chunks of a packet from the buffer with the highest priority first, or round robin queueing, where the network node 500 is configured to pull chunks of a packet from each of the queues in a pre-specified order, independent of the number of packets or chunks in the queues (e.g., pulling chunks of a packet from buffer 1, buffer 2, up to buffer n, then back to buffer 1 again). In addition, to increase efficiency, in some embodiments, multiple steps may be performed in parallel. For example, chunks of a packet may be pulled from a high priority buffer queue for writing to an outgoing packet while chunks from a low priority buffer queue are pulled and dropped.
[0043] The disclosed embodiments are not limited by the number of buffer queues 510 allocated for an output buffer 508. Additionally, although FIG. 5 illustrates three buffer queues 510 allocated in each of the output buffers 508, in some embodiments, a different number of buffer queues 510 may be allocated in different output buffers 508. As will be further described, the network node 500 uses the buffer queues 510 to separate the chunks of a packet so that one or more chunks that have been written to an output buffer 508 can be efficiently dropped if needed during a packet wash operation.
[0044] In some embodiments, the switching fabric 504 will hard-wire the bit stream to an output buffer 508. In these embodiments, a "distinguisher" function may be added to the hardware to distinguish chunks and hand-off the wire-connection between the switching fabric 504 and the output buffer 508 for writing/inserting chunks in a particular buffer queue 510 of the output buffer 508.
[0045] FIG. 6 is a schematic diagram illustrating a buffer queue mechanism 600 in accordance with an embodiment of the present disclosure. The buffer queue mechanism 600 can be implemented by a network node such as network node 114 in FIG. 4. In the depicted embodiment, the network node allocates three buffer queues Q1, Q2, and Q3. The buffer queues Q1, Q2, and Q3 are configured as first-in/first-out (FIFO) queues. As stated above, the number of buffer queues may vary in different embodiments. Buffer queues Q1, Q2, and Q3 may be buffer queues for an input port/link or an output port/link.
[0046] As shown in FIG. 6, the network node receives a packet having a payload comprising three chunks (chunk 1, chunk 2, and chunk 3). The number of chunks in a packet payload may vary. Additionally, the number of chunks received in a packet may be less than the original number of chunks in a packet payload if another network node previously performed packet washing on the packet (i.e., dropped one or more chunks prior to the network node receiving the packet).
[0047] In an embodiment, all chunks within a packet are treated the same (i.e., all chunks have the same priority and any chunk can be dropped). In this example, the network node inserts/writes the chunks, in the order contained in the packet, into corresponding buffer queues (e.g., chunk 1 in Q1, chunk 2 in Q2, and chunk 3 in Q3). The new chunks are placed at the end or top of the buffer queue. In this embodiment, the network node pulls chunks of a packet using a round robin pulling policy (i.e., pull from buffer Q1 first, Q2 second, Q3 last, and then back to Q1, and so on). For example, in an embodiment, when writing a packet to the wire (i.e., output port/link), if there is no congestion, the disclosed embodiment pulls each chunk (in its original order) to be written and transmitted on the wire. By pulling the chunks in a round robin manner, the recomposed packet is identical to the original packet.
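The insertion and round robin pulling described above can be sketched as a simplified software model (queue names and function signatures are illustrative assumptions, not an actual router data path):

```python
from collections import deque

NUM_QUEUES = 3
queues = [deque() for _ in range(NUM_QUEUES)]  # Q1, Q2, Q3 as FIFO queues

def insert_packet(chunks):
    """Write each chunk of a packet to its queue: chunk i goes to queue i."""
    for i, chunk in enumerate(chunks):
        queues[i].append(chunk)  # new chunks go to the end of the queue

def pull_packet():
    """Round robin pull: one chunk from Q1, then Q2, then Q3, rebuilding the packet."""
    return [q.popleft() for q in queues if q]

insert_packet(["chunk1", "chunk2", "chunk3"])
insert_packet(["chunk1'", "chunk2'", "chunk3'"])
# Chunks come back in original order, so the recomposed packet equals the original.
assert pull_packet() == ["chunk1", "chunk2", "chunk3"]
assert pull_packet() == ["chunk1'", "chunk2'", "chunk3'"]
```

Because each queue is FIFO and the pull order matches the insert order, the oldest packet is always recomposed first, chunk by chunk, with no buffer copying.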
[0048] It should be noted that because all the chunks are treated the same in this example, in an alternative embodiment, chunk 1, chunk 2, and chunk 3 may be placed in any of the three buffer queues (e.g., chunk 3 in Q1, chunk 2 in Q2, and chunk 1 in Q3). Alternatively, in some embodiments, each chunk may be associated with a priority level or some other indicator that identifies whether a chunk can be dropped. In these embodiments, the network node determines the priority of the chunks in the packet and inserts the chunks in the appropriate buffer queues. For example, if the network node is configured to always pull chunks from Q1 first, Q2 second, and Q3 last, then the highest priority chunk or chunks are placed in buffer queue Q1, the second highest priority chunks are placed in buffer queue Q2, and the lowest priority chunks are placed in buffer queue Q3.
[0049] In an embodiment, the buffer queues are mapped to DiffServ Code Points (DSCP). DSCP is a means of classifying and managing network traffic and of providing quality of service (QoS) in modern Layer 3 IP networks. DSCP uses the DS field in IPv4 and IPv6 packet headers to carry one of 64 distinct DSCP values for the purpose of packet classification. Thus, the disclosed embodiments can be combined with DSCP to ensure that packets are treated differently according to their DSCP. For instance, expedited forwarding (EF) packets (identified by DSCP value 46) can be buffered in one or more separate queues from Best Effort (BE) packets (identified by DSCP value 0). The network node can then be configured to pull from the buffer queues for EF packets before pulling from the buffer queues for BE packets.
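As a non-limiting illustration of combining the buffer queues with DSCP, the mapping below is hypothetical (the queue names are invented for this example); only the code points 46 (Expedited Forwarding) and 0 (Best Effort) come from the DiffServ standard:

```python
# Hypothetical mapping from DSCP values to buffer queue sets; the code points
# 46 (Expedited Forwarding) and 0 (Best Effort) are standard DiffServ values.
DSCP_EF, DSCP_BE = 46, 0

queue_sets = {DSCP_EF: ["EF-Q1", "EF-Q2"], DSCP_BE: ["BE-Q1", "BE-Q2"]}

def queues_for(dscp):
    # Unknown code points fall back to best-effort treatment.
    return queue_sets.get(dscp, queue_sets[DSCP_BE])

# The node pulls from the EF queues before the BE queues.
service_order = queues_for(DSCP_EF) + queues_for(DSCP_BE)
assert service_order == ["EF-Q1", "EF-Q2", "BE-Q1", "BE-Q2"]
```

Keeping a separate set of chunk queues per DSCP class lets the packet wash mechanism coexist with conventional DiffServ scheduling: class selection happens first, then per-class chunk pulling.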
[0050] Additionally, as described in FIG. 3, the packet can also include packet header information and other non-payload data (e.g., PW specification 206 in FIG. 3). Although not depicted in FIG. 6, in an embodiment, the packet header information and other non-payload data may be placed in the same buffer queue as the highest priority chunk or chunks and are pulled along with the corresponding chunks of the packet from the buffer queue when the network is ready to transmit the packet. Alternatively, in some embodiments, the packet header information and other non-payload data may be stored by the network node in another memory location or in a separate buffer.
[0051] In an embodiment, under normal network conditions (e.g., the network is not congested), when the output port is available to send a packet, the network node pulls all the chunks of a packet from the bottom/beginning of the buffer queues (i.e., the chunks of the oldest packet in the buffer queues), generates the packet having a payload of all the pulled chunks, and transmits the packet towards the intended destination of the packet. Thus, when there is no congestion, the packet is forwarded in its entirety.
[0052] In an embodiment, when there is network congestion that causes the data in the buffer queues to exceed a first threshold amount (e.g., threshold 1 in FIG. 6), the network node is configured to drop a chunk from the packet. Other conditions for dropping one or more chunks, as previously described, are also possible. The first threshold is not limited to any particular amount. For example, as shown in FIG. 6, because threshold 1 has been met, and assuming chunk 3 is a droppable chunk, the network node pulls the bottom chunks from each of the buffer queues (i.e., chunk 1, chunk 2, and chunk 3 of a packet) and drops chunk 3 from the packet. The packet is then forwarded with just chunk 1 and chunk 2. By dropping a chunk, the network node increases its effective processing capacity, which helps alleviate congestion at the network node. For example, the effective processing capacity is increased by 50% because 3 packets can be transmitted after the first threshold is met, versus only 2 packets before the first threshold is reached. As a consequence, the congestion in the router is reduced faster because the switch sends packets out (and empties the output buffers) at a rate that is increased by 50% (3 packets versus 2 packets). If the next congestion threshold is met, only the first chunk plus the header is transmitted, and the second and third chunks are dropped from their respective buffer queues. After that threshold is met, 3 packets are transmitted in the time in which only 1 packet was transmitted before the first threshold.
[0053] In some embodiments, additional threshold levels may be configured at the network node. For example, when the congestion reaches a second threshold (e.g., threshold 2 in FIG. 6), the network node will drop two chunks from the packet (e.g., drop both chunk 2 and chunk 3, assuming that these chunks can be dropped) and only the first chunk and header/non-payload information is transmitted in a packet. More thresholds can be added until the congestion reaches a level (e.g., when the buffer queue capacity is full) at which all chunks are dropped and only the header or other non-payload data is forwarded, or alternatively, the entire packet is dropped (chunks and header). Thus, the disclosed embodiments enable an efficient mechanism to process qualitative packets at a network node using existing router architecture.
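The multi-threshold dropping behavior described in the preceding paragraphs can be sketched as follows (the threshold values and function names are illustrative assumptions; chunks are assumed to be ordered highest priority first, so drops come off the tail):

```python
def chunks_to_drop(buffer_fill, thresholds):
    """Number of droppable chunks to remove, given buffer occupancy (0.0-1.0).

    thresholds is an ascending list of fill levels; each level crossed
    drops one more chunk from the low-priority end of the packet.
    """
    return sum(1 for t in thresholds if buffer_fill >= t)

def forward(chunks, buffer_fill, thresholds):
    # Chunks are ordered highest priority first, so drops come off the tail.
    n = chunks_to_drop(buffer_fill, thresholds)
    return chunks[:max(len(chunks) - n, 0)]

packet = ["chunk1", "chunk2", "chunk3"]
thresholds = [0.5, 0.8, 1.0]
assert forward(packet, buffer_fill=0.3, thresholds=thresholds) == packet
assert forward(packet, buffer_fill=0.6, thresholds=thresholds) == ["chunk1", "chunk2"]
assert forward(packet, buffer_fill=0.9, thresholds=thresholds) == ["chunk1"]
```

At full occupancy (fill 1.0, all thresholds crossed), every chunk is dropped, matching the case above where only the header or nothing at all is forwarded.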
[0054] FIG. 7 is a schematic diagram illustrating a buffer queue mechanism 700 in accordance with an embodiment of the present disclosure. The buffer queue mechanism 700 is similar to the buffer queue mechanism 600 described in FIG. 6, except that in this embodiment, there are more buffer queues (buffer queues Q1, Q2, and Q3) than the number of chunks (chunk 1 and chunk 2) received in the packet. This situation may occur if the original packet had fewer chunks than the number of buffer queues allocated for a port at the receiving network node or if the packet was washed (e.g., a chunk was dropped) by another network node prior to the current network node receiving the packet.
[0055] As described above, the network node writes chunk 1 of the packet to buffer queue Q1 and writes chunk 2 to buffer queue Q2 (assuming chunk 1 has a higher or the same priority as chunk 2). In an embodiment, the network node writes a dummy (D) chunk to buffer queue Q3 to maintain packet alignment of the chunks in the buffer queues. The dummy chunk may contain metadata or some type of identifier that indicates to the network node that the chunk is to be discarded when the chunk is pulled from buffer queue Q3. Additional dummy chunks may be used if there are multiple buffer queues that do not contain a chunk for a particular packet.
[0056] Other methods to distinguish the packets are also possible. For example, in an alternative embodiment, a distinguisher function is built into the hardware to distinguish chunks of the same packet based on a label or packet identifier that is associated with each chunk of a packet. In these embodiments, a buffer queue may be able to store more than one chunk of a packet per queue. The packet identifier associated with the chunks enables the network node to identify which packet a chunk belongs to so as to avoid asynchronicity (i.e., chunks of different packets being pulled) if a chunk is lost from one of the buffer queues. Alternatively, in another embodiment, if there is no such label, a reset mechanism can be called at some periodic interval (to be specified) to ensure that the recomposed packets only include chunks from the same initial packet.
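The label-based pulling described above might be modeled as follows (the packet identifiers, queue contents, and `pull_packet` function are hypothetical; the sketch only shows how a per-chunk packet identifier avoids mixing chunks of different packets when one queue is missing a chunk):

```python
from collections import deque

# Each queued entry carries a packet identifier so chunks of the same packet
# can be matched even if a chunk was lost or washed from one queue.
queues = [deque([("p1", "c1"), ("p2", "c1'")]),
          deque([("p2", "c2'")]),              # p1's chunk 2 is missing here
          deque([("p1", "c3"), ("p2", "c3'")])]

def pull_packet(packet_id):
    """Pull only the chunks whose label matches packet_id, leaving others queued."""
    chunks = []
    for q in queues:
        if q and q[0][0] == packet_id:
            chunks.append(q.popleft()[1])
    return chunks

# Recompose p1 without accidentally consuming p2's chunks (no asynchronicity).
assert pull_packet("p1") == ["c1", "c3"]
assert pull_packet("p2") == ["c1'", "c2'", "c3'"]
```

Without the label, the second queue's head (a chunk of p2) would have been pulled into p1's recomposed packet, which is exactly the asynchronicity the identifier prevents.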
[0057] FIG. 8 is a schematic diagram illustrating a buffer queue mechanism 800 in accordance with an embodiment of the present disclosure. In this embodiment, the network node receives a packet having more chunks than the number of buffer queues allocated for the particular port/link. For example, as shown in FIG. 8, the network node receives a packet having a four-chunk payload (chunk 1, chunk 2, chunk 3, and chunk 4), but only has three buffer queues (buffer queues Q1, Q2, and Q3). For ease of explanation, assume that the chunks arrive arranged in order of priority (i.e., chunk 1 has the highest priority and chunk 4 has the lowest priority). If the chunks do not arrive arranged in order of priority, the network node can be configured to determine the priority of each of the chunks in a packet based on a priority assigned to each of the chunks. In an embodiment, the network node is configured to write the lowest priority chunk (in this example, chunk 4) to the last or lowest priority buffer queue Q3, the second lowest priority chunk (chunk 3) to buffer queue Q2, and both chunk 1 and chunk 2 of the packet to buffer queue Q1. In an embodiment, the size allocated to buffer queue Q1 may be larger than that of the other buffer queues to accommodate more chunks. To transmit a packet, the network node pulls all the chunks of a packet from buffer queue Q1 first, then buffer queue Q2, and finally buffer queue Q3. If there is congestion, the network node can drop the lowest priority chunk in buffer queue Q3, and if the congestion exceeds a second threshold, the network node can drop the chunk in buffer queue Q2 next, and so on.
[0058] In an alternative embodiment, the network node can configure each buffer queue to hold floor(number of chunks/number of queues) chunks and the first queue to additionally hold the remainder of the packet. As an example, if a packet includes 7 chunks and the network node has 3 buffer queues, then each buffer queue will hold floor(7/3), which is 2 chunks each, and the first queue Q1 will also hold the remainder (i.e., 3 chunks total). As described above, the 2 lowest priority chunks are written to buffer queue Q3, the next 2 lowest priority chunks are written to buffer queue Q2, and the remaining 3 chunks of the packet are written to buffer queue Q1. Similarly, one or more of the chunks from buffer queue Q3 can be dropped if a first congestion threshold is met, one or more of the chunks from buffer queue Q2 can be dropped if a second congestion threshold is met, and one or more of the chunks from buffer queue Q1 can be dropped if a third congestion threshold is met.

[0059] FIG. 9 is a flowchart illustrating a method 900 for processing qualitative packets in accordance with an embodiment of the present disclosure. The method 900 can be performed by a network node such as a router or switch (e.g., network node 114 in FIG. 4). The method 900 includes the network node receiving, at step 902, an incoming packet having a packet payload comprising a plurality of chunks. The network node, at step 904, inserts chunks in the plurality of chunks into separate buffer queues of an output port. As described above, the chunks are inserted at the end of a buffer queue. The buffer queues hold the chunks until a line of the output port is available to transmit data. In some embodiments, an output port may have multiple lines and there may be a different set of buffer queues for each line of the output port. Alternatively, there may be a single set of buffer queues for all lines of an output port.
In an embodiment, the network node determines a number of chunks in the plurality of chunks, and inserts one chunk into each of the buffer queues when the number of chunks equals the number of queues for the output port (e.g., 3 chunks and 3 buffer queues, each buffer queue holding one chunk of the packet). In an embodiment, when the number of chunks is less than the number of queues for the output port, the network node inserts one or more dummy chunks into one or more of the buffer queues. In another embodiment, when the number of chunks is greater than the number of queues for the output port, the network node inserts multiple chunks of a packet into one or more buffer queues (i.e., a single buffer queue can hold multiple chunks of a packet). Alternatively, the network node can determine a priority associated with each of the chunks and insert the chunks in separate buffer queues according to the priority associated with each of the chunks.
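The three insertion cases described above (as many, fewer, or more chunks than queues) can be captured in one non-limiting sketch (the `distribute` function and dummy marker are illustrative names; chunks are assumed ordered highest priority first):

```python
def distribute(chunks, num_queues, dummy="DUMMY"):
    """Assign a packet's chunks (highest priority first) to num_queues queues.

    - equal counts: one chunk per queue
    - fewer chunks than queues: pad with dummy chunks to keep alignment
    - more chunks than queues: each queue holds floor(n/q) chunks and the
      first (highest priority) queue also takes the remainder
    """
    n = len(chunks)
    if n <= num_queues:
        padded = list(chunks) + [dummy] * (num_queues - n)
        return [[c] for c in padded]
    per_queue = n // num_queues
    first = per_queue + n % num_queues       # Q1 holds the remainder too
    out = [chunks[:first]]
    for i in range(1, num_queues):
        start = first + (i - 1) * per_queue
        out.append(chunks[start:start + per_queue])
    return out

assert distribute(["c1", "c2", "c3"], 3) == [["c1"], ["c2"], ["c3"]]
assert distribute(["c1", "c2"], 3) == [["c1"], ["c2"], ["DUMMY"]]
# 7 chunks over 3 queues: Q1 gets 3 (2 + remainder 1), Q2 and Q3 get 2 each.
assert distribute(list("abcdefg"), 3) == [["a", "b", "c"], ["d", "e"], ["f", "g"]]
```

The last case mirrors the floor(7/3) example in paragraph [0058]: the lowest priority chunks land in the last queue, so congestion-driven drops naturally start there.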
[0060] The network node, at step 906, pulls all chunks of a packet from a beginning of the buffer queues (referred to as a first set of chunks) when a line of the output port is available to transmit data. For example, the network node may employ a round robin pulling policy and pull one chunk from each of the buffer queues per cycle until all the chunks of a packet are removed from the separate buffer queues. Alternatively, the network node may be configured to pull one or more of the chunks from the buffer queues based on a priority level associated with a buffer queue or a priority level associated with a chunk stored in the buffer queue. As described herein, the network node, at step 908, may drop one or more chunks pulled from the first set of chunks based on a network node determination to form a second set of chunks (i.e., all chunks of a packet - dropped chunks = second set of chunks). For example, the network node determination may be based on a congestion level of the network node. For instance, if the congestion level of the network node exceeds a first threshold, the network node may drop a chunk of the packet. Similarly, the network node may drop additional chunks pulled from the buffer queues in response to the congestion level exceeding a second threshold, and so on. As described above, the network node determination may be based on other conditions besides a congestion level of the network node. Further, in some embodiments, the network node may pull chunks from a buffer queue based on a label or other packet identifier associated with the chunk. For instance, in some embodiments, a buffer queue may hold multiple chunks of a packet. During the pulling process, the network node pulls all chunks having a same packet identifier from a buffer queue. The network node, at step 910, transmits an outgoing packet that includes the second set of chunks over the line of the output port.
[0061] FIG. 10 is a schematic architecture diagram of an apparatus 1000 according to an embodiment of the disclosure. The apparatus 1000 is suitable for implementing the disclosed embodiments as described herein. For example, in an embodiment, the network node 114 in FIG. 4 can be implemented using the apparatus 1000. In various embodiments, the apparatus 1000 can be deployed as a router, a switch, and/or other network nodes within a network.
[0062] The apparatus 1000 comprises receiver units (Rx) 1020 or receiving means for receiving data via ingress/input ports 1010; a processor 1030, logic unit, central processing unit (CPU) or other processing means to process instructions; transmitter units (TX) 1040 or transmitting means for transmitting via data egress/output ports 1050; and a memory 1060 or data storing means for storing the instructions and various data.
[0063] The processor 1030 may be implemented as one or more CPU chips, cores (e.g., as a multi-core processor), field-programmable gate arrays (FPGAs), application specific integrated circuits (ASICs), and digital signal processors (DSPs). The processor 1030 is communicatively coupled via a system bus with the ingress ports 1010, RX 1020, TX 1040, egress ports 1050, and memory 1060. The processor 1030 can be configured to execute instructions stored in memory 1060. Thus, the processor 1030 provides a means for determining, creating, indicating, performing, providing, or any other action corresponding to the claims when the appropriate instruction is executed by the processor 1030.
[0064] The memory 1060 can be any type of memory or component capable of storing data and/or instructions. For example, the memory 1060 may be volatile and/or non-volatile memory such as read-only memory (ROM), random access memory (RAM), ternary content-addressable memory (TCAM), and/or static random-access memory (SRAM). The memory 1060 can also include one or more disks, tape drives, and solid-state drives and may be used as an over-flow data storage device, to store programs when such programs are selected for execution, and to store instructions and data that are read during program execution. In some embodiments, the memory 1060 can be memory that is integrated with the processor 1030.
[0065] In one embodiment, the memory 1060 stores a qualitative packet processing module 1070. The qualitative packet processing module 1070 includes data and executable instructions for implementing the disclosed embodiments. For instance, the qualitative packet processing module 1070 can include instructions for implementing the method 900 in FIG. 9 as described herein. The inclusion of the qualitative packet processing module 1070 substantially improves the functionality of the apparatus 1000 by enabling QC and New IP within the current existing router architecture.
[0066] Accordingly, the disclosed embodiments enable an efficient mechanism to process qualitative packets in a network node and provide several improvements over existing technology such as, but not limited to, reducing the number of read/write operations (e.g., reading each chunk once and passing it on to the link layer, as opposed to having to read an entire packet), improving efficiency by performing certain steps in parallel (e.g., dropping a chunk from a low priority queue while at the same time pulling chunks from the high priority queue), and eliminating the need to rebuffer chunks of a packet (e.g., currently, if the chunk that is dropped is not at the tail of the packet, the network node has to rewrite part of the packet to a buffer to remove a chunk in the middle).
[0067] The disclosed embodiments may be a system, an apparatus, a method, and/or a computer program product at any possible technical detail level of integration. The computer program product may include a computer readable storage medium (or media) having computer readable program instructions thereon for causing a processor to carry out aspects of the present disclosure. The computer readable storage medium may be a tangible device that can retain and store instructions for use by an instruction execution device.

[0068] While several embodiments have been provided in the present disclosure, it should be understood that the disclosed systems and methods might be embodied in many other specific forms without departing from the spirit or scope of the present disclosure. The present examples are to be considered as illustrative and not restrictive, and the intention is not to be limited to the details given herein. For example, the various elements or components may be combined or integrated in another system, or certain features may be omitted or not implemented.
[0069] In addition, techniques, systems, subsystems, and methods described and illustrated in the various embodiments as discrete or separate may be combined or integrated with other systems, modules, techniques, or methods without departing from the scope of the present disclosure. Other items shown or discussed as coupled or directly coupled or communicating with each other may be indirectly coupled or communicating through some interface, device, or intermediate component whether electrically, mechanically, or otherwise. Other examples of changes, substitutions, and alterations are ascertainable by one skilled in the art and could be made without departing from the spirit and scope disclosed herein.

Claims

CLAIMS

What is claimed is:
1. A method of processing packets implemented by a network node, the method comprising:
receiving an incoming packet having a packet payload comprising a plurality of chunks;
inserting chunks from the plurality of chunks into separate buffer queues of an output port, wherein the chunks from the incoming packet are inserted at an end of the separate buffer queues;
pulling a first set of chunks from a beginning of the separate buffer queues when a line of the output port is available to transmit data, wherein the first set of chunks comprises all chunks of a packet;
dropping, based on a network node determination, one or more of the chunks from the first set of chunks to form a second set of chunks; and
transmitting an outgoing packet comprising the second set of chunks over the line.
2. The method of claim 1, wherein inserting the chunks from the plurality of chunks into the separate buffer queues of the output port comprises: determining a first number of chunks in the plurality of chunks; and inserting one chunk into each of the separate buffer queues when the first number of chunks equals a second number of queues in the separate buffer queues.
3. The method of claim 1, wherein inserting the chunks from the plurality of chunks into the separate buffer queues of the output port comprises: determining a first number of chunks in the plurality of chunks; and inserting one or more dummy chunks into one or more buffer queues when the first number of chunks is less than a second number of queues in the separate buffer queues.
4. The method of claim 1, wherein inserting the chunks from the plurality of chunks into the separate buffer queues of the output port comprises: determining a first number of chunks in the plurality of chunks; and inserting multiple chunks of the plurality of chunks into a buffer queue of the separate buffer queues when the first number of chunks is greater than a second number of queues in the separate buffer queues.
5. The method according to any of claims 1-4, further comprising: determining a priority associated with each of the chunks; and inserting the chunks in the separate buffer queues according to the priority associated with each of the chunks.
6. The method according to any of claims 1-5, further comprising pulling the first set of chunks from the separate buffer queues using a round robin pulling policy.
7. The method according to any of claims 1-5, further comprising pulling the first set of chunks from the separate buffer queues based on a priority level associated with a buffer queue.
8. The method according to any of claims 1-7, wherein the network node determination is dropping a chunk from a buffer queue in response to a congestion level of the network node exceeding a first threshold.
9. The method according to any of claims 1-8, wherein the network node determination is dropping multiple chunks from one or more of the separate buffer queues in response to a congestion level of the network node exceeding a second threshold.
10. The method according to any of claims 1-9, wherein all the chunks of the packet are pulled from the separate buffer queues based on the chunks having a same packet identifier.
11. A network node comprising at least a processor and a memory storing instructions, wherein the instructions when executed by the processor cause the network node to: receive an incoming packet having a packet payload comprising a plurality of chunks; insert chunks from the plurality of chunks into separate buffer queues of an output port, wherein the chunks from the incoming packet are inserted at an end of the separate buffer queues; pull a first set of chunks from a beginning of the separate buffer queues when a line of the output port is available to transmit data, wherein the first set of chunks comprises all chunks of a packet; drop, based on a network node determination, one or more of the chunks from the first set of chunks to form a second set of chunks; and transmit an outgoing packet comprising the second set of chunks over the line.
12. The network node of claim 11, wherein the instructions for inserting the chunks from the plurality of chunks into the separate buffer queues of the output port further comprise instructions to: determine a first number of chunks in the plurality of chunks; and insert one chunk into each of the separate buffer queues when the first number of chunks equals a second number of queues.
13. The network node of claim 11, wherein the instructions for inserting the chunks from the plurality of chunks into the separate buffer queues of the output port further comprise instructions to: determine a first number of chunks in the plurality of chunks; and insert one or more dummy chunks into one or more buffer queues when the first number of chunks is less than a second number of queues in the separate buffer queues.
14. The network node of claim 11, wherein the instructions for inserting the chunks from the plurality of chunks into the separate buffer queues of the output port further comprise instructions to: determine a first number of chunks in the plurality of chunks; and insert multiple chunks of the plurality of chunks into a buffer queue of the separate buffer queues when the first number of chunks is greater than a second number of queues in the separate buffer queues.
15. The network node according to any of claims 11-14, wherein the instructions when executed by the processor further cause the network node to: determine a priority associated with each of the chunks; and insert the chunks in the separate buffer queues according to the priority associated with each of the chunks.
16. The network node according to any of claims 11-15, wherein the instructions when executed by the processor further cause the network node to pull the first set of chunks from the separate buffer queues using a round robin pulling policy.
17. The network node according to any of claims 11-15, wherein the instructions when executed by the processor further cause the network node to pull the first set of chunks from the separate buffer queues based on a priority level associated with a buffer queue.
18. The network node according to any of claims 11-17, wherein the network node determination is to drop a chunk from a buffer queue in response to a congestion level of the network node exceeding a first threshold.
19. The network node according to any of claims 11-18, wherein the network node determination is to drop multiple chunks from one or more of the separate buffer queues in response to a congestion level of the network node exceeding a second threshold.
20. The network node according to any of claims 11-19, wherein all the chunks of the packet are pulled from the separate buffer queues based on the chunks having a same packet identifier.
21. A computer program product stored on a tangible non-transitory medium, the computer program product comprising instructions that when executed by a processor of an apparatus cause the apparatus to: receive an incoming packet having a packet payload comprising a plurality of chunks; insert chunks from the plurality of chunks into separate buffer queues of an output port, wherein the chunks from the incoming packet are inserted at an end of the separate buffer queues; pull a first set of chunks from a beginning of the separate buffer queues when a line of the output port is available to transmit data, wherein the first set of chunks comprises all chunks of a packet; drop, based on an apparatus determination, one or more of the chunks from the first set of chunks to form a second set of chunks; and transmit an outgoing packet comprising the second set of chunks over the line.
22. The computer program product of claim 21, wherein the instructions for inserting the chunks from the plurality of chunks into the separate buffer queues of the output port further comprise instructions to: determine a first number of chunks in the plurality of chunks; and insert one chunk into each of the separate buffer queues when the first number of chunks equals a second number of queues.
23. The computer program product of claim 21, wherein the instructions for inserting the chunks from the plurality of chunks into the separate buffer queues of the output port further comprise instructions to: determine a first number of chunks in the plurality of chunks; and insert one or more dummy chunks into one or more buffer queues when the first number of chunks is less than a second number of queues in the separate buffer queues.
24. The computer program product of claim 21, wherein the instructions for inserting the chunks from the plurality of chunks into the separate buffer queues of the output port further comprise instructions to: determine a first number of chunks in the plurality of chunks; and insert multiple chunks of the plurality of chunks into a buffer queue of the separate buffer queues when the first number of chunks is greater than a second number of queues in the separate buffer queues.
25. The computer program product according to any of claims 21-24, wherein the instructions when executed by the processor further cause the apparatus to: determine a priority associated with each of the chunks; and insert the chunks in the separate buffer queues according to the priority associated with each of the chunks.
26. The computer program product according to any of claims 21-25, wherein the instructions when executed by the processor further cause the apparatus to pull the first set of chunks from the separate buffer queues using a round robin pulling policy.
27. The computer program product according to any of claims 21-25, wherein the instructions when executed by the processor further cause the apparatus to pull the first set of chunks from the separate buffer queues based on a priority level associated with a buffer queue.
28. The computer program product according to any of claims 21-27, wherein the apparatus determination is to drop a chunk from a buffer queue in response to a congestion level of the apparatus exceeding a first threshold.
29. The computer program product according to any of claims 21-28, wherein the apparatus determination is to drop multiple chunks from one or more of the separate buffer queues in response to a congestion level of the apparatus exceeding a second threshold.
30. The computer program product according to any of claims 21-29, wherein all the chunks of the packet are pulled from the separate buffer queues based on the chunks having a same packet identifier.
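The queue-per-chunk mechanism recited in claims 1-10 can be sketched in a few lines of code. The following Python model is illustrative only, not the claimed implementation: the `ChunkedOutputPort` class, its fixed per-queue capacity of 64 entries, the fraction-of-capacity congestion metric, and the significance-based `droppable` test are all assumptions introduced for this sketch. It shows chunks being inserted at the end of separate buffer queues (claim 1), dummy-chunk padding when a packet has fewer chunks than queues (claim 3), wrapping of extra chunks into the same queues (claim 4), pulling all chunks sharing a packet identifier from the queue heads (claim 10), and congestion-based chunk dropping (claims 8-9).

```python
from collections import deque

DUMMY = None  # placeholder chunk used to pad packets shorter than the queue set


class ChunkedOutputPort:
    """Toy model of an output port with one buffer queue per chunk position."""

    def __init__(self, num_queues, congestion_threshold=0.8):
        self.queues = [deque() for _ in range(num_queues)]
        self.congestion_threshold = congestion_threshold

    def insert_packet(self, packet_id, chunks):
        """Insert one packet's chunks at the end of the separate queues."""
        n = len(self.queues)
        # Pad with dummy chunks when the packet has fewer chunks than queues
        # (claim 3); chunks beyond n wrap into earlier queues (claim 4).
        padded = list(chunks) + [DUMMY] * (-len(chunks) % n)
        for i, chunk in enumerate(padded):
            self.queues[i % n].append((packet_id, chunk))

    def congestion_level(self):
        """Assumed metric: fraction of a fixed queue capacity in use."""
        capacity = 64 * len(self.queues)
        return sum(len(q) for q in self.queues) / capacity

    def pull_packet(self):
        """Pull all chunks of the head packet (claim 10), dropping droppable
        chunks when the congestion level exceeds the threshold (claims 8-9)."""
        if not self.queues[0]:
            return None
        congested = self.congestion_level() > self.congestion_threshold
        packet_id = self.queues[0][0][0]
        first_set = []
        for q in self.queues:
            # All chunks of the same packet share a packet identifier.
            while q and q[0][0] == packet_id:
                first_set.append(q.popleft()[1])
        second_set = [c for c in first_set
                      if c is not DUMMY and not (congested and self.droppable(c))]
        return packet_id, second_set

    @staticmethod
    def droppable(chunk):
        # Assumption: a chunk is a (payload, significance) pair and low
        # significance chunks are dropped first under congestion.
        return chunk[1] == "low"
```

For example, inserting a three-chunk packet into a four-queue port adds one dummy chunk, and a subsequent pull returns the three real chunks intact when the port is uncongested; with the threshold forced to zero, the same pull drops the low-significance chunk before transmission.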
Application PCT/US2022/042453 (priority and filing date 2022-09-02): An efficient mechanism to process qualitative packets in a router, published as WO2024049442A1 (en).


Publications (1)

WO2024049442A1, published 2024-03-07.

Family

ID=83457491


Citations (3)

* Cited by examiner, † Cited by third party

- US20030137936A1 * (priority 2002-01-24, published 2003-07-24), Jerome Cornet: System and method for reassembling packets in a network element
- US20050100035A1 * (priority 2003-11-11, published 2005-05-12), Avici Systems, Inc.: Adaptive source routing and packet processing
- WO2021101640A1 * (priority 2020-05-30, published 2021-05-27), Futurewei Technologies, Inc.: Method and apparatus of packet wash for in-time packet delivery


Non-Patent Citations (1)

- Albalawi, Abdulazaz, et al.: "Enhancing End-to-End Transport with Packet Trimming", GLOBECOM 2020 - 2020 IEEE Global Communications Conference, IEEE, 7 December 2020, pages 1-7, XP033882780, DOI: 10.1109/GLOBECOM42002.2020.9322506 *


Legal Events

121 (Ep): The EPO has been informed by WIPO that EP was designated in this application. Ref document number: 22777851; country of ref document: EP; kind code of ref document: A1.