WO2020210780A1 - Chunk based network qualitative services - Google Patents

Chunk based network qualitative services

Info

Publication number
WO2020210780A1
Authority
WO
WIPO (PCT)
Prior art keywords
chunks
packet
data
header
payload
Prior art date
Application number
PCT/US2020/027876
Other languages
French (fr)
Inventor
Kiran MAKHIJANI
Lijun Dong
Cedric Westphal
Renwei Li
Hamed YOUSEFI
Original Assignee
Futurewei Technologies, Inc.
Priority date
Filing date
Publication date
Application filed by Futurewei Technologies, Inc. filed Critical Futurewei Technologies, Inc.
Publication of WO2020210780A1 publication Critical patent/WO2020210780A1/en


Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 47/00 Traffic control in data switching networks
    • H04L 47/10 Flow control; Congestion control
    • H04L 47/34 Flow control; Congestion control ensuring sequence integrity, e.g. using sequence numbers
    • H04L 1/00 Arrangements for detecting or preventing errors in the information received
    • H04L 1/08 Arrangements for detecting or preventing errors in the information received by repeating transmission, e.g. Verdan system
    • H04L 1/12 Arrangements for detecting or preventing errors in the information received by using return channel
    • H04L 1/16 Arrangements for detecting or preventing errors in the information received by using return channel in which the return channel carries supervisory signals, e.g. repetition request signals
    • H04L 1/18 Automatic repetition systems, e.g. Van Duuren systems
    • H04L 1/1829 Arrangements specially adapted for the receiver end
    • H04L 1/1835 Buffer management
    • H04L 1/1845 Combining techniques, e.g. code combining
    • H04L 2001/0092 Error control systems characterised by the topology of the transmission link
    • H04L 2001/0097 Relays

Definitions

  • This disclosure generally relates to data transmission in a network.
  • Network transport control methods are responsible for reliable and in-order delivery of data from a sender to a receiver through various network nodes. Using current control methods, any error due to link congestion or intermittent packet loss in the network can trigger re-transmission of data packets. This results in unpredictable delays as well as an increase in the network load, wasting network resources and capacity.
  • A packet is the fundamental unit upon which network nodes perform different actions such as classification, forwarding, or discarding.
  • Different schemes have been proposed to improve the efficiency of data transmissions and increase predictability, some of which are based on mechanisms for efficient and faster re-transmissions, while others utilize redundant transmissions.
  • One general aspect includes a method of controlling data flows in a network, including: receiving a data packet having a qualitative service header and a data payload, the header defining a plurality of chunks of data in the payload, each chunk being a sub-set of the payload, the header identifying a relationship between each of the plurality of chunks in the payload.
  • The method of controlling data also includes determining that an adverse network condition impedes data flows in the network.
  • The method of controlling data also includes altering one or more of the plurality of chunks in the data packet to address the adverse network condition based on the relationship identified in the header.
  • Other embodiments of this aspect include corresponding computer systems, apparatus, and computer programs recorded on one or more computer storage devices, each configured to perform the actions of the methods.
  • Implementations may include one or more of the following features.
  • The method may further include any of the aforementioned steps and features, where the relationship assigns a significance to each chunk relative to other chunks in the payload.
  • The method may further include any of the aforementioned steps and features, where the relationship is a priority or other measure of significance between the chunks.
  • The method may further include any of the aforementioned steps and features, where the altering includes dropping one or more chunks from the data packet based on the relationship.
  • The method may further include any of the aforementioned steps and features, where the relationship is a priority, the method further including increasing the priority of any packet in which one or more chunks have been dropped.
  • The method may further include any of the aforementioned steps and features, including reading a header in the data packet, the header including control information and metadata, the control information selectively enabling the determining and altering steps.
  • The method may further include any of the aforementioned steps and features, where the control information includes a definition of the adverse network condition and a command to drop the one or more chunks based on the relationship identified in the header when the definition of the adverse network condition is true.
  • The method may further include any of the aforementioned steps and features, where the header includes a command to implement the determining and altering.
  • The method may also include a condition defining when to implement the command.
  • The method may further include any of the aforementioned steps and features, where the header includes a function that defines a relationship operation on the payload.
  • The method may further include any of the aforementioned steps and features, where the header includes a threshold value beyond which chunks cannot be further dropped.
  • The method may further include any of the aforementioned steps and features, where the function is a q-entropy function that, when calculated, determines a quality of a packet based on the number of chunks received.
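The claims name a q-entropy function but do not fix a formula at this point. As a purely illustrative sketch (the function name, arguments, and significance weighting below are assumptions, not the patent's definition), a packet's quality could be computed as the significance-weighted fraction of chunks that actually survived:

```python
def q_entropy(received_flags, significance):
    """Hypothetical q-entropy: quality of a packet, in [0, 1], as the
    significance-weighted fraction of its chunks that were received.

    received_flags: list of bools, one per chunk (True = chunk present)
    significance:   list of non-negative weights, one per chunk
    """
    total = sum(significance)
    if total == 0:
        return 0.0
    # Sum the weights of only the chunks that survived transmission.
    kept = sum(s for flag, s in zip(received_flags, significance) if flag)
    return kept / total
```

Under this sketch, dropping a low-significance chunk reduces the quality score only slightly, which matches the idea that a washed packet can remain usable at the receiver.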
  • The method may further include any of the aforementioned steps and features, where the header includes a significance factor associated with each chunk as per the function.
  • The method may further include any of the aforementioned steps and features, where the header includes an indicator of a location of each chunk in the payload.
  • The method may further include any of the aforementioned steps and features, where the header includes a CRC for each chunk configured to verify the integrity of the chunk.
  • The method may further include any of the aforementioned steps and features, where the header includes a flag to determine whether a chunk was dropped.
  • The method may further include any of the aforementioned steps and features, where the header includes control information and metadata, and the method includes updating the header in the received packet to indicate that chunks are being dropped from the packet.
  • One general aspect includes a network node apparatus, including: a non-transitory memory storage including instructions; and one or more processors in communication with the memory, where the one or more processors execute the instructions to: receive a plurality of data packets, each data packet having a qualitative service header and a data payload, each header defining a plurality of chunks of data in the payload, each chunk being a sub-set of the payload, the header identifying a relationship between each of the plurality of chunks in each data payload; determine that an adverse network condition impedes data flows in the network; and drop one or more of the plurality of chunks in the data packet to address the adverse network condition based on the relationship identified in the header.
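The determine-and-drop behavior recited above can be sketched in a few lines. This is an illustrative model only: the `Chunk` fields, the `packet_wash` name, and the size-based congestion trigger are assumptions for the sketch, not the patented wire format or algorithm.

```python
from dataclasses import dataclass

@dataclass
class Chunk:
    offset: int        # position of the chunk within the payload
    length: int        # chunk size in bytes
    significance: int  # higher value = more important chunk
    data: bytes

def packet_wash(chunks, target_size):
    """Drop the least significant chunks until the payload fits target_size,
    then return the survivors in their original payload order."""
    # Most significant first, so the least significant sit at the tail.
    kept = sorted(chunks, key=lambda c: c.significance, reverse=True)
    size = sum(c.length for c in kept)
    while kept and size > target_size:
        victim = kept.pop()        # shed the current least significant chunk
        size -= victim.length
    return sorted(kept, key=lambda c: c.offset)
```

A node would invoke such a step only after deciding that an adverse condition (e.g., congestion) impedes the flow, forwarding the reduced packet instead of discarding it wholesale.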
  • Other embodiments of this aspect include corresponding computer systems, apparatus, and computer programs recorded on one or more computer storage devices, each configured to perform the actions of the methods.
  • The network node apparatus may include any of the foregoing features, where the one or more processors execute instructions to assign a data significance to each chunk relative to other chunks in the payload, including one of: a priority between the chunks, or an entropy function indicating the quality of the data in each packet.
  • The network node apparatus may include any of the foregoing features, where the data packet includes a header including control information and metadata, the control information selectively enabling the one or more processors to determine and drop.
  • The network node apparatus may include any of the foregoing features, where the header includes a command configured to instruct the processor to implement the determine and drop operations.
  • The network node apparatus may include any of the foregoing features, where the header includes a condition configured to define when the processor should implement the command.
  • The network node apparatus may also include a function that defines a relationship operation for the processor on the data payload.
  • The network node apparatus may also include a threshold value beyond which chunks cannot be further dropped.
  • The network node apparatus may include any of the foregoing features, where the header includes a significance factor associated with each chunk per the function.
  • The network node apparatus may include any of the foregoing features, further including an indicator of a location of each chunk in the payload.
  • The network node apparatus may include any of the foregoing features, further including a CRC for each chunk configured to allow the processor to verify the integrity of the chunk.
  • The network node apparatus may include any of the foregoing features, where the header includes a flag configured to allow the processor to determine whether a chunk was dropped.
  • The network node apparatus may include any of the foregoing features, where the function is a q-entropy function that, when calculated, determines a quality of a packet based on a number of chunks received.
  • The network node apparatus may include any of the foregoing features, where the data packet includes a header including control information and metadata, and instructions update the header in the received packet to indicate that chunks are being dropped from the packet.
  • The network node apparatus may include any of the foregoing features, where the one or more processors execute instructions to: notify a sender of the plurality of chunks of data in the payload that chunks are being dropped from the payload, and in response to the notifying, receive any chunks which were not previously received.
  • The network node apparatus may include any of the foregoing features, where the relationship includes an assigned significance to each chunk relative to other chunks in the payload.
  • The network node apparatus may include any of the foregoing features, where the relationship includes a priority or other measure of significance between the chunks.
  • The network node apparatus may include any of the foregoing features, where the altering includes dropping one or more chunks from the data packet based on the relationship.
  • The network node apparatus may include any of the foregoing features, where the relationship is a priority, the one or more processors further increasing the priority of any packet in which one or more chunks have been dropped.
  • The network node apparatus may include any of the foregoing features, where the one or more processors execute instructions to read a header in the data packet, the header including control information and metadata, the control information selectively enabling the determining and altering steps.
  • The network node apparatus may include any of the foregoing features, where the control information includes a definition of the adverse network condition and a command to drop the one or more chunks based on the relationship identified in the header when the definition of the adverse network condition is true.
  • The network node apparatus may include any of the foregoing features, where the function is a q-entropy function that, when calculated, determines a quality of a packet based on the number of chunks received. Implementations of the described techniques may include hardware, a method or process, or computer software on a computer-accessible medium.
  • One general aspect includes a non-transitory computer-readable medium storing computer instructions for controlling data in a network that, when executed by one or more processors, cause the one or more processors to perform the steps of: receiving a plurality of data packets at an intermediate network node, the data packets including application data output by an application, the plurality of data packets having a qualitative service header and a data payload, the qualitative service header defining a plurality of chunks of data in the data payload of each packet, each chunk being a sub-set of the data in the data payload, the header identifying a relationship between each of the plurality of chunks in the payload and indicating whether chunks in the payload contain all the application data output by the application; determining that an adverse network condition impedes data flows in the network; and altering one or more of the plurality of chunks in one or more of the plurality of packets to address the adverse network condition based on at least one of the relationship between each of the plurality of chunks in the payload and the header indicating whether the chunks in the payload contain all the application data output by the application.
  • Implementations may include the non-transitory computer-readable medium having any of the aforementioned features, where the relationship includes assigning a significance to each chunk relative to other chunks in the payload, including one of: a priority between the chunks, or an entropy function indicating the quality of the data in each packet.
  • Implementations may include the non-transitory computer-readable medium having any of the aforementioned features, where each data packet includes a header and where the header includes a command configured to instruct the processor to implement the determine and drop operations.
  • Implementations may include the non-transitory computer-readable medium having any of the aforementioned features, where a condition is configured to define when the processor should implement the command.
  • Implementations may include the non-transitory computer-readable medium having any of the aforementioned features, which may also include a function that defines a relationship operation for the processor on the data payload.
  • Implementations may include the non-transitory computer-readable medium having any of the aforementioned features, which may also include a threshold value beyond which chunks cannot be further dropped.
  • Implementations may include the non-transitory computer-readable medium having any of the aforementioned features, where the data packet includes a header and where the header includes a significance factor associated with each chunk per the function.
  • Implementations may include the non-transitory computer-readable medium having any of the aforementioned features, which may also include an indicator of a location of each chunk in the payload.
  • Implementations may include the non-transitory computer-readable medium having any of the aforementioned features, which may also include a CRC configured to allow the processor to verify the integrity of each chunk.
  • Implementations may include the non-transitory computer-readable medium having any of the aforementioned features, which may also include a flag configured to allow the processor to determine whether a chunk was dropped.
  • Implementations may include the non-transitory computer-readable medium having any of the aforementioned features, which may also include an assigned significance to each chunk relative to other chunks in the payload.
  • Implementations may include the non-transitory computer-readable medium having any of the aforementioned features, where the relationship is a priority or other measure of significance between the chunks.
  • Implementations may include the non-transitory computer-readable medium having any of the aforementioned features, where the altering includes dropping one or more chunks from the data packet based on the relationship.
  • Implementations may include the non-transitory computer-readable medium having any of the aforementioned features, where the computer instructions cause the one or more processors to perform the step of increasing the priority of any packet in which one or more chunks have been dropped.
  • Implementations may include the non-transitory computer-readable medium having any of the aforementioned features, where the computer instructions cause the one or more processors to perform the step of reading a header in the data packet, the header including control information and metadata, the control information selectively enabling the determining and altering steps.
  • Implementations may include the non-transitory computer-readable medium having any of the aforementioned features, where the control information includes a definition of the adverse network condition and a command to drop the one or more chunks based on the relationship identified in the header when the definition of the adverse network condition is true.
  • Implementations may include the non-transitory computer-readable medium having any of the aforementioned features, where the function is a q-entropy function that, when calculated, determines a quality of a packet based on the number of chunks received.
  • Implementations may include the non-transitory computer-readable medium having any of the aforementioned features, where the computer instructions cause the one or more processors to perform the step of notifying a sender of the plurality of chunks of data in the payload that chunks are being dropped from the payload, and in response to the notifying, receiving any chunks which were not previously received.
  • FIG. 1 illustrates an embodiment of a network environment suitable for use with the present technology.
  • FIG. 2 illustrates an embodiment of a network node comprising a router.
  • FIG. 3 illustrates a header and data format for a qualitative services data transmission framework.
  • FIG. 4 illustrates a method which may be implemented using the qualitative services data transmission framework described herein.
  • FIG. 5 is a flowchart illustrating a process which may be performed at any of the network nodes to perform the packet wash technology described herein.
  • FIG. 6 graphically illustrates the effect of the process of FIG. 5.
  • FIG. 7 is a flowchart illustrating one embodiment of step 525 which determines whether to remove chunks from a given packet.
  • FIG. 8 illustrates a packet format to support qualitative services described herein.
  • FIG. 9 illustrates the packet format of FIG. 8 with additional detail.
  • FIG. 10 illustrates the operation of the packet wash operation using a BPP packet.
  • FIG. 11 illustrates a schematic diagram of a general-purpose network component or computer system.
  • The present disclosure will now be described with reference to the figures, which in general relate to managing network traffic in order to improve network throughput and reliability and reduce latency.
  • The technology includes a data transmission framework comprising application, transport, and network components, which improves data flow through the network when adverse network conditions are encountered.
  • A first aspect of the technology includes methods and apparatus to provide qualitative services in a network system.
  • Network transmissions in the form of data packets have a data payload broken into smaller logical units, with each unit (called a "chunk") having its own significance or priority factor describing its importance in the context of the information carried in the data payload (or having a relationship with other chunks).
  • Qualitative services are applied at the data chunk level, rather than the data packet level.
  • The network transmissions may originate with an application transmitting the data.
  • The application identifies each chunk of the payload data and a qualitative context of the data.
  • The qualitative context is carried with the data in a packet header to each of any network nodes in the network system and a receiver device.
  • A transport layer mechanism manages congestion based on the qualitative context of the data.
  • One form of qualitative service comprises reducing the size of a packet while retaining as much information as possible by dropping lower-priority chunks from the data payload according to the information carried in the qualitative service header. Packets marked with chunks having a higher priority are scheduled for transmission earlier than those with lower or normal priorities. The dropped chunks may not be recovered, but some chunk information which remains may still be usable at the receiver device.
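The scheduling behavior described above (packets carrying higher-priority chunks are sent earlier) can be modeled with an ordinary priority queue. The class name and the choice of a packet's best chunk priority as the sort key are illustrative assumptions, not the patent's mechanism:

```python
import heapq

class QSScheduler:
    """Illustrative transmit scheduler: a packet is keyed by its most
    significant chunk's priority level (smaller number = higher priority),
    so packets with high-priority chunks are dequeued first."""

    def __init__(self):
        self._heap = []
        self._seq = 0  # tie-breaker: preserves FIFO order within a priority

    def push(self, packet, chunk_priorities):
        # min() picks the packet's best (numerically smallest) chunk priority.
        heapq.heappush(self._heap, (min(chunk_priorities), self._seq, packet))
        self._seq += 1

    def pop(self):
        return heapq.heappop(self._heap)[2]
```

For example, a packet containing one High-priority chunk outranks a packet containing only Medium-priority chunks, even if the latter arrived first.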
  • The present technology takes into account the subjective quality of the packet itself, i.e., which aspects of a packet are relatively more significant than others. As the quality associated with chunks may vary from chunk to chunk, such services in networks may be referred to as qualitative services. In another aspect, a packet format for implementing such qualitative services is provided. A data plane technology, called Big Packet Protocol (BPP), is used to implement qualitative services. BPP attaches meta-information or directives into packets, guiding intermediate routers on how to process the packets.
  • FIG. 1 illustrates an embodiment of a network system 50 suitable for use with the present technology.
  • FIG. 1 illustrates a plurality of network enabled devices including, by way of example only, a mobile device 110, a computer 112 and a server 108. The devices are coupled via network data paths 100 and through several network nodes 106-1, 106-2 and 106-3. Each node may be a router, switch or other network component which is operable to route network traffic in any number of transport control methods.
  • Each network enabled device 108, 110, 112 may include one or more applications 114, which generate network data which may be communicated to other network enabled devices. The applications 114 may be executed by a processor utilizing volatile and non-volatile memory to generate and forward the data, as well as communicate with the network interface 122 and qualitative services (QS) engine 125.
  • Each network enabled device 108, 110, 112 may include one or more network interfaces 122 allowing the device to communicate via the network data paths 100.
  • Each network interface 122 may include a QS engine 125 which may include code adapted to receive data described herein from the applications 114 on each device and perform certain aspects of the technology described herein.
  • Each network enabled device is configured to operate and/or communicate in the system 50 as a data sender or a data receiver.
  • Network enabled device 108 is configured to transmit and/or receive data to/from any of the other network devices 110, 112.
  • The data paths 100 may be wireless signals or wired signals, and thus the network system 50 of FIG. 1 may comprise a wired or a wireless network.
  • The environment may include additional or alternative networks, including private and public data-packet networks and corporate intranets.
  • Each network enabled device 108, 110, 112 represents any suitable end user device and may include such devices (or may be referred to) as a user equipment/device, wireless transmit/receive unit, mobile station, fixed or mobile subscriber unit, pager, cellular telephone, personal digital assistant (PDA), smartphone, laptop, computer, touchpad, tablet, wireless sensor, wearable device, consumer electronics device, a target device, device-to-device (D2D) machine type user equipment or user equipment capable of machine-to-machine (M2M) communication, and USB dongles.
  • Each network node 106-1 through 106-3 may likewise include a network interface 124 (or multiple network interfaces) and a QS routing engine 155 allowing the node to perform certain aspects of the technology.
  • The nodes 106-1 to 106-3 may comprise access points which may use technology such as defined by IEEE 802.11h or 802.11ax to provide wireless network access to one or more devices.
  • The term "access point" or "AP" is generally used in this document to refer to an apparatus that provides wireless communication to user equipment through a suitable wireless network, which may include a cellular network; it will be understood that an AP may be implemented by a base station of a cellular network, and the AP may implement the functions of any network node described herein.
  • The nodes can similarly provide wired or wireless network access through networking technologies other than 802.11.
  • FIG. 1 illustrates one example of a communication system.
  • The communication system 100 could include any number of user equipment, access points, networks, or other components in any suitable configuration.
  • FIG. 2 illustrates an embodiment of a network node which may implement a router.
  • The node 200 (e.g., a router) may be, for example, a node 106 or any other node or router as described above in communication system 100.
  • The node 200 may comprise a plurality of input/output ports 210/230 and/or receivers (Rx) 212 and transmitters (Tx) 232 for receiving and transmitting data from other nodes, a processor 220 to process data and determine which node to send the data to, and a memory 222.
  • The node 200 may also generate and distribute data in the form of data packets in the communication system.
  • Although illustrated as a single processor, the processor 220 is not so limited and may comprise multiple processors.
  • the processor 220 may be implemented as one or more central processing unit (CPU) chips, cores (e.g., a multi-core processor), field-programmable gate arrays (FPGAs), application specific integrated circuits (ASICs), and/or digital signal processors (DSPs), and/or may be part of one or more ASICs. Moreover, the processor 220 may be implemented using hardware, software, or both.
  • The memory 222 may be configured to store routing tables, forwarding tables, or other tables or information disclosed herein. Although illustrated as a single memory, memory 222 may be implemented as a combination of read only memory (ROM), random access memory (RAM), or secondary storage (e.g., one or more disk drives or tape drives used for non-volatile storage of data). In one embodiment, memory 222 stores code that enables the processor 220 to implement the QS routing engine 155 and encoder/decoder 185. Memory 222 may include reserved space to implement the coded chunk cache 195.
  • The QS routing engine 155 is in the form of code which is operable to instruct the processor 220 to perform those tasks attributed to the QS routing engine 155.
  • The technology described above may also be implemented on any general-purpose network component, such as a computer or network component with sufficient processing power, memory resources, and network throughput capability to handle the necessary workload placed upon it.
  • The present technology breaks down the packet into smaller logical units, with each unit (called a "chunk") having its own significance or priority factor describing its importance in the context of the information carried in the data payload (or having a relationship with other chunks).
  • Each of the QS engines 125 and QS routing engines 155 in a system 50 can manipulate and modify data packets transmitted in the system at the chunk level by removing chunks of data from the data payload of each packet as needed to address adverse network conditions.
  • Each network enabled device's applications (e.g., applications 114) create packetized network flows.
  • The network nodes 106, and in particular the QS routing engines 155, perform qualitative service packet manipulation in the network environment 50.
  • Packet wash is a packet scrubbing operation that reduces the size of a packet while retaining as much information as possible. It operates by dropping chunks from the data payload to address the adverse network condition.
  • Each node 106 selectively drops parts of the packet payload to reduce packet size and alleviate congestion while forwarding the remainder of the packet to its destination.
  • Each network node 106 makes decisions to drop chunk(s) based on the chunk priority (or significance).
  • FIG. 3 illustrates a general data format and FIG. 4 a flowchart of a method, respectively, for the qualitative services data transmission framework described herein.
  • FIG. 3 illustrates a generic IP packet 302, relative to a qualitative services IP packet 300. While the technology will be described with respect to its use based on an Internet protocol (IP) packet structure, it should be understood that other transport formats may be modified in accordance with the teachings herein.
  • a qualitative services IP packet 300 includes a Qualitative Service (QS) header or QS header and a data payload.
  • the QS header includes an indication that the packet supports qualitative services.
  • the QS header is used to identify the payload structure (the “data” in FIG. 3) as comprising logical chunks. In one embodiment, this comprises using a QS bit in the QS header.
  • Each chunk of data (Data Chunk 1, Data Chunk 2, Data Chunk 3... Data Chunk N) is identified in the QS header.
  • positions of the data chunks are identified by offsets (in the priority level (PL) offsets field) specified in the QS header.
  • the QS header may include a checksum or cyclic redundancy check (CRC) for different chunks so that the integrity of the packet can still be verified even after QS packet drop operation.
  • packet-level checks may be disabled and replaced by chunk-level checks.
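The per-chunk integrity idea can be sketched as follows. This is an illustrative sketch, not the patent's wire format; it assumes a CRC32 per chunk and a parallel list of dropped flags:

```python
import zlib

def chunk_crcs(chunks):
    """Compute one CRC32 per chunk so integrity can be checked per chunk."""
    return [zlib.crc32(c) & 0xFFFFFFFF for c in chunks]

def verify_surviving_chunks(chunks, crcs, dropped):
    """Verify only the chunks not dropped in transit; dropped chunks are
    skipped, so the packet remains verifiable after a packet wash."""
    return all(
        d or (zlib.crc32(c) & 0xFFFFFFFF) == crc
        for c, crc, d in zip(chunks, crcs, dropped)
    )
```

Because each surviving chunk is checked independently, removing a chunk in transit does not invalidate the remainder of the packet.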
  • each chunk has a significance-factor associated with it.
  • three priority levels are shown: High, Medium, and Low. While three priority levels are illustrated, it should be recognized that any number of priority levels may be utilized in accordance with the technology.
  • the significance factor is assigned by the transmitting application on a sender network enabled device.
  • the QS header indicates the relative data priority or significance assigned by the data transmitting application in the PL offsets field. For example, the QS header can indicate different significance or priority levels of each chunk in the payload, and how to identify the payload chunks associated with these priority levels (i.e. by using an offset). In one embodiment, to identify the chunks of data, the QS header specifies a specific offset for each chunk.
  • any suitable means of communicating the location and size of data chunks in the data of packet 300 are suitable for use in accordance with the present technology.
  • the significance information may be associated with each chunk and may be used by the QS routing engines 155 in each of the network nodes 106-1 through 106-3 to manage traffic.
  • in one operation (termed “packet wash” herein), lower significance chunks may be dropped by a network node in the case of a network condition, such as congestion, being detected.
  • each QS routing engine 155 may drop lower priority chunks in low priority packets first (until low priority packets are entirely dropped), then lower priority chunks in medium priority packets, etc., depending on the priority of the chunk and that of the packet, as described below. While three priority levels are shown, the number of priority levels may be based on the network and application. For instance, in a data center, it is sometimes beneficial to cut the payload and only forward the header. This is due to the use of shallow buffers in order to speed up communications. In networks where buffers would fill up more slowly, more priority levels can be supported.
  • the QS header can define the significance in terms of a significance function that assigns the different significance or priority to each chunk of data. This could be explicit (as in the High/Medium/Low shown in FIG. 3, where the significance is embedded in one of three levels) or implicit.
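The header structure described above can be sketched as follows. This is a minimal illustration of the PL offsets idea, not the actual header encoding; the field names and the numeric priority values are assumptions:

```python
from dataclasses import dataclass, field

HIGH, MEDIUM, LOW = 0, 1, 2  # assumed encoding of the three priority levels

@dataclass
class QSHeader:
    qs_bit: bool = True                 # marks the payload as logical chunks
    pl_offsets: list = field(default_factory=list)  # (offset, length, priority)

def build_qs_packet(chunks):
    """Lay chunks back-to-back in the payload and record, for each one,
    an (offset, length, priority) entry in the PL offsets field."""
    header, payload = QSHeader(), bytearray()
    for data, priority in chunks:
        header.pl_offsets.append((len(payload), len(data), priority))
        payload += data
    return header, bytes(payload)
```

A node that receives such a header can locate any chunk in the payload from its offset and length, without inspecting the payload itself.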
  • FIG. 4 illustrates a method performed by the application (i.e. application 114) in conjunction with, for example, a QS engine 125, to ready data for transmission via a network environment 100.
  • an application on a network enabled device (108, 110, 112) outputs a data stream organized initially as IP packets and identifies within the stream, a plurality of chunks, each including a subset of the data in each packet, and an indication of the location and the significance or priority of chunks of data in the data stream.
  • Each IP packet may have a priority set by, for example, a type of service (TOS) precedence field or Differentiated Services Code Point (DSCP) field, as defined by the application.
  • each QS engine 125 calculates the number of chunks which can be forwarded per packet based on the data provided by the application and communicates the packet organization to the application. Alternatively, data in each packet can be pre-marked by an application prior to chunk calculation.
  • chunks are assigned to each packet by, for example, the QS engine 125.
  • the significance or priority information for the chunks assigned to each packet is encoded into the QS header of each packet and a packet priority is assigned per the IP protocol used to transmit the packets.
  • FIGs. 5 and 6 illustrate one form of qualitative service comprising a“packet wash” service which addresses a network issue such as network congestion.
  • Packet wash is a packet scrubbing operation that reduces the size of a packet while retaining as much information as possible. It operates by dropping lower-priority chunks from the data payload according to the information carried in the QS header, helping the network node acting as a data forwarder to understand the significance of (or the relationship between) the chunks. Packet wash ensures that packets are less likely to be dropped entirely by instead dropping portions (or chunks) of the packets, and chunks having a higher priority are less likely to be dropped than those with lower or normal priorities. The dropped chunks might not be recovered, but some chunk information which remains may still be usable at the receiver device. Using the information encoded into each QS header, under adverse network conditions such as resource congestion, lower priority (or less significant) chunks may be dropped.
  • Each network node 106 understands the significance or relationship of the chunks and accordingly makes decisions to drop chunk(s) based on the current situation, such as congestion level, the priority carried in the packet, etc. Chunks with higher significance are less likely to be dropped when qualitative services are applied in the network. As an example, for video streaming, the sender could rearrange the bits in the payload such that the first consecutive chunks contain the base layer, while the next chunks contain the enhancement layers. Thus, in case of congestion, a forwarding node can intentionally remove as many of the chunks containing enhancement layers as necessary.
  • a packet QS header of FIG. 3 may specify: (1) a function through which network nodes treat a packet; (2) a chunk-dependent significance parameter understood by this function; (3) the threshold beyond which a packet cannot be further degraded; and (4) the network condition when it is to be treated. This collectively defines q-entropy.
  • the function is applied when a condition (such as network congestion) is met, and the degradation threshold T has not been reached yet. If the threshold T has been reached, the payload cannot be further reduced because it will be rendered unusable.
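The condition-and-threshold gate described above can be sketched as follows. This is a simplified model in which a chunk is a (significance, dropped) pair with integer significance weights; the exact representation is an assumption:

```python
def quality(chunks):
    """Remaining quality: the sum of significance of surviving chunks."""
    return sum(sig for sig, dropped in chunks if not dropped)

def packet_wash(chunks, congested, threshold):
    """Drop the least significant surviving chunk, but only when the network
    condition holds and the degradation threshold T would not be crossed."""
    if not congested:
        return chunks
    live = [(i, sig) for i, (sig, dropped) in enumerate(chunks) if not dropped]
    if not live:
        return chunks
    i, sig = min(live, key=lambda p: p[1])
    if quality(chunks) - sig < threshold:
        return chunks          # below T the payload would be unusable
    chunks[i] = (sig, True)
    return chunks
```

When the threshold T has been reached, the function leaves the packet untouched, mirroring the rule that the payload cannot be reduced past the point of usability.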
  • FIG. 5 is a flowchart illustrating a process which may be performed at any of network nodes 106-1, 106-2, 106-3 by the QS routing engine 155.
  • a packet including, for example, a QS header and data chunks is received at a network node.
  • the input buffer may be a dedicated hardware buffer or a dedicated portion of memory 222 of FIG. 2.
  • an initial determination is made as to whether there is an issue with the network. In one embodiment, application of a packet wash to a data stream is conditioned upon an (adverse) issue or condition existing in the network.
  • Such issue may comprise a limitation on bandwidth in the network or network congestion which impedes the network’s ability to achieve maximum throughput. If there is no issue with the network at 515, once a check that all chunks have been received has passed at 540, then optionally at 545, an acknowledgment of the packet may be returned to the sender (to maintain operational equivalence with existing TCP/IP standards), and at 550 the packet is passed on to its next routing hop. Alternatively, no acknowledgment that the packet was received need be provided, and in yet another alternative, such an acknowledgment need only be sent when the packet is not received, as described below at 590.
  • Step 525 comprises an analysis of the type of adverse issue affecting the network, and a calculation of how much data needs to be removed from the packets to alleviate the issue successfully and need not be limited to a calculation for one packet.
  • the method first looks to the packet level priority, and then the chunk priority within each packet at that level. For example, if the issue is network congestion, it may be necessary to drop more than one chunk in a packet, or chunks in successive packets, in order to reduce the latency of packet transfer in successive nodes. In one embodiment, as discussed below with respect to FIG. 7A, this calculation may consider multiple packets at the network node, including packets which may be present in an input queue of a network node.
  • each intermediate node uses packet wash as a way to avoid packet drops due to congestion. Retransmission and wait times are thus avoided (unless the packet truly had to be dropped), and qualitative services minimize packet retransmissions because intermediate nodes avoid dropping packets due to congestion through QS treatment.
  • acknowledgment may be sent at 545 indicating to the sender that the congestion exists, and the sender may determine to adjust its configuration of payload data, using smaller or fewer chunks in each packet. If the remaining data is worth sending at 530, then at 555, the QS routing engine 155 in the particular node 106 operating on the packet may rewrite the priority at the packet level to, for example, effectively increase the priority of the remaining lower priority chunks (for example, via a higher priority of the entire packet) so that at the next hop of the packet, the packet has a greater chance of passing through the next hop without having additional chunks removed from the packet. At 560, the packet is then forwarded with the remaining chunks and the rewritten priority of the packet.
  • the method of FIG. 5 is repeated at any network node having a QS routing engine 155.
  • the received packet at such hop will certainly carry less information than the original payload from the sender.
  • the packet will reach its destination receiver.
  • the Q-entropy function on a node helps determine the quality of the packet.
  • a packet sent may not always be the same as the packet received. Therefore, it is important to determine whether the received packet is usable (if the received packet was treated qualitatively), and this determination is performed by calculating a q-entropy function.
  • the operation of the Q-entropy function itself may vary from application to application.
  • one operation may be “equal-weight-trimming” in which all the chunks are of equal value but are treated qualitatively by dropping chunks from the end of the payload. For example, if 2 of 5 chunks with equal priority are dropped in transit, its quality is 0.6 (1 - 0.2*2), and the Q-entropy function has a threshold value that indicates that the packet is usable as long as the value is greater than or equal to 0.5. In a second example, if chunk priority based trimming is used, then the chunks with lower priority are dropped.
  • the quality factor would be 0.8, computed as the sum of the significance of the remaining chunks (e.g., 0.3 + 0.3 + 0.2). If the Q-entropy threshold is 0.7, then the packet is usable. Thus the Q-entropy function has a threshold and uses parameters in the QS header to compute the quality. In addition, by aggregating this over the entire flow, the overall quality of the flow is appropriately determined so that the network operators can adjust the resources, or the receiver can send feedback to the sender.
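The two Q-entropy examples can be reproduced numerically; the weights below are illustrative assumptions rather than values prescribed by the text:

```python
def equal_weight_quality(total_chunks, dropped_chunks):
    """Equal-weight trimming: every chunk contributes 1/total to quality."""
    return 1.0 - dropped_chunks / total_chunks

def priority_quality(weights, dropped):
    """Priority-based trimming: quality is the sum of surviving weights."""
    return sum(w for w, d in zip(weights, dropped) if not d)

def is_usable(quality, threshold):
    """A packet remains usable while its quality meets the threshold."""
    return quality >= threshold
```

With 2 of 5 equal-weight chunks dropped, quality is 0.6 and the packet is usable against a 0.5 threshold; with priority-based trimming that drops the two least significant chunks, the surviving weights can sum to 0.8, usable against a 0.7 threshold.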
  • FIG. 6 graphically illustrates the effect of the process of FIG. 5. Illustrated in FIG. 6 is the transmission of a packet N through different network nodes 106-1, 106-2, 106-3.
  • when packet N arrives at network node 106-1 (at 610a), it arrives with 12 chunks of data 625.
  • a first subset of chunks 615 in Packet N may be classified as high-priority.
  • a second subset of chunks 620 is classified as medium priority data.
  • a third subset of chunks 635 is classified as low priority data.
  • the classification of the chunks of data in the packet is performed by the applications 114 on a network device.
  • upon detecting the condition (for example, network congestion), node 106-2 may remove one or more additional chunks from Packet N.
  • one additional chunk is removed from packet N at node 106-2, resulting in two remaining chunks in lower priority chunk block 635’ arriving at node 106-3.
  • the decision to remove additional chunks may be dependent upon whether or not there is value in removing such chunks and forwarding on the remaining chunks in the packet as described at step 530, and as detailed below with respect to FIG. 7A.
  • Packet N has been reclassified as high-priority. To the extent that packet N is routed further, it will have a higher overall priority against additional chunks being removed. It should be recognized that while the example of FIG. 6 illustrates one chunk being removed at each node, any number of chunks can be removed from a packet as it traverses the network system.
  • the qualitative packet wash technology performs selective trimming of a payload from less significant to more significant chunks. Accordingly, each network forwarding node decides which packet to trim and, for this packet, which chunk(s) to trim. Until the network conditions improve, the receivers receive lesser quality streams. This may become undesirable over a period of time.
  • the receiver can check the modified header of a washed packet and trigger adaptive congestion control by notifying the sender about the level of congestion in the network. As noted with respect to FIG. 5, it can send an acknowledgment with a quality-of-packet value, i.e. an indication of the number of chunks that were dropped, which the sender uses to alter its transmission rate in order to avoid any further drops.
  • the sender can then attempt or choose to gradually increase the rate by adding chunks and determine whether the network can deliver at this rate without any loss.
  • the sender adapts to data rates in the network. This is different from the traditional transport mechanisms where a sender waits for packet loss and requires a re-transmission of an entire packet. In the present technology, the sender need not wait for determination of entire packet loss, nor does it require notification to retransmit all of the data.
  • adaptive rate control utilizes network resources more effectively. Moreover, it significantly reduces data delivery delay by partially delivering the packets as well as dynamically managing packet sizes, which is critically important in emerging real-time applications.
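The sender-side adaptation described above can be sketched as follows. This is a minimal, hypothetical control policy (the text does not prescribe a specific control law): back off when the receiver reports washed chunks, probe upward when delivery was clean:

```python
def adapt_chunks_per_packet(current, reported_quality, floor=1):
    """AIMD-style sketch: shrink the packet when the receiver reports that
    chunks were washed in transit; probe upward by one chunk otherwise."""
    if reported_quality < 1.0:
        return max(floor, current - 1)   # chunks were dropped in transit
    return current + 1                   # try a slightly higher rate
```

Unlike traditional transport, the sender reacts to partial degradation reported per packet rather than waiting for a full packet loss and retransmission.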
  • This use of qualitative services improves network efficiency and fairness among users.
  • the forwarding network node should trim the intact packet as in the case of nodes 106-1 and 106-2 in the example shown in FIG. 6.
  • the forwarding node may give it a higher priority (such as that which occurs in the nodes 106-2 and 106-3 of FIG. 6).
  • the packet wash chunk trimming operation need not be limited to tail (or packet end) drops; a chunk may be removed from anywhere within the payload, since this provides higher flexibility for applications to categorize significance. Tail drops of chunks allow the forwarding network nodes to lower overhead because the amount of buffer shift is minimized. Therefore, the trimming approach chosen by applications (114) should consider the trade-off between performance and flexibility. Nevertheless, in some embodiments the chunks can be arranged in order of priority, such that tail drops follow the desired priority.
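The performance trade-off between the two trimming styles can be illustrated with a small sketch (payload layout and helper names are assumptions): a tail drop is a simple truncation at a chunk boundary, while a mid-payload drop requires shifting the bytes that follow the removed chunk:

```python
def tail_drop(payload, lengths, n_drop):
    """Tail drop: truncate at a chunk boundary; no buffer shift is needed."""
    keep = sum(lengths[: len(lengths) - n_drop])
    return payload[:keep]

def mid_drop(payload, offset, length):
    """Drop a chunk from anywhere in the payload; more flexible for the
    application, but the bytes after the chunk must be shifted forward."""
    return payload[:offset] + payload[offset + length:]
```

If applications arrange chunks in descending significance, tail drops alone follow the desired priority order while keeping the forwarding-node overhead minimal.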
  • FIG. 7 is a flowchart illustrating one embodiment of step 525 which determines whether to remove chunks from a given packet.
  • a determination will be made as to the total number of chunks which need to be dropped in order to alleviate the adverse network condition.
  • the total number of chunks may be a calculation which is derived from a total amount of bytes over a number of packets which need to be removed.
  • Each network node may have enqueued a number of additional packets which are waiting to be processed.
  • Each of these packets may likewise have a priority based on having had chunks removed from the packets, or not having chunks removed from the packets.
  • a determination of the priority of the packet is made, and for each lowest priority packet, the significance of the remaining chunks is made.
  • the method determines whether there are other packets in the queue having lower packet level priority with lower priority chunks. If so, then at 730 the total number of chunks is removed from the lowest priority chunks of the lowest priority packets in the input queues. If not, then the lowest priority chunks in the current packet are selected for removal at 725.
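The selection logic of FIG. 7 can be sketched as follows. The packet representation and the convention that a higher numeric value means lower packet priority are assumptions for illustration:

```python
def select_chunks_to_drop(queue, n_needed):
    """Gather every chunk in the input queue and pick the n lowest, ordering
    by packet priority first (higher number = lower priority, an assumed
    convention), then by chunk significance within the packet."""
    candidates = [
        (pkt["priority"], sig, pkt_idx, chunk_idx)
        for pkt_idx, pkt in enumerate(queue)
        for chunk_idx, sig in enumerate(pkt["chunk_sigs"])
    ]
    candidates.sort(key=lambda c: (-c[0], c[1]))  # lowest-priority packets first
    return [(p, c) for _, _, p, c in candidates[:n_needed]]
```

This mirrors the method's order of operations: the lowest priority packets in the queue surrender their lowest priority chunks before any higher priority packet is touched.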
  • step 530 can be performed through a function, termed herein as a Q-entropy function.
  • step 530 defines a quality threshold beyond which a packet cannot be further degraded. The determination at 530 may vary from application to application.
  • Step 530 determines whether, for chunks dropped in transit, the remaining quality of the chunks in a packet is above a threshold value that indicates that the packet is usable.
  • Referring to FIGs. 8 and 9, a packet framework for qualitative services (and in particular packet washing) using a format referred to herein and in literature as Big Packet Protocol (BPP) is presented. The packet framework is generally illustrated with respect to FIG. 8, with a particular implementation of the command and meta data blocks illustrated with respect to FIG. 9.
  • BPP is a programmable data plane technology compatible with IP networks.
  • with BPP, one may attach meta-information or directives using BPP blocks into packets. This meta-information provides guidance to intermediate routers about processing those packets.
  • the BPP block, shown in FIGs. 8 and 9, allows per-packet behavior for functionality such as in-band per-packet signaling facilities, per-flow path selections, and network-level operator decisions.
  • BPP is useful for implementing qualitative services and in particular the packet washing technology described herein.
  • a qualitative packet can be represented by a BPP contract consisting of a packet wash directive which will have significance-factors as its meta-data. By doing so, network nodes remain unaware of the user-payload, and wash packets only as prescribed by the application(s) 114.
  • FIG. 8 illustrates a general structure of an ethernet encapsulated, BPP packet.
  • the packet 800 is encapsulated within an ethernet frame 802, and the ethernet frame 802 includes an indication 804 that the ethernet type is a BPP-type protocol packet, including BPP header 810 and data payload 850.
  • the BPP header 810 is subdivided into an IP header (or pseudo header) and a BPP block which contains a BPP header, command block and metadata block.
  • the BPP Block format of FIG. 9 contains the following parameters: (1) a command “PacketWash” enabling each network node to perform the packet wash process; (2) a condition (e.g., congestion) under which the command is applied; (3) Qf, a Q-entropy function that defines an operation on the payload, e.g., a priority, binary, or step function; (4) Qthreshold, a threshold value beyond which the chunks cannot be further dropped; and (5) information about each chunk CHi, including: (a) SIGi, a significance-factor associated with the chunk as per the function, e.g., priority order, or a binary 0 or 1 bit; (b) Offi, an offset to describe the location of the chunk in the payload; (c) CRCi, a CRC to verify the integrity of the chunk; and (d) OFi, a flag to determine if the chunk was dropped (which helps data receivers know which chunks have been dropped in the network).
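The per-chunk metadata above can be serialized as in the following sketch. The byte layout here is entirely hypothetical (the actual BPP wire format is not specified in this text); it simply shows the Qf, Qthreshold, SIGi, Offi, CRCi and OFi fields being packed:

```python
import struct

def pack_wash_metadata(qf, q_threshold, chunks):
    """Pack the wash metadata into a hypothetical fixed layout: function id
    (1 byte), threshold in hundredths (1 byte), chunk count (1 byte), then
    per chunk SIGi (1 byte), Offi (2 bytes), CRCi (4 bytes), OFi (1 byte)."""
    blob = struct.pack("!BBB", qf, int(round(q_threshold * 100)), len(chunks))
    for sig, off, crc, dropped in chunks:
        blob += struct.pack("!BHIB", sig, off, crc, 1 if dropped else 0)
    return blob
```

A forwarding node can act on such a block without parsing the user payload, which is the point of carrying the significance metadata in the BPP block.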
  • conditional directive: not all BPP commands are conditional, but packet wash can be a conditional directive to be applied after determining that the network state is adverse and that it is likely the packet will not reach the receiver.
  • latency constraint: if it is determined that a qualitative packet will arrive late at the destination even after qualitative treatment, or at the cost of processing, then it can be dropped.
  • FIG. 10 illustrates the packet wash operation using a BPP packet.
  • the process of FIG. 10 is performed on the network node.
  • a determination is made as to whether the packet under analysis is one of the lowest packet priority in the input queue. If not, the process of FIG. 10 is run on another, lower priority packet at 1015.
  • the QS routing engine 155 extracts the packet wash command and checks for the condition.
  • a condition such as congestion can be determined by checking, for example, if an egress queue of a network node 106 is above a threshold, such as being equal to or greater than 90% full. If at 1020, the condition is not true, the packet is treated in accordance with the packet transport protocol in use at 1030.
  • the packet is forwarded. If at 1020, the condition is true, then at 1040, a determination is made as to whether or not the packet which has arrived is in fact a QS packet. If not, the packet is treated in accordance with the packet transport protocol at 1030, and in such case, the packet may be dropped. If, at 1040, the packet is a QS packet, at 1060 the QS routing engine 155 applies the function in q-entropy to parameters of each chunk in the payload. For example, if a function is binary, parameters have value 0 or 1.
  • the output gives the chunk offset(s) to be dropped; the QS routing engine 155 drops the corresponding bytes at those offsets, and marks the chunks as dropped in the header (OF field).
  • a determination is made as to whether the degradation of the packet exceeds the qualitative threshold. If yes, then the packet is dropped at 1050. If not, then at 1080, a determination is made as to whether the packet will still arrive late at its destination. If not, the QS routing engine forwards the packet to the next network node.
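The node-side flow of FIG. 10 can be condensed into a sketch. The packet representation, the 90% egress-queue condition (taken from the text as an example), and integer significance weights are illustrative assumptions:

```python
def process_qs_packet(pkt, egress_fill, q_threshold, will_arrive_late):
    """Sketch of FIG. 10 on a node: wash only QS packets under congestion,
    and drop a packet degraded below the threshold or sure to arrive late.
    A packet is a dict with 'is_qs' and 'chunks' as (significance, dropped)."""
    congested = egress_fill >= 0.90            # example condition from the text
    if not congested:
        return "forward"
    if not pkt["is_qs"]:
        return "standard"                      # normal transport handling
    live = [i for i, (s, d) in enumerate(pkt["chunks"]) if not d]
    if live:                                   # wash: drop least significant chunk
        i = min(live, key=lambda j: pkt["chunks"][j][0])
        s, _ = pkt["chunks"][i]
        pkt["chunks"][i] = (s, True)
    remaining = sum(s for s, d in pkt["chunks"] if not d)
    if remaining < q_threshold or will_arrive_late:
        return "drop"
    return "forward"
```

Non-QS packets fall back to the standard transport behavior, while QS packets shed chunks and are only dropped outright when further degradation would make them unusable or too late.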
  • the qualitative service functions in the present technology ensure that packets marked with higher priority are scheduled earlier than lower or normal priorities. Consequently, under adverse network conditions such as resource congestion, the lower priority packets or chunks may be dropped.
  • re-transmission of packets can waste network resources, reduce the overall throughput, and cause both longer and unpredictable delay in packet delivery. Not only does the re-transmitted packet have to travel part of the routing path twice, but the sender would not realize the packet has been dropped until timeout or negative-acknowledgement happens, which also adds to the extended waiting time at the sender side before the re-transmission is initiated.
  • the current approach of handling the packet error or network congestion, which discards the packet entirely, is not effective.
  • FIG. 11 illustrates a schematic diagram of a general-purpose network component or computer system.
  • the general-purpose network component or computer system 1100 includes a processor 1102 (which may be referred to as a central processor unit or CPU) that is in communication with memory devices including secondary storage 1104, and memory, such as ROM 1106 and RAM 1108, input/output (I/O) devices 1110, and a network 1112, such as the Internet or any other well-known type of network, that may include network connectivity devices, such as a network interface.
  • a processor 1102 is not so limited and may comprise multiple processors.
  • the processor 1102 may be implemented as one or more CPU chips, cores (e.g., a multi-core processor), FPGAs, ASICs, and/or DSPs, and/or may be part of one or more ASICs.
  • the processor 1102 may be configured to implement any of the schemes described herein.
  • the processor 1102 may be implemented using hardware, software, or both.
  • the secondary storage 1104 is typically comprised of one or more disk drives or tape drives and is used for non-volatile storage of data and as an over-flow data storage device if the RAM 1108 is not large enough to hold all working data.
  • the secondary storage 1104 may be used to store programs that are loaded into the RAM 1108 when such programs are selected for execution.
  • the ROM 1106 is used to store instructions and perhaps data that are read during program execution.
  • the ROM 1106 is a non-volatile memory device that typically has a small memory capacity relative to the larger memory capacity of the secondary storage 1104.
  • the RAM 1108 is used to store volatile data and perhaps to store instructions. Access to both the ROM 1106 and the RAM 1108 is typically faster than to the secondary storage 1104.
  • At least one of the secondary storage 1104 or RAM 1108 may be configured to store routing tables, forwarding tables, or other tables or information disclosed herein.
  • the qualitative service as a native feature of the networks has the following benefits: (a) a packet re-transmission may not be needed if the receiver has the capability to comprehend what is left in the packet after removal of certain chunks from the payload by the intermediate network nodes, and the receiver can recover as much information as needed. In this case, the receiver can acknowledge the acceptance of the packet, while it may also indicate to the sender that it was partially dropped in the network. Network resource usage can be tremendously reduced and better prioritized for the delivery of other packets; and (b) the latency of packet delivery can be significantly reduced due to the absence of re-transmissions.
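The receiver-side behavior described in (a) can be sketched as follows; the feedback structure is a hypothetical illustration of acknowledging a partially delivered packet using the OF flags:

```python
def receiver_feedback(of_flags):
    """Receiver sketch: count washed chunks via the OF flags carried in the
    header and report a quality-of-packet value back to the sender, so the
    packet can be acknowledged without requesting a re-transmission."""
    dropped = sum(1 for of in of_flags if of)
    return {"ack": True, "dropped": dropped,
            "quality": 1.0 - dropped / len(of_flags)}
```

The sender can use the reported quality to adapt its transmission (for example, by using fewer chunks per packet) instead of re-sending the entire payload.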
  • Some of the information contained in the original packet can be recovered by the receiving node, as long as some recovery algorithms or methods are agreed and known in advance by the sender, the forwarding nodes, and the receiver.
  • the algorithms and methods can be carried along with the packet, such that it can be detected and executed by the intermediate network nodes, and revealed to the receiver, which can carry out the reverse operation to recover some or all the information contained in the packet.
  • the technology includes a means of controlling data flows in a network.
  • the means for controlling includes means (106) for receiving a data packet (300) having a qualitative service header and a data payload, the header defining a plurality of chunks (CH) of data in the payload, each chunk being a sub-set of the payload, the header identifying a relationship between each of the plurality of chunks in the payload.
  • the means for controlling includes means for determining (155) that an adverse network condition impedes data flows in the network and for altering one or more of the plurality of chunks in the data packet to address the adverse network condition based on the relationship identified in the header.
  • the means for determining further includes means for dropping one or more chunks from the data packet based on the relationship, wherein the one or more chunks dropped is a minimum number of chunks needed to address the adverse network condition.
  • the technology described herein can be implemented using hardware, firmware, software, or a combination of these.
  • the software or firmware used can be stored on one or more processor readable storage devices to program one or more of the blocks of FIG. 2 to perform the functions described herein.
  • the processor readable storage devices can include computer readable media such as volatile and non-volatile media, removable and non-removable media.
  • computer readable media may comprise computer readable storage media and communication media.
  • Computer readable storage media may be implemented in any method or technology for storage of information such as computer readable instructions, data structures, program modules or other data.
  • Examples of computer readable storage media include RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital versatile disks (DVD) or other optical disk storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to store the desired information, and which can be accessed by the components described above.
  • a computer readable medium or media does (do) not include propagated, modulated or transitory signals.
  • Communication media typically embodies computer readable instructions, data structures, program modules or other data in a propagated, modulated or transitory data signal such as a carrier wave or other transport mechanism and includes any information delivery media.
  • modulated data signal means a signal that has one or more of its characteristics set or changed in such a manner as to encode information in the signal.
  • communication media includes wired media such as a wired network or direct-wired connection, and wireless media such as RF and other wireless media. Combinations of any of the above are also included within the scope of computer readable media.
  • some or all of the software or firmware can be replaced by dedicated hardware logic components.
  • illustrative types of hardware logic components include Field-programmable Gate Arrays (FPGAs), Application-specific Integrated Circuits (ASICs), Application-specific Standard Products (ASSPs), System-on-a-chip systems (SOCs), Complex Programmable Logic Devices (CPLDs), special purpose computers, etc.
  • in other embodiments, some or all of the functionality can be implemented by software stored on a storage device and executed by one or more processors. The one or more processors can be in communication with one or more computer readable media/storage devices, peripherals and/or communication interfaces.
  • each process associated with the disclosed technology may be performed continuously and by one or more computing devices.
  • Each step in a process may be performed by the same or different computing devices as those used in other steps, and each step need not necessarily be performed by a single computing device.


Abstract

A method and apparatus for controlling data flows in a network. A data packet having a qualitative service header and a data payload is received by the method and apparatus. The header defines a plurality of chunks of data in the payload, each chunk being a sub-set of the payload, the header identifying a relationship between each of the plurality of chunks in the payload. When an adverse network condition impedes data flows in the network, the method and apparatus alter one or more of the plurality of chunks in the data packet to address the adverse network condition based on the relationship identified in the header. The altering may comprise dropping one or more chunks from the data packet based on the relationship, wherein the one or more chunks dropped is a minimum number of chunks needed to address the adverse network condition.

Description

CHUNK BASED NETWORK QUALITATIVE SERVICES
CLAIM OF PRIORITY
[0001] This application claims the benefit of U.S. Provisional Patent Application Number 62/833,014 filed on April 12, 2019, U.S. Provisional Patent Application Number 62/833,129 filed on April 12, 2019 and U.S. Provisional Patent Application Number 62/834,730 filed on April 16, 2019.
FIELD
[0002] This disclosure generally relates to data transmission in a network.
BACKGROUND
[0003] Network transport control methods are responsible for reliable and in-order delivery of data from a sender to a receiver through various network nodes. Using current control methods, any error due to link congestion or intermittent packet loss in the network can trigger re-transmission of data packets. This results in unpredictable delays as well as an increase in the network load, wasting network resources and capacity. For packet-based network architectures, a packet is the fundamental unit upon which different actions such as classification, forwarding, or discarding are performed by the network nodes. Different schemes have been proposed to improve the efficiency of data transmissions and increase predictability, some of which are based on mechanisms for efficient and faster re-transmissions, while others utilize redundant transmissions.
SUMMARY
[0004] One general aspect includes a method of controlling data flows in a network, including: receiving a data packet having a qualitative service header and a data payload, the header defining a plurality of chunks of data in the payload, each chunk being a sub-set of the payload, the header identifying a relationship between each of the plurality of chunks in the payload. The method of controlling data also includes determining that an adverse network condition impedes data flows in the network. The method of controlling data also includes altering one or more of the plurality of chunks in the data packet to address the adverse network condition based on the relationship identified in the header. Other embodiments of this aspect include corresponding computer systems, apparatus, and computer programs recorded on one or more computer storage devices, each configured to perform the actions of the methods.
[0005] Implementations may include one or more of the following features. The method may further include a method including the aforementioned steps and features where the method includes: an assigned significance to each chunk relative to other chunks in the payload. The method may further include a method including the aforementioned steps and features where the relationship is a priority or other measure of significance between the chunks. The method may further include a method including the aforementioned steps and features where the altering includes dropping one or more chunks from the data packet based on the relationship. The method may further include a method including the aforementioned steps and features where the relationship is a priority and the method further including increasing a priority of any packet in which one or more chunks have been dropped. The method may further include a method including the aforementioned steps and features where the method includes reading a header in the data packet, the header including control information and metadata, the control information selectively enabling the determining and altering steps. The method may further include a method including the aforementioned steps and features where the control information includes a definition of the adverse network condition and a command to perform the dropping the one or more chunks based on the relationship identified in the header and the definition of the adverse network condition being true. The method may further include a method including the aforementioned steps and features where the header includes: a command to implement the determining and altering. The method may also include a condition defining when to implement the command. The method may further include a method including the aforementioned steps and features where the method includes: a function that defines a relationship operation on the payload. 
The method may further include a method including the aforementioned steps and features where the method includes: a threshold value beyond which the chunks cannot be further dropped. The method may further include a method including the aforementioned steps and features where the function is a q-entropy function that when calculated determines a quality of a packet based on the number of chunks received. The method may further include a method including the aforementioned steps and features where the header includes: a significance factor associated with the chunk as per the function. The method may further include a method including the aforementioned steps and features where the method includes: an indicator of a location of the chunk in the payload. The method may further include a method including the aforementioned steps and features where the method includes: a CRC for each chunk configured to verify the integrity of the chunk. The method may further include a method including the aforementioned steps and features where the method includes: a flag to determine if the chunk was dropped. The method may further include a method including the aforementioned steps and features where the method includes a header including control and metadata, and the method includes updating the header in the received packet to indicate that chunks are being dropped from the packet. The method may further include a method including the aforementioned steps and features where the method includes: notifying a sender of the plurality of chunks of data in the payload that chunks are being dropped from the payload, and in response to the notifying, receiving any chunks which were not previously received. Implementations of the described techniques may include hardware, a method or process, or computer software on a computer-accessible medium.
[0006] One general aspect includes a network node apparatus, including: a non-transitory memory storage including instructions; and one or more processors in communication with the memory, where the one or more processors execute the instructions to: receive a plurality of data packets, each data packet having a qualitative service header and a data payload, each header defining a plurality of chunks of data in the payload, each chunk being a sub-set of the payload, the header identifying a relationship between each of the plurality of chunks in each data payload; determine that an adverse network condition impedes data flows in the network; and drop one or more of the plurality of chunks in the data packet to address the adverse network condition based on the relationship identified in the header. Other embodiments of this aspect include corresponding computer systems, apparatus, and computer programs recorded on one or more computer storage devices, each configured to perform the actions of the methods.
[0007] The network node apparatus may include a network node apparatus having any of the foregoing features where the one or more processors execute instructions to assign a data significance to each chunk relative to other chunks in the payload including one of: a priority between the chunks, or an entropy function indicating the quality of the data in each packet. The network node apparatus may include a network node apparatus having any of the foregoing features where the data packet includes a header including control information and metadata, the control information selectively enabling the one or more processors to determine and drop. The network node apparatus may include a network node apparatus having any of the foregoing features where the header includes: a command configured to instruct the processor to implement the determine and drop. The network node apparatus may include a network node apparatus having any of the foregoing features where the one or more processors execute instructions to include a condition configured to enable when the processor should implement the command. The network node apparatus may also include a function that defines a relationship operation for the processor on the data payload. The network node apparatus may also include a threshold value beyond which the chunks cannot be further dropped. The network node apparatus may include a network node apparatus having any of the foregoing features where the header includes: a significance factor associated with the chunk per the function. The network node apparatus may include a network node apparatus having any of the foregoing features further including an indicator of a location of the chunk in the payload. The network node apparatus may include a network node apparatus having any of the foregoing features further including a CRC for each chunk configured to allow the processor to verify the integrity of the chunk.
The network node apparatus may include a network node apparatus having any of the foregoing features where the one or more processors execute instructions to include a flag configured to allow the processor to determine if the chunk was dropped. The network node apparatus where the function is a q-entropy function that when calculated determines a quality of a packet based on a number of chunks received. The network node apparatus may include a network node apparatus having any of the foregoing features where the data packet includes a header including control and metadata, and instructions updating the header in the received packet to indicate that chunks are being dropped from the packet. The network node apparatus may include a network node apparatus having any of the foregoing features where the one or more processors execute instructions to: notify a sender of the plurality of chunks of data in the payload that chunks are being dropped from the payload, and in response to the notifying, receive any chunks which were not previously received. The network node apparatus may include a network node apparatus having any of the foregoing features where the relationship includes an assigned significance to each chunk relative to other chunks in the payload. The network node apparatus may include a network node apparatus having any of the foregoing features where the relationship includes a priority or other measure of significance between the chunks. The network node apparatus may include a network node apparatus having any of the foregoing features where the altering includes dropping one or more chunks from the data packet based on the relationship. The network node apparatus may include a network node apparatus having any of the foregoing features where the relationship is a priority and the method further including increasing a priority of any packet in which one or more chunks have been dropped.
The network node apparatus may include a network node apparatus having any of the foregoing features where the one or more processors execute instructions to perform the step of reading a header in the data packet, the header including control information and metadata, the control information selectively enabling the determining and altering steps. The network node apparatus may include a network node apparatus having any of the foregoing features where the control information includes a definition of the adverse network condition and a command to perform the dropping the one or more chunks based on the relationship identified in the header and the definition of the adverse network condition being true. The network node apparatus where the function is a q-entropy function that when calculated determines a quality of a packet based on the number of chunks received. Implementations of the described techniques may include hardware, a method or process, or computer software on a computer-accessible medium.
[0008] One general aspect includes a non-transitory computer-readable medium storing computer instructions for controlling data in a network, that when executed by one or more processors, cause the one or more processors to perform the steps of: receiving a plurality of data packets at an intermediate network node, the data packets including application data output by the application, the plurality of data packets having a qualitative service header and a data payload, the qualitative service header defining a plurality of chunks of data in the data payload of each packet, each chunk being a sub-set of the data in the data payload, the header identifying a relationship between each of the plurality of chunks in the payload and indicating whether chunks in the payload contain all the application data output by the application; determining that an adverse network condition impedes data flows in the network; altering one or more of the plurality of chunks in one or more of the plurality of packets to address the adverse network condition based on at least one of the relationship between each of the plurality of chunks in the payload and the header indicating that chunks in the payload contain less than all the application data output by the application. Other embodiments of this aspect include corresponding computer systems, apparatus, and computer programs recorded on one or more computer storage devices, each configured to perform the actions of the methods.
[0009] Implementations may include the non-transitory computer-readable medium having any of the aforementioned features where the relationship includes assigning a significance to each chunk relative to other chunks in the payload including one of: a priority between the chunks, or an entropy function indicating the quality of the data in each packet. Implementations may include the non-transitory computer-readable medium having any of the aforementioned features where each data packet includes a header and where the header includes: a command configured to instruct the processor to implement the determine and drop. Implementations may include the non-transitory computer-readable medium having any of the aforementioned features where a condition is configured to enable when the processor should implement the command. Implementations may include the non-transitory computer-readable medium having any of the aforementioned features which may also include a function that defines a relationship operation for the processor on the data payload. Implementations may include the non-transitory computer-readable medium having any of the aforementioned features which may also include a threshold value beyond which the chunks cannot be further dropped. Implementations may include the non-transitory computer-readable medium having any of the aforementioned features where the data packet includes a header and where the header includes: a significance factor associated with the chunk per the function. Implementations may include the non-transitory computer-readable medium having any of the aforementioned features which may also include an indicator of a location of the chunk in the payload. Implementations may include the non-transitory computer-readable medium having any of the aforementioned features which may also include a CRC configured to allow the processor to verify the integrity of the chunk. 
Implementations may include the non-transitory computer-readable medium having any of the aforementioned features which may also include a flag configured to allow the processor to determine if the chunk was dropped. Implementations may include the non-transitory computer-readable medium having any of the aforementioned features which may also include an assigned significance to each chunk relative to other chunks in the payload. Implementations may include the non-transitory computer-readable medium having any of the aforementioned features where the relationship is a priority or other measure of significance between the chunks. Implementations may include the non-transitory computer-readable medium having any of the aforementioned features where the altering includes dropping one or more chunks from the data packet based on the relationship. Implementations may include the non-transitory computer-readable medium having any of the aforementioned features where the computer instructions cause the one or more processors to perform the step of increasing a priority of any packet in which one or more chunks have been dropped. Implementations may include the non-transitory computer-readable medium having any of the aforementioned features where the computer instructions cause the one or more processors to perform the step of reading a header in the data packet, the header including control information and metadata, the control information selectively enabling the determining and altering steps. Implementations may include the non-transitory computer-readable medium having any of the aforementioned features where the control information includes a definition of the adverse network condition and a command to perform the dropping the one or more chunks based on the relationship identified in the header and the definition of the adverse network condition being true. 
Implementations may include the non-transitory computer-readable medium having any of the aforementioned features where the function is a q-entropy function that when calculated determines a quality of a packet based on the number of chunks received. Implementations may include the non-transitory computer-readable medium having any of the aforementioned features where the computer instructions cause the one or more processors to perform the step of notifying a sender of the plurality of chunks of data in the payload that chunks are being dropped from the payload, and in response to the notifying, receiving any chunks which were not previously received. Implementations may include the non-transitory computer-readable medium having any of the aforementioned features where the data packet includes a header including control and metadata, and the method includes updating the header in the received packet to indicate that chunks are being dropped from the packet. Implementations of the described techniques may include hardware, a method or process, or computer software on a computer-accessible medium.
[0010] This Summary is provided to introduce a selection of concepts in a simplified form that are further described below in the Detailed Description. This Summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used as an aid in determining the scope of the claimed subject matter. The claimed subject matter is not limited to implementations that solve any or all disadvantages noted in the Background.
BRIEF DESCRIPTION OF THE DRAWINGS
[0011] Aspects of the present disclosure are illustrated by way of example and are not limited by the accompanying figures, in which like references indicate like elements.
[0012] FIG. 1 illustrates an embodiment of a network environment suitable for use with the present technology.
[0013] FIG. 2 illustrates an embodiment of a network node comprising a router.
[0014] FIG. 3 illustrates a header and data format for a qualitative services data transmission framework.
[0015] FIG. 4 illustrates a method which may be implemented using the qualitative services data transmission framework described herein.
[0016] FIG. 5 is a flowchart illustrating a process which may be performed at any of network nodes to perform the packet wash technology described herein.
[0017] FIG. 6 graphically illustrates the effect of the process of FIG. 5.
[0018] FIG. 7 is a flowchart illustrating one embodiment of step 525 which determines whether to remove chunks from a given packet.
[0019] FIG. 8 illustrates a packet format to support qualitative services described herein.
[0020] FIG. 9 illustrates the packet format of FIG. 8 with additional detail.
[0021] FIG. 10 illustrates the operation of the packet wash operation using a BPP packet.
[0022] FIG. 11 illustrates a schematic diagram of a general-purpose network component or computer system.
DETAILED DESCRIPTION
[0023] The present disclosure will now be described with reference to the figures, which in general relate to managing network traffic in order to improve network throughput and reliability and reduce latency. The technology includes a data transmission framework which comprises application, transport, and network components, which improves the data flow through the network when adverse network conditions are encountered.
[0024] A first aspect of the technology includes methods and apparatus to provide qualitative services in a network system. To provide qualitative services, network transmissions in the form of data packets have a data payload broken into smaller logical units, with each unit (called a “chunk”) having its own significance- or priority-factor describing its importance in the context of information carried in the data payload (or having a relationship with other chunks). Qualitative services are applied at the data chunk level, rather than the data packet level. The network transmissions may originate with an application transmitting the data. The application identifies each chunk of the payload data and a qualitative context of the data. The qualitative context is carried with the data in a packet header to each of any network nodes in the network system and a receiver device.
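As a rough sketch of the chunking idea described above, a payload might be split into tagged units as follows (the `Chunk` class and `split_payload` helper are invented for illustration and are not part of the disclosed packet format):

```python
from dataclasses import dataclass

@dataclass
class Chunk:
    data: bytes
    significance: int  # application-assigned importance within the payload

def split_payload(payload: bytes, sizes, significances):
    """Split a payload into logical chunks of the given sizes, tagging
    each chunk with its application-assigned significance factor."""
    assert sum(sizes) == len(payload)
    chunks, offset = [], 0
    for size, sig in zip(sizes, significances):
        chunks.append(Chunk(payload[offset:offset + size], sig))
        offset += size
    return chunks
```

The significance values here are arbitrary integers; the disclosure allows any number of levels (e.g., the High/Medium/Low of FIG. 3).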
[0025] In a second aspect, a transport layer mechanism manages congestion based on the qualitative context of the data. One form of qualitative service comprises reducing the size of a packet while retaining as much information as possible by dropping lower-priority chunks from the data payload according to the information carried in the qualitative service header. Packets marked with chunks having a higher priority are scheduled for transmission earlier than those with lower or normal priorities. The dropped chunks may not be recovered, but some chunk information which remains may still be usable at the receiver device.
[0026] In contrast to current major service models in networking, the present technology takes into account subjective quality of the packet itself, i.e., what aspects of a packet are relatively more significant than others. As the quality associated with chunks may vary from each other, such services in networks may be referred to as qualitative services.
[0027] In another aspect, a packet format for implementing such qualitative services is provided. A data plane technology, called Big Packet Protocol (BPP), is used to implement qualitative services. BPP attaches meta-information or directives into packets, guiding intermediate routers on how to process the packets.
[0028] FIG. 1 illustrates an embodiment of a network system 50 suitable for use with the present technology. FIG. 1 illustrates a plurality of network enabled devices including, by way of example only, a mobile device 110, a computer 112 and a server 108. The devices are coupled via network data paths 100 and through several network nodes 106-1, 106-2 and 106-3. Each node may be a router, switch or other network component which is operable to route network traffic using any number of transport control methods.
[0029] Each network enabled device 108, 110, 112 may include one or more applications 114, which generate network data which may be communicated to other network enabled devices. The applications 114 may be executed by a processor utilizing volatile and non-volatile memory to generate and forward the data, as well as communicate with the network interface 122 and qualitative services (QS) engine 125. Each network enabled device 108, 110, 112 may include one or more network interfaces 122 allowing the device to communicate via the network data paths 100. Each network interface 122 may include a QS engine 125 which may include code adapted to receive data described herein from the applications 114 on each device and perform certain aspects of the technology described herein. Each network enabled device is configured to operate and/or communicate in the system 50 as a data sender or a data receiver. For example, network enabled device 108 is configured to transmit and/or receive data to/from any of the other network devices 110, 112. The data paths 100 may be wireless signals or wired signals and thus the network system 50 of FIG. 1 may comprise a wired or a wireless network. The environment may include additional or alternative networks including private and public data-packet networks, and corporate intranets.
[0030] Each network enabled device 108, 110, 112 represents any suitable end user device and may include such devices (or may be referred to) as a user equipment/device, wireless transmit/receive unit, mobile station, fixed or mobile subscriber unit, pager, cellular telephone, personal digital assistant (PDA), smartphone, laptop, computer, touchpad, tablet, wireless sensor, wearable device, consumer electronics device, a target device, device-to-device (D2D) machine type user equipment or user equipment capable of machine-to-machine (M2M) communication, and USB dongles.
[0031] Each network node 106-1 through 106-3 may likewise include a network interface 124 (or multiple network interfaces) and a QS routing engine 155 allowing the node to perform certain aspects of the technology.
[0032] In some embodiments, for example, the nodes 106-1 to 106-3 may comprise access points which may use technology such as defined by IEEE 802.11h or 802.11ax to provide wireless network access to one or more devices. The term “access point” or “AP” is generally used in this document to refer to an apparatus that provides wireless communication to user equipment through a suitable wireless network, which may include a cellular network, and it will be understood that an AP may be implemented by a base station of a cellular network, and the AP may implement the functions of any network node described herein. The nodes can similarly provide wired or wireless network access through networking technologies other than 802.11.
[0033] Although FIG. 1 illustrates one example of a communication system, various changes may be made to FIG. 1. For example, the communication system 100 could include any number of user equipment, access points, networks, or other components in any suitable configuration.
[0034] FIG. 2 illustrates an embodiment of a network node which may implement a router. The node (e.g., a router) 200 may be, for example, a node 106 or any other node or router as described above in communication system 100. The node 200 may comprise a plurality of input/output ports 210/230 and/or receivers (Rx) 212 and transmitters (Tx) 232 for receiving and transmitting data from other nodes, a processor 220 to process data and determine which node to send the data to, and a memory 222. The node 200 may also generate and distribute data in the form of data packets in the communication system. Although illustrated as a single processor, the processor 220 is not so limited and may comprise multiple processors. The processor 220 may be implemented as one or more central processing unit (CPU) chips, cores (e.g., a multi-core processor), field-programmable gate arrays (FPGAs), application-specific integrated circuits (ASICs), and/or digital signal processors (DSPs), and/or may be part of one or more ASICs. Moreover, the processor 220 may be implemented using hardware, software, or both. The memory 222 may be configured to store routing tables, forwarding tables, or other tables or information disclosed herein. Although illustrated as a single memory, memory 222 may be implemented as a combination of read only memory (ROM), random access memory (RAM), or secondary storage (e.g., one or more disk drives or tape drives used for non-volatile storage of data). In one embodiment, memory 222 stores code that enables the processor 220 to implement the QS routing engine 155, and encoder/decoder 185. Memory 222 may include reserved space to implement the coded chunk cache 195.
[0035] In one embodiment, the QS routing engine 155 is in the form of code which is operable to instruct the processor 220 to perform those tasks attributed to the QS routing engine 155.
[0036] The technology described above may also be implemented on any general-purpose network component, such as a computer or network component with sufficient processing power, memory resources, and network throughput capability to handle the necessary workload placed upon it.
[0037] As noted above, the present technology breaks down the packet into smaller logical units with each unit (called a “chunk”) having its own significance- or priority-factor describing its importance in the context of information carried in the data payload (or having a relationship with other chunks).
[0038] Each of the QS engines 125 and QS routing engines 155 in a system 50 can manipulate and modify data packets transmitted in the system at the chunk level by removing chunks of data from the data payload of each packet as needed to address adverse network conditions. Each network enabled device’s applications (e.g. applications 114) create packetized network flows. The network nodes 106, and in particular the QS routing engines 155, perform qualitative service packet manipulation in the network environment 50.
[0039] One such qualitative service enabled by the present technology is termed “packet wash” herein. In a packet wash service, chunks may be dropped by a network node in the case of an adverse network condition, such as congestion, being detected. Packet wash is a packet scrubbing operation that reduces the size of a packet while retaining as much information as possible. It operates by dropping chunks from the data payload to address the adverse network condition. Each node 106 selectively drops parts of the packet payload to reduce packet size and alleviate congestion while forwarding the remainder of the packet to its destination. Each network node 106 makes decisions to drop chunk(s) based on the chunk priority (or significance).
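The packet wash operation can be sketched as dropping the lowest-significance chunks until the remaining payload fits the available capacity. The `packet_wash` helper below and its greedy lowest-first policy are assumptions for illustration; the disclosure leaves the node's exact drop policy to the directives carried in the header:

```python
def packet_wash(chunks, max_bytes):
    """Drop lowest-significance chunks until the remaining payload fits
    in max_bytes; returns (kept, dropped) lists, with kept chunks in
    their original order. Chunks are (significance, data) pairs, where
    a higher number means more important."""
    kept = list(chunks)
    dropped = []
    # Consider drop candidates from least to most significant.
    for cand in sorted(kept, key=lambda c: c[0]):
        if sum(len(c[1]) for c in kept) <= max_bytes:
            break  # the washed packet now fits
        kept.remove(cand)
        dropped.append(cand)
    return kept, dropped
```

For example, washing a 12-byte payload down to a 9-byte budget removes only the lowest-significance chunk and forwards the rest.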
[0040] FIG. 3 illustrates a general data format and FIG. 4 a flowchart of a method, respectively, for the qualitative services data transmission framework described herein. FIG. 3 illustrates a generic IP packet 302, relative to a qualitative services IP packet 300. While the technology will be described with respect to its use based on an Internet protocol (IP) packet structure, it should be understood that other transport formats may be modified in accordance with the teachings herein.
[0041] A qualitative services IP packet 300 includes a Qualitative Service (QS) header and a data payload. The QS header includes an indication that the packet supports qualitative services. The QS header is used to identify the payload structure (the “data” in FIG. 3) as comprising logical chunks. In one embodiment, this comprises using a QS bit in the QS header. Each chunk of data (Data Chunk 1, Data Chunk 2, Data Chunk 3... Data Chunk N) is identified in the QS header. In one embodiment, positions of the data chunks are identified by offsets (in the priority level (PL) offsets field) specified in the QS header.
[0042] The QS header may include a checksum or cyclic redundancy check (CRC) for different chunks so that the integrity of the packet can still be verified even after a QS packet drop operation. In this technology, packet-level checks may be disabled and replaced by chunk-level checks.
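A chunk-level integrity check of this kind might be sketched as follows, using CRC-32 via Python's `zlib`; the helper names are illustrative, and the actual CRC algorithm and header placement are left open by the text:

```python
import zlib

def add_chunk_crcs(chunks):
    """Attach a CRC-32 to each chunk, replacing a packet-level checksum."""
    return [(data, zlib.crc32(data)) for data in chunks]

def verify_chunks(chunks_with_crc):
    """Keep only the chunks whose stored CRC still matches, so a packet
    that has been washed (or corrupted) can be verified chunk by chunk."""
    return [data for data, crc in chunks_with_crc if zlib.crc32(data) == crc]
```

Because each chunk carries its own CRC, dropping some chunks at an intermediate node does not invalidate the integrity check of the chunks that remain.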
[0043] As described below, each chunk has a significance-factor associated with it. In the embodiment of FIG. 3, three priority levels are shown: High, Medium, and Low. While three priority levels are illustrated, it should be recognized that any number of priority levels may be utilized in accordance with the technology. The significance factor is assigned by the transmitting application on a sender network enabled device. The QS header indicates the relative data priority or significance assigned by the data transmitting application in the PL offsets field. For example, the QS header can indicate different significance or priority levels of each chunk in the payload, and how to identify the payload chunks associated with these priority levels (i.e. by using an offset). In one embodiment, to identify the chunks of data, the QS header specifies a specific offset for each chunk. Alternatively, it may refer to a known vector of offsets. While using offsets is one manner in which the chunks in the data payload may be identified, any suitable means of communicating the location and size of data chunks in the data of packet 300 are suitable for use in accordance with the present technology.
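One hypothetical byte layout for communicating chunk locations by offset is sketched below. The field widths and table ordering are invented for illustration and do not reproduce the QS header format discussed later with respect to FIG. 8:

```python
import struct

def encode_qs_packet(chunks):
    """Pack chunks into a payload preceded by a header table.
    Illustrative layout: 1-byte chunk count, then one
    (offset, length, significance) entry per chunk, then the payload."""
    header = struct.pack("!B", len(chunks))
    payload = b""
    for data, sig in chunks:
        header += struct.pack("!HHB", len(payload), len(data), sig)
        payload += data
    return header + payload

def decode_qs_packet(packet):
    """Recover (data, significance) pairs using the offset table."""
    (count,) = struct.unpack_from("!B", packet)
    table_end = 1 + 5 * count  # each table entry is 5 bytes
    chunks = []
    for i in range(count):
        off, length, sig = struct.unpack_from("!HHB", packet, 1 + 5 * i)
        chunks.append((packet[table_end + off:table_end + off + length], sig))
    return chunks
```

A round trip through encode and decode preserves both the chunk boundaries and the significance factors, which is the property the offset-based identification relies on.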
[0044] The significance information may be associated with each chunk and may be used by the QS routing engines 155 in each of the network nodes 106-1 through 106-3 to manage traffic. In one embodiment (termed “packet wash” herein), lower significance chunks may be dropped by a network node when a network condition, such as congestion, is detected. In one embodiment, each QS routing engine 155 may drop lower priority chunks in low priority packets first (until low priority packets are entirely dropped), then lower priority chunks in medium priority packets, and so on, depending on the priority of the chunk and that of the packet, as described below. While three priority levels are shown, the number of priority levels may be based on the network and application. For instance, in a data center, it is sometimes beneficial to cut the payload and forward only the header, because shallow buffers are used to speed up communications. In networks where buffers fill up more slowly, more priority levels can be supported.
[0045] As discussed below with respect to an implementation of the header using BPP protocol in FIG. 9, the QS header can define the significance in terms of a significance function that assigns the different significance or priority to each chunk of data. This could be explicit (as in the High/Medium/Low shown in FIG. 3, where the significance is embedded in one of three levels) or implicit.
FIG. 4 illustrates a method performed by the application (i.e., application 114) in conjunction with, for example, a QS engine 125, to ready data for transmission via a network environment 100. At 410, an application on a network enabled device (108, 110, 112) outputs a data stream organized initially as IP packets and identifies, within the stream, a plurality of chunks, each comprising a subset of the data in a packet, along with an indication of the location and the significance or priority of the chunks of data in the data stream. Each IP packet may have a priority set by, for example, a type of service (TOS) precedence field or Differentiated Services Code Point (DSCP) field, as defined by the application. At 420, each QS engine 125 calculates the number of chunks which can be forwarded per packet based on the data provided by the application and communicates the packet organization to the application. Alternatively, data in each packet can be pre-marked by an application prior to chunk calculation. At 430, chunks are assigned to each packet by, for example, the QS engine 125. At 440, the significance or priority information for the chunks assigned to each packet is encoded into the QS header of each packet, and a packet priority is assigned per the IP protocol used to transmit the packets.
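Steps 410 through 440 can be sketched as a sender-side routine. The chunk size, the per-packet chunk limit, and the `priority_of` callback are all assumptions for illustration; in practice the QS engine would derive the limit from MTU and negotiate it with the application as described above.

```python
CHUNK_SIZE = 4          # assumed chunk size in bytes
CHUNKS_PER_PACKET = 3   # assumed limit computed by the QS engine (step 420)

def build_packets(data, priority_of):
    """priority_of(i) is the application-assigned significance of chunk i."""
    chunks = [data[i:i + CHUNK_SIZE] for i in range(0, len(data), CHUNK_SIZE)]
    packets = []
    for start in range(0, len(chunks), CHUNKS_PER_PACKET):
        group = chunks[start:start + CHUNKS_PER_PACKET]   # step 430
        # Step 440: encode per-chunk (offset, priority) into the QS header.
        qs_header = [(j * CHUNK_SIZE, priority_of(start + j))
                     for j in range(len(group))]
        packets.append({"qs_header": qs_header, "payload": b"".join(group)})
    return packets

# 24 bytes -> 6 chunks -> 2 packets, priorities cycling High/Medium/Low (0/1/2).
pkts = build_packets(b"ABCDEFGHIJKLMNOPQRSTUVWX", priority_of=lambda i: i % 3)
```

Each resulting packet carries enough header metadata for a downstream node to wash chunks without inspecting the payload.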
FIGs. 5 and 6 illustrate one form of qualitative service comprising a “packet wash” service which addresses a network issue such as network congestion. Packet wash is a packet scrubbing operation that reduces the size of a packet while retaining as much information as possible. It operates by dropping lower-priority chunks from the data payload according to the information carried in the QS header, which helps the network node acting as a data forwarder understand the significance of (or the relationship between) the chunks. Packet wash makes it less likely that packets are dropped entirely, by instead dropping portions (chunks) of the packets, and chunks having a higher priority are less likely to be dropped than those with lower or normal priorities. The dropped chunks might not be recovered, but the chunk information which remains may still be usable at the receiver device. Using the information encoded into each QS header, under adverse network conditions such as resource congestion, lower priority (or less significant) chunks may be dropped.
Each node 106 selectively drops parts of the packet payload to reduce packet size and alleviate congestion while forwarding the remainder of the packet to its destination. Each network node 106 understands the significance or relationship of the chunks and accordingly makes decisions to drop chunk(s) based on the current situation, such as the congestion level, the priority carried in the packet, etc. Chunks with higher significance are less likely to be dropped when qualitative services are applied in the network. As an example, for video streaming, the sender could rearrange the bits in the payload such that the first consecutive chunks contain the base layer, while the subsequent chunks contain the enhancement layers. Thus, in case of congestion, a forwarding node can intentionally remove as many of the chunks containing enhancement layers as necessary. In one embodiment, to enable the packet wash technique described herein, the meta-data in the header, denoted as qualitative entropy (Q-entropy), is used to alter the payload. In this embodiment, the packet QS header of FIG. 3 may specify: (1) a function through which network nodes treat a packet; (2) a chunk-dependent significance parameter understood by this function; (3) the threshold beyond which a packet cannot be further degraded; and (4) the network condition under which it is to be treated. This collectively defines Q-entropy. A washed packet results from the application of Q-entropy; i.e., for any packet p, if the washed packet is p′ and Qf is the Q-entropy function, then p′ = Qf(p). The operation Qf(·) can be applied repeatedly until the washed packet reaches the threshold at which it cannot be further degraded, p′ = lim_x Qf^x(p), where x represents the number of operations at successive forwarding nodes. The function is applied when a condition (such as network congestion) is met and the degradation threshold T has not yet been reached.
If the threshold T has been reached, the payload cannot be further reduced because doing so would render it unusable.
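The repeated application p′ = Qf(p) across successive congested hops can be sketched as below. Here Qf is assumed to drop the single lowest-significance chunk per application, and the threshold T is modeled as a minimum surviving chunk count; both are illustrative choices, since the text leaves the function and threshold to the header.

```python
def wash_once(chunks):
    """One application of Qf: drop the single lowest-significance chunk,
    preserving the order of the survivors. chunks is [(significance, data)]."""
    lowest = min(range(len(chunks)), key=lambda i: chunks[i][0])
    return chunks[:lowest] + chunks[lowest + 1:]

packet = [(3, b"hi"), (1, b"lo1"), (2, b"med"), (1, b"lo2")]
THRESHOLD = 2  # assumed T: below two chunks the payload is unusable

# Successive congested forwarding nodes each apply Qf once; the wash stops
# once further degradation would cross the threshold T.
for hop_congested in (True, True, True, True):
    if hop_congested and len(packet) > THRESHOLD:
        packet = wash_once(packet)
```

After two congested hops the two low-significance chunks are gone and the packet can no longer be degraded, matching the lim_x behavior described above.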
FIG. 5 is a flowchart illustrating a process which may be performed at any of network nodes 106-1, 106-2, 106-3 by the QS routing engine 155. At 510, a packet including, for example, a QS header and data chunks is received at a network node. Typically, more than one packet will be received at each node and buffered in an input buffer. The input buffer may be a dedicated hardware buffer or a dedicated portion of memory 222 of FIG. 2. At 515, an initial determination is made as to whether there is an issue with the network. In one embodiment, application of a packet wash to a data stream is conditioned upon an (adverse) issue or condition existing in the network. Such an issue may comprise a limitation on bandwidth in the network or network congestion which impedes the network’s ability to achieve maximum throughput. If there is no issue with the network at 515, once a check that all chunks have been received has passed at 540, then optionally at 545, an acknowledgment of the packet may be returned to the sender (to maintain operational equivalence with existing TCP/IP standards), and at 550 the packet is passed on to its next routing hop. Alternatively, no acknowledgment that the packet was received need be provided, and in yet another alternative, such an acknowledgment need only be sent when the packet is not received, as described below at 590.
If there is an issue with the network at 515, a check is made at 520 to determine whether the packet is a QS packet. If not, at 522, the packet is processed in accordance with another packet transport protocol used in the network, since no information is available in the packet to perform the packet wash. If there is an issue with the network at 515 and the packet is a QS packet at 520, then at 525 a determination may be made as to whether to drop at least one (and possibly several) of the chunks of the lowest significance available in a lowest priority packet. Any packet which has previously had chunks removed may be at a higher packet level priority than other packets. Similarly, packets marked as having high priority (such as in the TOS or DSCP field) may also be at a higher packet level priority than other packets. Step 525 comprises an analysis of the type of adverse issue affecting the network and a calculation of how much data needs to be removed from the packets to alleviate the issue; it need not be limited to a calculation for one packet. At 525, the method first looks to the packet level priority, and then to the chunk priority within each packet at that level. For example, if the issue is network congestion, it may be necessary to drop more than one chunk in a packet, or chunks in successive packets, in order to reduce the latency of packet transfer in successive nodes. In one embodiment, as discussed below with respect to FIG. 7A, this calculation may consider multiple packets at the network node, including packets which may be present in an input queue of the network node. Once chunks are dropped at 525, at 530, a determination is made as to whether the remaining chunks of data in the packet are worth forwarding. If not, the packet is dropped at 590. Generally, no ACK is sent at 545 after the drop at 590, and the sender may determine to resend the entire packet.
Alternatively, following the TCP protocol standard, an ACK may be sent indicating receipt of the packet. However, if the time for receiving an ACK runs out, or a receiver or intermediate node sends a negative acknowledgement (NACK), the sender knows that packet loss has occurred and may then retransmit the packet. If a router drops a packet, the sender has to wait for either the timeout or the NACK before retransmitting that packet. In the present technology, each intermediate node uses packet wash as a way to avoid packet drops due to congestion, so retransmissions and wait times are avoided (unless the packet truly had to be dropped). Qualitative services thereby minimize packet retransmissions, because intermediate nodes use QS treatment to avoid dropping packets due to congestion.
Optionally, after dropping the packet at 590, an acknowledgment may be sent at 545 indicating to the sender that congestion exists, and the sender may determine to adjust its configuration of payload data, using smaller or fewer chunks in each packet. If the remaining data is worth sending at 530, then at 555, the QS routing engine 155 in the particular node 106 operating on the packet may rewrite the priority at the packet level to, for example, effectively increase the priority of the remaining lower priority chunks (for example, via a higher priority for the entire packet), so that at the next hop the packet has a greater chance of passing through without having additional chunks removed. At 560, the packet is then forwarded with the remaining chunks and the rewritten packet priority.
The method of FIG. 5 is repeated at any network node having a QS routing engine 155. As such, at the next hop for a packet which has had chunks removed, the received packet will carry less information than the original payload from the sender. Ultimately, the packet will reach its destination receiver. The Q-entropy function on a node helps determine the quality of the packet. In the qualitative services environment discussed herein, a packet sent may not always be the same as the packet received. Therefore, it is important to determine whether the received packet is usable (if the received packet was treated qualitatively), and this determination is performed by calculating a Q-entropy function. The operation of the Q-entropy function itself may vary from application to application. For example, one operation (method) may be “equal-weight-trimming,” in which all the chunks are of equal value but are treated qualitatively by dropping chunks from the end of the payload. For example, if 2 of 5 chunks with equal priority (each weighted 0.2) are dropped in transit, the packet’s quality is 0.6 (1 - 0.2*2), and the Q-entropy function has a threshold value indicating that the packet is usable as long as the quality is greater than or equal to 0.5. In a second example, if chunk-priority-based trimming is used, then the chunks with lower priority are dropped. In this example, if priority is defined in terms of a chunk weight, the weight of the lowest priority chunk dropped was 0.1, and the remaining higher priority chunks were weighted 0.2, 0.2, and 0.3 (respectively), then the quality factor would be 0.7, computed as (0.2 + 0.2 + 0.3). If the Q-entropy threshold is 0.7 or lower, then the packet is usable. Thus the Q-entropy function has a threshold and uses parameters in the QS header to compute the quality.
In addition, by aggregating this over the entire flow, the overall quality of the flow is appropriately determined so that the network operators can adjust the resources, or the receiver can send feedback to the sender.
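The two quality computations in the examples above can be expressed directly. This is a sketch under the stated assumptions: equal-weight trimming subtracts a fixed per-chunk weight for each drop, and weighted trimming sums the weights of the surviving chunks.

```python
def quality_equal_weight(dropped, weight=0.2):
    """Equal-weight trimming: every chunk carries the same weight."""
    return 1.0 - weight * dropped

def quality_weighted(surviving_weights):
    """Priority-weighted trimming: quality is the sum of surviving weights."""
    return sum(surviving_weights)

q1 = quality_equal_weight(dropped=2)        # 2 of 5 equal chunks dropped
q2 = quality_weighted([0.2, 0.2, 0.3])      # the 0.1-weight chunk was dropped
usable1 = q1 >= 0.5   # example Q-entropy threshold for the equal-weight case
usable2 = q2 >= 0.7   # example Q-entropy threshold for the weighted case
```

Aggregating these per-packet quality values over a flow gives the flow-level quality measure mentioned above.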
In this calculation, the priority of a chunk does not change. Hence, with qualitative services there are two levels of priority: 1) the traditional packet level (IP type of service or Differentiated Services Code Point (DSCP) service) and 2) the priority per chunk. The priorities at the chunk level are never changed during packet transport in these embodiments. However, when an intermediate node drops a chunk, it may decide to increase the packet-level priority to provide fairness in scheduling. Similarly, in other embodiments the chunk priority could be changed. For example, in some embodiments chunk priority can be adjusted instead of packet priority, and chunk priority among multiple packets can optionally be used to determine which chunks should be dropped.
FIG. 6 graphically illustrates the effect of the process of FIG. 5. Illustrated in FIG. 6 is the transmission of a packet N through different network nodes 106-1, 106-2, 106-3. When packet N arrives at network node 106-1 (at 610a), it arrives with 12 chunks of data 625. A first subset of chunks 615 in packet N may be classified as high priority. A second subset of chunks 620 is classified as medium priority data. A third subset of chunks 635 is classified as low priority data. As noted above, the classification of the chunks of data in the packet is performed by the applications 114 on a network device.
As packet N flows from network node 106-1 to network node 106-2, in the example of FIG. 6, a chunk of the least significant data 635 is removed by router 106-1 due to network congestion or some other network issue. Hence, only 3 of the 4 original lower priority chunks 635’ arrive at node 106-2. When packet N arrives at node 106-2, the packet N which was previously classified “best effort” at node 106-1 has been reclassified as medium priority. Note that only one chunk has been removed by node 106-1. Although in this example only one chunk is removed, multiple chunks, and chunks at different priorities (for example, all the lowest priority and some of the medium priority chunks), may be removed at node 106-1. If node 106-2 detects the condition (for example, network congestion) and decides to remove chunks to address the adverse condition, then node 106-2 may remove one or more additional chunks from packet N. In the example of FIG. 6, it is assumed that there are no additional packets (subsequently sent packets N+1, N+2, etc.) enqueued at node 106-2 having lower priority chunks which would be removed before any additional medium priority chunks are removed from packet N.
In the example shown in FIG. 6, one additional chunk is removed from packet N at node 106-2, resulting in two remaining chunks in lower priority chunk block 635’ arriving at node 106-3. Note that the decision to remove additional chunks may depend upon whether or not there is value in removing such chunks and forwarding the remaining chunks in the packet, as described at step 530 and as detailed below with respect to FIG. 7A. When packet N arrives at router 106-3, packet N has been reclassified as high priority. To the extent that packet N is routed further, its higher overall priority protects it against additional chunks being removed. It should be recognized that while the example of FIG. 6 illustrates one chunk being removed at each node, any number of chunks can be removed from a packet as it traverses the network system.
Thus, the qualitative packet wash technology performs selective trimming of a payload, from less significant to more significant chunks. Accordingly, each network forwarding node decides which packet to trim and, for that packet, which chunk(s) to trim. Until the network conditions improve, the receivers receive lower-quality streams. This may become undesirable over a period of time. In order to ease the network load and reduce congestion, the receiver can check the modified header of a washed packet and trigger adaptive congestion control by notifying the sender about the level of congestion in the network. As noted with respect to FIG. 5, it can send an acknowledgment with a quality-of-packet value, i.e., an indication of the number of chunks that were dropped, which the sender uses to alter its transmission rate in order to avoid any further drops. Once drops in the network have been prevented, a stable rate is achieved. The sender can then choose to gradually increase the rate by adding chunks and determine whether the network can deliver at this rate without any loss. Thus, based on the qualitative feedback from the receiver, the sender adapts to the data rates the network can support. This is different from traditional transport mechanisms, where a sender waits for packet loss and requires a re-transmission of an entire packet. In the present technology, the sender need not wait for a determination of entire packet loss, nor does it require notification to retransmit all of the data. Thus, adaptive rate control utilizes network resources more effectively. Moreover, it significantly reduces data delivery delay by partially delivering the packets as well as dynamically managing packet sizes, which is critically important in emerging real-time applications.
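The back-off-then-probe behavior described above can be sketched as a small control loop. The policy here (shrink the payload by the number of washed chunks, then add one chunk back per loss-free report) is an assumed illustration; the text does not prescribe a specific adjustment rule.

```python
def adapt_chunks_per_packet(current, dropped_chunks):
    """Receiver-feedback rate adaptation (hypothetical policy): back off
    while the network is washing chunks, then probe upward once it stops."""
    if dropped_chunks > 0:
        return max(1, current - dropped_chunks)  # shrink toward a stable rate
    return current + 1                           # probe: add one chunk back

rate = 8
# Dropped-chunk counts reported by the receiver over four feedback rounds.
for feedback in (2, 1, 0, 0):
    rate = adapt_chunks_per_packet(rate, feedback)
```

The sender converges to a rate the network can carry without drops, then cautiously grows the payload again, without ever waiting for a full-packet loss signal.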
This use of qualitative services improves network efficiency and fairness among users. In particular, if a packet that has been qualitatively treated already (for example, has had chunks removed already) is in contention for buffer space with a packet that has not been trimmed yet (all other priorities being equal), the forwarding network node should trim the intact packet as in the case of nodes 106-1 and 106-2 in the example shown in FIG. 6. Moreover, since a packet that has had its payload reduced is more congestion-friendly, the forwarding node may give it a higher priority (such as that which occurs in the nodes 106-2 and 106-3 of FIG. 6).
In other embodiments, the packet wash chunk trimming operation need not be limited to tail (packet-end) drops; a chunk may be removed from anywhere within the payload, since this provides higher flexibility for applications to categorize significance. Tail drops of chunks allow the forwarding network nodes to lower overhead because the amount of buffer shifting is minimized. Therefore, the trimming approach chosen by the applications 114 should consider the trade-off between performance and flexibility. Nevertheless, in some embodiments the chunks can be arranged in order of priority, such that tail drops follow the desired priority.
FIG. 7 is a flowchart illustrating one embodiment of step 525, which determines whether to remove chunks from a given packet. At 705, a determination is made as to the total number of chunks which need to be dropped in order to alleviate the adverse network condition. The total number of chunks may be derived from the total number of bytes, over a number of packets, which need to be removed. Each network node may have enqueued a number of additional packets which are waiting to be processed. Each of these packets may likewise have a priority based on whether or not chunks have been removed from it. At 710, for each enqueued packet, a determination of the priority of the packet (relative to other enqueued packets) is made, and for each lowest priority packet, the significance of the remaining chunks is determined. At 720, the method determines whether there are other packets in the queue having lower packet level priority with lower priority chunks. If so, then at 730 the total number of chunks is removed from the lowest significance chunks of the lowest priority packets in the input queues. If not, then the lowest priority chunks in the current packet are selected for removal at 725.
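The queue-wide selection of steps 705 through 730 can be sketched as follows. The dictionary packet layout and the lexicographic (packet priority, chunk significance) ordering are illustrative assumptions that capture the rule of draining the lowest-priority packets' lowest-significance chunks first.

```python
def select_drops(queue, n_drop):
    """Remove the n_drop lowest-significance chunks from the input queue,
    drawing from the lowest packet-priority packets first. Each packet is
    {'priority': p, 'chunks': [(significance, data), ...]}; lower numbers
    mean lower priority."""
    # Rank every chunk in the queue by (packet priority, chunk significance).
    candidates = sorted(
        (pkt["priority"], sig, pkt_idx, chunk_idx)
        for pkt_idx, pkt in enumerate(queue)
        for chunk_idx, (sig, _) in enumerate(pkt["chunks"])
    )
    victims = {(p, c) for _, _, p, c in candidates[:n_drop]}
    for pkt_idx, pkt in enumerate(queue):
        pkt["chunks"] = [ch for chunk_idx, ch in enumerate(pkt["chunks"])
                         if (pkt_idx, chunk_idx) not in victims]
    return queue

queue = [
    {"priority": 0, "chunks": [(1, b"a"), (3, b"b")]},  # untrimmed, low prio
    {"priority": 1, "chunks": [(0, b"c")]},             # already washed once
]
washed = select_drops(queue, n_drop=2)
```

Note that both drops come from the priority-0 packet even though the priority-1 packet holds a lower-significance chunk, reflecting the packet-level-first ordering of step 525.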
After either of steps 725 or 730, step 530 can be performed through a function, termed herein a Q-entropy function. Generally, step 530 applies a quality threshold beyond which a packet cannot be further degraded. The determination at 530 may vary from application to application. Step 530 determines whether, for chunks dropped in transit, the remaining quality of the chunks in a packet is above a threshold value indicating that the packet is usable. In order to implement the foregoing features of qualitative services, a packet format to support qualitative services is described with respect to FIGs. 8 and 9. A packet framework for qualitative services (and in particular packet washing) using a format referred to herein and in the literature as Big Packet Protocol (BPP) is presented. The packet framework is generally illustrated with respect to FIG. 8, with a particular implementation of the command and metadata blocks illustrated with respect to FIG. 9.
BPP is a programmable data plane technology compatible with IP networks. In BPP, one may attach meta-information or directives to packets using BPP blocks. This meta-information provides guidance to intermediate routers about processing those packets. The BPP block, shown in FIGs. 8 and 9, allows per-packet behavior for functionality such as in-band per-packet signaling facilities, per-flow path selections, and network-level operator decisions. BPP is useful for implementing qualitative services and in particular the packet washing technology described herein. A qualitative packet can be represented by a BPP contract consisting of a packet wash directive which has significance-factors as its meta-data. By doing so, network nodes remain unaware of the user payload and wash packets only as prescribed by the application(s) 114.
FIG. 8 illustrates the general structure of an Ethernet-encapsulated BPP packet. The packet 800 is encapsulated within an Ethernet frame 802, and the Ethernet frame 802 includes an indication 804 that the Ethernet type is a BPP-type protocol packet, including a BPP header 810 and data payload 850. The BPP header 810 is subdivided into an IP header (or pseudo header) and a BPP block which contains a BPP header, command block, and metadata block.
Additional details of the BPP block (command block and metadata block) are illustrated in FIG. 9. The BPP block format of FIG. 9 contains the following parameters: (1) a command “PacketWash” enabling each network node to perform the packet wash process; (2) a condition (e.g., congestion) under which packet wash is applied; (3) Qf—the Q-entropy function that defines an operation on the payload, e.g., a priority, binary, or step function; (4) Qthreshold—a threshold value beyond which chunks cannot be further dropped; and (5) information about each chunk CHi, including: (a) SIGi—a significance-factor associated with the chunk as per the function, e.g., a priority order, or a binary 0 or 1 bit; (b) Offi—an offset describing the location of the chunk in the payload; (c) CRCi—a CRC to verify the integrity of the chunk; and (d) OFi—a flag indicating whether the chunk was dropped (which helps data receivers know which chunks have been dropped in the network).
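A possible in-memory mirror of the BPP block parameters listed above is sketched below. The Python field names and defaults are assumptions; only the correspondence to the PacketWash command, condition, Qf, Qthreshold, and the per-chunk SIGi/Offi/CRCi/OFi fields comes from the text.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class ChunkMeta:
    sig: float             # SIGi: significance-factor per the Q-entropy function
    off: int               # Offi: byte offset of the chunk within the payload
    crc: int               # CRCi: integrity check for this chunk alone
    dropped: bool = False  # OFi: set by a node that washes this chunk out

@dataclass
class BPPWashBlock:
    command: str = "PacketWash"    # directive executed by each network node
    condition: str = "congestion"  # when the directive applies
    qf: str = "priority"           # Q-entropy function: priority/binary/step
    q_threshold: float = 0.5       # quality below which no further drops occur
    chunks: List[ChunkMeta] = field(default_factory=list)

block = BPPWashBlock(chunks=[ChunkMeta(sig=0.3, off=0, crc=0xDEADBEEF)])
```

Because all of this lives in the BPP block rather than the payload, a node can execute the wash directive without ever interpreting the user data.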
The design of the Packet Wash directive dictates the behavior of the packet with respect to quality and takes the following into consideration:
• conditional-directive: Not all BPP-commands are conditional, but packet wash can be a conditional directive to be applied after determining that the network state is adverse, and it is likely that the packet will not reach the receiver.
• q-entropy function: Each packet carries this function through which the network nodes understand how to operate on the payload based on the significance-factors associated with chunks.
• resource-resolution: When packets from two or more distinct flows contend for the same resource, with all else being equal, qualitatively treated packets (those having had chunks removed) could be given a higher priority.
• latency-constraint: If it is determined that a qualitative packet will arrive late at the destination even after qualitative treatment or at the cost of processing, then it can be dropped.
FIG. 10 illustrates the packet wash operation using a BPP packet. The process of FIG. 10 is performed on the network node. As a qualitative packet arrives at 1010, at 1012 a determination is made as to whether the packet under analysis is one of the lowest packet priority in the input queue. If not, the process of FIG. 10 is run on another, lower packet priority packet at 1015. At 1020, the QS routing engine 155 extracts the packet wash command and checks for the condition. A condition such as congestion can be determined by checking, for example, whether an egress queue of a network node 106 is above a threshold, such as being equal to or greater than 90% full. If at 1020 the condition is not true, the packet is treated in accordance with the packet transport protocol in use at 1030. Generally, at 1030, the packet is forwarded. If at 1020 the condition is true, then at 1040 a determination is made as to whether or not the packet which has arrived is in fact a QS packet. If not, the packet is treated in accordance with the packet transport protocol at 1030, and in such case the packet may be dropped. If, at 1040, the packet is a QS packet, at 1060 the QS routing engine 155 applies the function in the Q-entropy to the parameters of each chunk in the payload. For example, if the function is binary, the parameters have value 0 or 1. The output gives the chunk offset(s) to be dropped; the QS routing engine 155 drops the corresponding bytes at the offsets of the resulting (to-be-dropped) chunks and marks them dropped in the header (OF field). At 1070, a determination is made as to whether the degradation of the packet exceeds the qualitative threshold. If yes, then the packet is dropped at 1050. If the degraded packet does not exceed the qualitative threshold, then at 1080, a determination is made as to whether the packet will still arrive late at its destination. If not, the QS routing engine forwards it to the next network node.
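The congestion condition checked at 1020 can be sketched as a simple egress-queue occupancy test. The 90% figure comes from the example above; making it a parameter is an assumption for illustration.

```python
def congested(egress_queue_len, capacity, threshold=0.9):
    """Condition check from FIG. 10: congestion is assumed when the egress
    queue is at or above 90% full (a configurable example threshold)."""
    return egress_queue_len / capacity >= threshold
```

Only when this returns True does the node proceed to apply the Q-entropy function to the chunk parameters; otherwise the packet is simply forwarded at 1030.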
The qualitative service functions in the present technology ensure that packets marked with higher priority are scheduled earlier than those with lower or normal priorities. Consequently, under adverse network conditions such as resource congestion, the lower priority packets or chunks may be dropped. In current schemes, by contrast, re-transmission of packets can waste network resources, reduce the overall throughput, and cause both longer and unpredictable delay in packet delivery. Not only does the re-transmitted packet have to travel part of the routing path twice, but the sender would not realize the packet has been dropped until a timeout or negative acknowledgement happens, which also extends the waiting time at the sender side before the re-transmission is initiated. The current approach of handling packet error or network congestion, which discards the packet entirely, is not effective.
[0046] FIG. 11 illustrates a schematic diagram of a general-purpose network component or computer system. The general-purpose network component or computer system 1100 includes a processor 1102 (which may be referred to as a central processor unit or CPU) that is in communication with memory devices including secondary storage 1104 and memory, such as ROM 1106 and RAM 1108, input/output (I/O) devices 1110, and a network 1112, such as the Internet or any other well-known type of network, that may include network connectivity devices, such as a network interface. Although illustrated as a single processor, the processor 1102 is not so limited and may comprise multiple processors. The processor 1102 may be implemented as one or more CPU chips, cores (e.g., a multi-core processor), FPGAs, ASICs, and/or DSPs, and/or may be part of one or more ASICs. The processor 1102 may be configured to implement any of the schemes described herein. The processor 1102 may be implemented using hardware, software, or both.
[0047] The secondary storage 1104 is typically comprised of one or more disk drives or tape drives and is used for non-volatile storage of data and as an overflow data storage device if the RAM 1108 is not large enough to hold all working data. The secondary storage 1104 may be used to store programs that are loaded into the RAM 1108 when such programs are selected for execution. The ROM 1106 is used to store instructions and perhaps data that are read during program execution. The ROM 1106 is a non-volatile memory device that typically has a small memory capacity relative to the larger memory capacity of the secondary storage 1104. The RAM 1108 is used to store volatile data and perhaps to store instructions. Access to both the ROM 1106 and the RAM 1108 is typically faster than to the secondary storage 1104. At least one of the secondary storage 1104 or RAM 1108 may be configured to store routing tables, forwarding tables, or other tables or information disclosed herein.
[0048] It is understood that by programming and/or loading executable instructions onto the node 1100, at least one of the processor 1102, the ROM 1106, and the RAM 1108 are changed, transforming the node 1100 in part into a particular machine or apparatus, e.g., a router, having the novel functionality taught by the present disclosure. It is fundamental to the electrical engineering and software engineering arts that functionality that can be implemented by loading executable software into a computer can be converted to a hardware implementation by well-known design rules. Decisions between implementing a concept in software versus hardware typically hinge on considerations of stability of the design and the number of units to be produced, rather than any issues involved in translating from the software domain to the hardware domain. Generally, a design that is still subject to frequent change may be preferred to be implemented in software, because re-spinning a hardware implementation is more expensive than re-spinning a software design. Generally, a design that is stable and that will be produced in large volume may be preferred to be implemented in hardware, for example in an ASIC, because for large production runs the hardware implementation may be less expensive than the software implementation. Often a design may be developed and tested in a software form and later transformed, by well-known design rules, to an equivalent hardware implementation in an application specific integrated circuit that hardwires the instructions of the software.
In the same manner as a machine controlled by a new ASIC is a particular machine or apparatus, likewise a computer that has been programmed and/or loaded with executable instructions may be viewed as a particular machine or apparatus.
[0049] The qualitative service as a native feature of the networks has the following benefits: (a) a packet re-transmission may not be needed if the receiver has the capability to comprehend what is left in the packet after removal of certain chunks from the payload by the intermediate network nodes, and the receiver can recover as much information as needed. In this case, the receiver can acknowledge acceptance of the packet, while also indicating to the sender that it was partially dropped in the network. Network resource usage can be tremendously reduced and better prioritized for the delivery of other packets; and (b) the latency of packet delivery can be significantly reduced due to the absence of re-transmissions. Some of the information contained in the original packet can be recovered by the receiving node, as long as the recovery algorithms or methods are agreed upon and known in advance by the sender, the forwarding nodes, and the receiver. The algorithms and methods can be carried along with the packet, such that they can be detected and executed by the intermediate network nodes and revealed to the receiver, which can carry out the reverse operation to recover some or all of the information contained in the packet.
[0050] In a further aspect, the technology includes a means of controlling data flows in a network. The means for controlling includes means (106) for receiving a data packet (300) having a qualitative service header and a data payload, the header defining a plurality of chunks (CH) of data in the payload, each chunk being a sub-set of the payload, the header identifying a relationship between each of the plurality of chunks in the payload. The means for controlling includes means for determining (155) that an adverse network condition impedes data flows in the network and for altering one or more of the plurality of chunks in the data packet to address the adverse network condition based on the relationship identified in the header. The means for determining further includes means for dropping one or more chunks from the data packet based on the relationship, wherein the one or more chunks dropped is a minimum number of chunks needed to address the adverse network condition.
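The chunk-altering behavior described in paragraphs [0049]–[0050] can be sketched in code. This is an illustrative model only, under the assumption that each chunk carries a significance value, a payload offset, and a dropped flag, and that the header carries a threshold below which chunks cannot be further dropped; the names `Chunk`, `Packet`, and `wash` are hypothetical and not the on-wire format defined by this disclosure.

```python
# Hypothetical sketch of an intermediate node dropping the minimum number of
# least-significant chunks needed to address an adverse network condition.
from dataclasses import dataclass, field
from typing import List


@dataclass
class Chunk:
    significance: int      # lower value = less important (assumed convention)
    offset: int            # location of the chunk in the payload
    data: bytes
    dropped: bool = False  # flag recording that the chunk was removed


@dataclass
class Packet:
    threshold: int               # minimum number of chunks that must survive
    chunks: List[Chunk] = field(default_factory=list)


def wash(packet: Packet, excess_bytes: int) -> int:
    """Drop least-significant chunks until at least `excess_bytes` have been
    shed from the payload, never dropping below the packet's threshold.
    Returns the number of bytes actually freed."""
    freed = 0
    # Consider the least-significant chunks first.
    for chunk in sorted(packet.chunks, key=lambda c: c.significance):
        if freed >= excess_bytes:
            break
        remaining = sum(1 for c in packet.chunks if not c.dropped)
        if remaining <= packet.threshold:
            break  # threshold reached: chunks cannot be further dropped
        freed += len(chunk.data)
        chunk.data = b""
        chunk.dropped = True   # record the drop so the header can be updated
    return freed
```

A node applying this sketch would then update the packet header to indicate that chunks were dropped, so the receiver (and optionally the sender) can detect the partial delivery, as described above.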
[0051] The technology described herein can be implemented using hardware, firmware, software, or a combination of these. The software or firmware used can be stored on one or more processor readable storage devices to program one or more of the blocks of FIG. 2 to perform the functions described herein. The processor readable storage devices can include computer readable media such as volatile and non-volatile media, removable and non-removable media. By way of example, and not limitation, computer readable media may comprise computer readable storage media and communication media. Computer readable storage media may be implemented in any method or technology for storage of information such as computer readable instructions, data structures, program modules or other data. Examples of computer readable storage media include RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital versatile disks (DVD) or other optical disk storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to store the desired information, and which can be accessed by the components described above. A computer readable medium or media does (do) not include propagated, modulated or transitory signals.
[0052] Communication media typically embodies computer readable instructions, data structures, program modules or other data in a propagated, modulated or transitory data signal such as a carrier wave or other transport mechanism and includes any information delivery media. The term "modulated data signal" means a signal that has one or more of its characteristics set or changed in such a manner as to encode information in the signal. By way of example, and not limitation, communication media includes wired media such as a wired network or direct-wired connection, and wireless media such as RF and other wireless media. Combinations of any of the above are also included within the scope of computer readable media.
[0053] In alternative embodiments, some or all of the software or firmware can be replaced by dedicated hardware logic components. For example, and without limitation, illustrative types of hardware logic components that can be used include Field-programmable Gate Arrays (FPGAs), Application-specific Integrated Circuits (ASICs), Application-specific Standard Products (ASSPs), System-on-a-chip systems (SOCs), Complex Programmable Logic Devices (CPLDs), special purpose computers, etc. In one embodiment, software (stored on a storage device) implementing one or more embodiments is used to program one or more processors. The one or more processors can be in communication with one or more computer readable media/storage devices, peripherals and/or communication interfaces.
[0054] It is understood that the present subject matter may be embodied in many different forms and should not be construed as being limited to the embodiments set forth herein. Rather, these embodiments are provided so that this subject matter will be thorough and complete and will fully convey the disclosure to those skilled in the art. Indeed, the subject matter is intended to cover alternatives, modifications and equivalents of these embodiments, which are included within the scope and spirit of the subject matter as defined by the appended claims. Furthermore, in the following detailed description of the present subject matter, numerous specific details are set forth in order to provide a thorough understanding of the present subject matter. However, it will be clear to those of ordinary skill in the art that the present subject matter may be practiced without such specific details.
[0055] Aspects of the present disclosure are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatuses (systems) and computer program products according to embodiments of the disclosure. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable instruction execution apparatus, create a mechanism for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks.
[0056] The description of the present disclosure has been presented for purposes of illustration and description but is not intended to be exhaustive or limited to the disclosure in the form disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the disclosure. The aspects of the disclosure herein were chosen and described in order to best explain the principles of the disclosure and the practical application, and to enable others of ordinary skill in the art to understand the disclosure with various modifications as are suited to the particular use contemplated.
[0057] For purposes of this document, each process associated with the disclosed technology may be performed continuously and by one or more computing devices. Each step in a process may be performed by the same or different computing devices as those used in other steps, and each step need not necessarily be performed by a single computing device.
[0058] Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are disclosed as example forms of implementing the claims.

Claims

CLAIMS

What is claimed is:
1. A method of controlling data flows in a network, comprising:
receiving a data packet having a qualitative service header and a data payload, the header defining a plurality of chunks of data in the payload, each chunk being a sub-set of the payload, the header identifying a relationship between each of the plurality of chunks in the payload;
determining that an adverse network condition impedes data flows in the network; and
altering one or more of the plurality of chunks in the data packet to address the adverse network condition based on the relationship identified in the header.
2. The method of claim 1 wherein the relationship comprises an assigned significance to each chunk relative to other chunks in the payload.
3. The method of any of claims 1 through 2 wherein the relationship is a priority or other measure of significance between the chunks.
4. The method of any of claims 1 through 3 wherein the altering comprises dropping one or more chunks from the data packet based on the relationship.
5. The method of claim 4 wherein the relationship is a priority and the method further comprising increasing a priority of any packet in which one or more chunks have been dropped.
6. The method of any of claims 1 through 4 wherein the method comprises reading a header in the data packet, the header including control information and metadata, the control information selectively enabling the determining and altering steps.
7. The method of claim 6 wherein the control information comprises a definition of the adverse network condition and a command to perform the dropping of the one or more chunks based on the relationship identified in the header and the definition of the adverse network condition being true.
8. The method of any of claims 6 through 7 wherein the header comprises:
a command to implement the determining and altering;
a condition defining when to implement the command;
a function that defines a relationship operation on the payload; and
a threshold value beyond which the chunks cannot be further dropped.
9. The method of any of claims 6 through 8 wherein the header comprises:
a significance factor associated with the chunk as per the function;
an indicator of a location of the chunk in the payload;
a CRC for each chunk configured to verify an integrity of the chunk; and
a flag to determine if the chunk was dropped.
10. The method of claim 8 wherein the function is a Q-entropy function that when calculated determines a quality of a packet based on the number of chunks received.
11. The method of any of claims 1 through 10 wherein the data packet comprises a header comprising control and metadata, and the method comprises updating the header in the received packet to indicate that chunks are being dropped from the packet.
12. The method of any of claims 1 through 11 wherein the method further comprises:
notifying a sender of the plurality of chunks of data in the payload that chunks are being dropped from the payload, and in response to the notifying, receiving any chunks which were not previously received.
13. A network node apparatus, comprising:
a non-transitory memory storage comprising instructions; and
one or more processors in communication with the memory, wherein the one or more processors execute the instructions to:
receive a plurality of data packets, each data packet having a qualitative service header and a data payload, each header defining a plurality of chunks of data in the payload, each chunk being a sub-set of the payload, the header identifying a relationship between each of the plurality of chunks in each data payload;
determine that an adverse network condition impedes data flows in the network; and
drop one or more of the plurality of chunks in the data packet to address the adverse network condition based on the relationship identified in the header.
14. The network node apparatus of claim 13 wherein the relationship comprises assigning a data significance to each chunk relative to other chunks in the payload including one of:
a priority between the chunks; or
an entropy function indicating the quality of the data in each packet.
15. The network node apparatus of any of claims 13 - 14 wherein the data packet comprises a header including control information and metadata, the control information selectively enabling the one or more processors to determine and drop.
16. The network node apparatus of claim 15 wherein the header comprises:
a command configured to instruct the processor to implement the determine and drop;
a condition configured to enable when the processor should implement the command;
a function that defines a relationship operation for the processor on the data payload; and
a threshold value beyond which the chunks cannot be further dropped.
17. The network node apparatus of any of claims 15 - 16 wherein the header comprises:
a significance factor associated with the chunk per the function;
an indicator of a location of the chunk in the payload;
a CRC for each chunk configured to allow the processor to verify an integrity of the chunk; and
a flag configured to allow the processor to determine if the chunk was dropped.
18. The network node apparatus of claim 17 wherein the function is a Q-entropy function that when calculated determines a quality of a packet based on a number of chunks received.
19. The network node apparatus of any of claims 13 - 18 wherein the relationship comprises an assigned significance to each chunk relative to other chunks in the payload.
20. The network node apparatus of any of claims 13 - 19 wherein the relationship comprises a priority or other measure of significance between the chunks.
21. The network node apparatus of any of claims 13 - 20 wherein the altering comprises dropping one or more chunks from the data packet based on the relationship.
22. The network node apparatus of any of claims 13 - 20, wherein the relationship is a priority, and wherein the one or more processors further execute the instructions to increase a priority of any packet in which one or more chunks have been dropped.
23. The network node apparatus of any of claims 13 - 20, wherein the one or more processors execute instructions to perform the step of reading a header in the data packet, the header including control information and metadata, the control information selectively enabling the determining and altering steps.
24. The network node apparatus of claim 23 wherein the control information comprises a definition of the adverse network condition and a command to perform the dropping of the one or more chunks based on the relationship identified in the header and the definition of the adverse network condition being true.
25. The network node apparatus of claim 24 wherein the function is a Q-entropy function that when calculated determines a quality of a packet based on the number of chunks received.
26. The network node apparatus of any of claims 13-25 wherein the data packet comprises a header comprising control and metadata, and the method comprises updating the header in the received packet to indicate that chunks are being dropped from the packet.
27. The network node apparatus of any of claims 13 - 26 wherein the method further comprises:
notifying a sender of the plurality of chunks of data in the payload that chunks are being dropped from the payload, and in response to the notifying, receiving any chunks which were not previously received.
28. A non-transitory computer-readable medium storing computer instructions for controlling data in a network, that when executed by one or more processors, cause the one or more processors to perform the steps of:
receiving a plurality of data packets at an intermediate network node, the data packets including application data output by an application, the plurality of data packets having a qualitative service header and a data payload, the qualitative service header defining a plurality of chunks of data in the data payload of each packet, each chunk being a sub-set of the data in the data payload, the header identifying a relationship between each of the plurality of chunks in the payload and indicating whether chunks in the payload contain all the application data output by the application;
determining that an adverse network condition impedes data flows in the network; and
altering one or more of the plurality of chunks in one or more of the plurality of packets to address the adverse network condition based on at least one of the relationship between each of the plurality of chunks in the payload and the header indicating that chunks in the payload contain less than all the application data output by the application.
29. The non-transitory computer-readable medium of claim 28 wherein the relationship comprises assigning a significance to each chunk relative to other chunks in the payload including one of:
a priority between the chunks; or
an entropy function indicating the quality of the data in each packet.
30. The non-transitory computer-readable medium of any of claims 28 - 29 wherein each data packet comprises a header and wherein the header comprises:
a command configured to instruct the processor to implement the determine and drop;
a condition configured to enable when the processor should implement the command;
a function that defines a relationship operation for the processor on the data payload; and
a threshold value beyond which the chunks cannot be further dropped.
31. The non-transitory computer-readable medium of any of claims 28 - 30 wherein the data packet comprises a header and wherein the header comprises:
a significance factor associated with the chunk per the function;
an indicator of a location of the chunk in the payload;
a CRC configured to allow the processor to verify an integrity of the chunk; and
a flag configured to allow the processor to determine if the chunk was dropped.
32. The non-transitory computer-readable medium of any of claims 28 - 31 wherein the relationship comprises an assigned significance to each chunk relative to other chunks in the payload.
33. The non-transitory computer-readable medium of any of claims 28 - 32 wherein the relationship is a priority or other measure of significance between the chunks.
34. The non-transitory computer-readable medium of any of claims 28 - 33 wherein the altering comprises dropping one or more chunks from the data packet based on the relationship.
35. The non-transitory computer-readable medium of any of claims 28 - 34 wherein the computer instructions cause the one or more processors to perform the step of increasing a priority of any packet in which one or more chunks have been dropped.
36. The non-transitory computer-readable medium of any of claims 28 - 35 wherein the computer instructions cause the one or more processors to perform the step of reading a header in the data packet, the header including control information and metadata, the control information selectively enabling the determining and altering steps.
37. The non-transitory computer-readable medium of any of claims 28 - 36 wherein the control information comprises a definition of the adverse network condition and a command to perform the dropping of the one or more chunks based on the relationship identified in the header and the definition of the adverse network condition being true.
38. The non-transitory computer-readable medium of any of claims 28 - 37 wherein the function is a Q-entropy function that when calculated determines a quality of a packet based on the number of chunks received.
39. The non-transitory computer-readable medium of any of claims 28 - 37 wherein the data packet comprises a header comprising control and metadata, and the method comprises updating the header in the received packet to indicate that chunks are being dropped from the packet.
40. The non-transitory computer-readable medium of any of claims 28 - 39 wherein the computer instructions cause the one or more processors to perform the step of notifying a sender of the plurality of chunks of data in the payload that chunks are being dropped from the payload, and in response to the notifying, receiving any chunks which were not previously received.
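Claims 10, 18, and 38 above recite a Q-entropy function that determines a quality of a packet from the chunks received. The disclosure does not fix a particular formula, so the sketch below uses an assumed significance-weighted fraction purely for illustration; the names `packet_quality` and `needs_retransmission` are hypothetical.

```python
# Illustrative receiver-side quality evaluation in the spirit of the
# claimed Q-entropy function: quality grows with the total significance
# of the chunks actually delivered. The weighting is an assumption.
from typing import Sequence


def packet_quality(significances: Sequence[int],
                   received_flags: Sequence[bool]) -> float:
    """Fraction of total chunk significance present in the packet."""
    total = sum(significances)
    if total == 0:
        return 0.0
    got = sum(s for s, ok in zip(significances, received_flags) if ok)
    return got / total


def needs_retransmission(significances: Sequence[int],
                         received_flags: Sequence[bool],
                         threshold: float = 0.5) -> bool:
    """Receiver-side decision: request only the missing chunks when the
    delivered quality falls below an application-chosen threshold."""
    return packet_quality(significances, received_flags) < threshold
```

Under this sketch, a packet that lost only its least-significant chunks can still score above the threshold, matching the claimed benefit that partial delivery may be acknowledged without re-transmission.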
PCT/US2020/027876 2019-04-12 2020-04-13 Chunk based network qualitative services WO2020210780A1 (en)

Applications Claiming Priority (6)

Application Number Priority Date Filing Date Title
US201962833014P 2019-04-12 2019-04-12
US201962833129P 2019-04-12 2019-04-12
US62/833,129 2019-04-12
US62/833,014 2019-04-12
US201962834730P 2019-04-16 2019-04-16
US62/834,730 2019-04-16

Publications (1)

Publication Number Publication Date
WO2020210780A1 true WO2020210780A1 (en) 2020-10-15

Family

ID=70482869

Family Applications (2)

Application Number Title Priority Date Filing Date
PCT/US2020/027872 WO2020210779A2 (en) 2019-04-12 2020-04-12 Coded data chunks for network qualitative services
PCT/US2020/027876 WO2020210780A1 (en) 2019-04-12 2020-04-13 Chunk based network qualitative services

Family Applications Before (1)

Application Number Title Priority Date Filing Date
PCT/US2020/027872 WO2020210779A2 (en) 2019-04-12 2020-04-12 Coded data chunks for network qualitative services

Country Status (1)

Country Link
WO (2) WO2020210779A2 (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20230088536A1 (en) * 2020-05-30 2023-03-23 Huawei Technologies Co., Ltd. Network contracts in communication packets
WO2023163802A1 (en) * 2022-02-25 2023-08-31 Futurewei Technologies, Inc. Media aware rtf packet dropping

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115190080A (en) * 2021-04-02 2022-10-14 维沃移动通信有限公司 Congestion control method and device and communication equipment
CN116074891A (en) * 2021-10-29 2023-05-05 华为技术有限公司 Communication method and related device
CN116708175B (en) * 2023-08-01 2023-10-20 深圳市联合信息技术有限公司 Operation and maintenance optimization scheduling method for remote information system

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2000057606A1 (en) * 1999-03-23 2000-09-28 Telefonaktiebolaget Lm Ericsson (Publ) Discarding traffic in ip networks to optimize the quality of speech signals

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8693501B2 (en) * 2010-11-23 2014-04-08 The Chinese University Of Hong Kong Subset coding for communication systems
CN103988458B (en) * 2011-12-09 2017-11-17 华为技术有限公司 The method of coding network message in network based on content center network


Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
MARK HANDLEY ET AL: "Re-architecting datacenter networks and stacks for low latency and high performance", PROCEEDINGS OF THE CONFERENCE OF THE ACM SPECIAL INTEREST GROUP ON DATA COMMUNICATION , SIGCOMM '17, ACM PRESS, NEW YORK, NEW YORK, USA, 7 August 2017 (2017-08-07), pages 29 - 42, XP058370888, ISBN: 978-1-4503-4653-5, DOI: 10.1145/3098822.3098825 *
RICHARD LI ET AL: "A Framework for Qualitative Communications Using Big Packet Protocol", ARXIV.ORG, CORNELL UNIVERSITY LIBRARY, 201 OLIN LIBRARY CORNELL UNIVERSITY ITHACA, NY 14853, 25 June 2019 (2019-06-25), XP081379662 *
ZAHEER AMER ET AL: "Smart trimming of video from edge, for fine-grained adaptive multicast", 2013 IEEE 9TH INTERNATIONAL CONFERENCE ON EMERGING TECHNOLOGIES (ICET), IEEE, 9 December 2013 (2013-12-09), pages 1 - 6, XP032569588, DOI: 10.1109/ICET.2013.6743504 *
ZAHEER AMER ET AL: "Smart video packet trimming technique over congested networks", 2015 INTERNATIONAL CONFERENCE ON INFORMATION AND COMMUNICATION TECHNOLOGY CONVERGENCE (ICTC), IEEE, 28 October 2015 (2015-10-28), pages 285 - 290, XP032829849, DOI: 10.1109/ICTC.2015.7354549 *


Also Published As

Publication number Publication date
WO2020210779A3 (en) 2020-11-19
WO2020210779A2 (en) 2020-10-15

Similar Documents

Publication Publication Date Title
WO2020210780A1 (en) Chunk based network qualitative services
US11902150B2 (en) Systems and methods for adaptive routing in the presence of persistent flows
US9391907B2 (en) Packet aggregation
US7724750B2 (en) Expedited data transmission in packet based network
US8169909B2 (en) Optimization of a transfer layer protocol connection
US10708819B2 (en) Back-pressure control in a telecommunications network
US10785677B2 (en) Congestion control in a telecommunications network
WO2017097201A1 (en) Data transmission method, transmission device and receiving device
US20230163875A1 (en) Method and apparatus for packet wash in networks
EP4022858B1 (en) Systems and methods for wireless communication
US9473274B2 (en) Methods and systems for transmitting data through an aggregated connection
WO2020163124A1 (en) In-packet network coding
CA3061005C (en) Single-stream aggregation protocol
US10299167B2 (en) System and method for managing data transfer between two different data stream protocols
WO2021101640A1 (en) Method and apparatus of packet wash for in-time packet delivery
WO2023241649A1 (en) Method and apparatus for managing a packet received at a switch
CN116032421A (en) Ethernet link control device and storage medium
EP3488571A1 (en) Resource efficient forwarding of guaranteed and non-guaranteed data packets

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20724267

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20724267

Country of ref document: EP

Kind code of ref document: A1