WO2020229905A1 - Multi-timescale packet marker - Google Patents

Multi-timescale packet marker Download PDF

Info

Publication number
WO2020229905A1
Authority
WO
WIPO (PCT)
Prior art keywords
bitrate
packet
tvf
valid
measured
Prior art date
Application number
PCT/IB2020/053758
Other languages
French (fr)
Inventor
Szilveszter NÁDAS
Ferenc FEJES
Gergő GOMBOS
Sándor LAKI
Original Assignee
Telefonaktiebolaget Lm Ericsson (Publ)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Telefonaktiebolaget Lm Ericsson (Publ) filed Critical Telefonaktiebolaget Lm Ericsson (Publ)
Priority to US17/606,579 priority Critical patent/US11695703B2/en
Publication of WO2020229905A1 publication Critical patent/WO2020229905A1/en

Classifications

    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04L - TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 47/00 - Traffic control in data switching networks
    • H04L 47/10 - Flow control; Congestion control
    • H04L 47/22 - Traffic shaping
    • H04L 47/215 - Flow control; Congestion control using token-bucket
    • H04L 47/28 - Flow control; Congestion control in relation to timing considerations
    • H04L 47/283 - Flow control; Congestion control in relation to timing considerations in response to processing delays, e.g. caused by jitter or round trip time [RTT]
    • H04L 47/31 - Flow control; Congestion control by tagging of packets, e.g. using discard eligibility [DE] bits
    • H04L 43/00 - Arrangements for monitoring or testing data switching networks
    • H04L 43/08 - Monitoring or testing based on specific metrics, e.g. QoS, energy consumption or environmental parameters
    • H04L 43/0876 - Network utilisation, e.g. volume of load or congestion level
    • H04L 43/0888 - Throughput

Definitions

  • the present disclosure relates generally to bandwidth management in communications networks, and more particularly, to systems and methods for resource sharing using per packet marking over multiple timescales.
  • In some cases, a “bottleneck” exists that can negatively impact the resource sharing. In these cases, the “bottleneck” must be managed efficiently.
  • A “bottleneck” is a location in the network where a single or limited number of components or resources affects the capacity or performance of the network. One way to handle such bottlenecks is by entering a “bottleneck mode” and marking packets.
  • Edge nodes assign a label to each packet indicating the “importance value” of each packet. For example, the packets communicated in response to an emergency call might have a higher priority and be considered more important than packets exchanged during a session where the user merely surfs the Internet.
  • the assigned importance values are used in determining how to share the bandwidth.
  • Packets are usually associated with “packet flows,” or simply “flows.”
  • the importance values that are assigned to the packets of one flow can be different from the importance values assigned to packets in other flows.
  • the importance values of packets within the same flow can be different. In times of congestion, for example, this allows a network entity to drop packets having the lowest importance first.
  • The length of a time interval is referred to as a “timescale.”
  • For so-called “bursty” traffic, the bandwidth that is measured on multiple timescales (e.g., Round Trip Time (RTT), 1 s, session duration, etc.) usually results in different values.
  • Some methods of resource sharing control are based on bandwidth measured either on a short timescale (e.g. RTT) or on a very long timescale (e.g. in the form of a monthly volume cap).
  • The need for fairness on different timescales is illustrated by the example of short bursty flows and long flows, sometimes referred to, respectively, as “mice and elephants.” Contrary to fairness methods in which silent periods are included, the performance of a given session is generally described by the bandwidth experienced during the whole session.
  • Some existing methods implement multi-timescale profiling by assigning a plurality of token buckets, each of which represents a different timescale, to the same drop precedence level. In these methods, packets are marked with a given drop precedence level when all related buckets contain a predefined number of tokens.
  • Embodiments of the present disclosure configure multiple timescales (TSs), as well as a Throughput-Value Function (TVF) for each TS. More particularly, embodiments of the present disclosure efficiently measure the bitrates of incoming packets on all TSs. Then, starting from the longest TS and moving towards the shortest, embodiments of the present disclosure determine a distance between the TVFs of different TSs at the measured bitrates.
  • embodiments of the present disclosure also provide a method for marking packets.
  • the present embodiments select a random throughput value between 0 and the bitrate measured on the shortest TS.
  • embodiments of the present disclosure select a TVF, as well as the distances to add to the random value, to determine the packet marking.
  • embodiments of the present disclosure re-use the existing per-packet value (PPV) core stateless schedulers in the core of the network, and provide an optimized implementation where bitrate measurement on longer timescales is not updated for each packet arrival.
  • A method of managing shared resources using per-packet marking comprises assigning a plurality of throughput-value functions (TVFs) to a plurality of timescales (TSs). Each TVF is assigned to a respective TS, and each TS is associated with one or more valid bitrate regions.
  • the method also calls for determining a plurality of measured bitrates based on the plurality of TSs, determining a random bitrate, and determining one or more distances between the plurality of TVFs. Each distance defines an invalid bitrate region between two TVFs.
  • the method then calls for selecting a TVF based on the random bitrate and the one or more distances between the plurality of TVFs, determining a packet value with which to mark a received packet as a function of the selected TVF, and marking the received packet with the packet value. So marked, the method also calls for outputting the packet marked with the packet value.
  • each TVF relates a plurality of packet values to bitrate throughput, and both the packet values and the bitrate throughput are on a logarithmic scale.
  • the method further comprises updating the plurality of measured bitrates based on the plurality of TSs.
  • the random bitrate is a randomly selected throughput value between 0 and a bitrate measured on a shortest TS.
  • the method further comprises selecting a valid bitrate region from the one or more valid bitrate regions.
  • the selected TVF is associated with the selected valid bitrate region.
  • the method further comprises quantizing the TVFs into a token bucket matrix.
  • each TVF is quantized into one or more token buckets with each token bucket corresponding to a different maximum number of tokens.
  • selecting a TVF based on the random bitrate and the one or more distances between the plurality of TVFs further comprises selecting the TVF based on a measured bitrate.
  • selecting the TVF based on the measured bitrate comprises selecting a first TVF if the random bitrate is less than the measured bitrate, and selecting a second TVF if the random bitrate is larger than the measured bitrate.
  • In one embodiment, the measured bitrates in a first valid bitrate region are less than the measured bitrates in a second valid bitrate region.
  • In one embodiment, the method further comprises updating the one or more distances responsive to determining that the measured bitrates comprising the one or more valid bitrate regions have changed.
  • the method further comprises not updating any valid or invalid bitrate region that is associated with excessively long TSs for each packet arrival.
  • the method further comprises updating the valid or invalid bitrate region responsive to determining that a predefined time period has elapsed. In one embodiment, the method further comprises updating the valid or invalid bitrate region responsive to determining that a predefined number of bits has been received since a last update.
  • the method further comprises updating all of the valid and invalid bitrate regions offline.
  • the resource being managed is bandwidth.
  • the present disclosure also provides a network node configured to manage resources using per-packet marking.
  • the network node comprises communications circuitry and processing circuitry operatively connected to the communications circuitry.
  • the communications circuitry configured to send data packets to, and receive data packets from, one or more other nodes via a communications network.
  • The processing circuitry is configured to assign a plurality of throughput-value functions (TVFs) to a plurality of timescales (TSs), with each TVF being assigned to a respective TS, and wherein each TS is associated with one or more valid bitrate regions, determine a plurality of measured bitrates based on the plurality of TSs, and determine a random bitrate.
  • The processing circuitry is also configured to determine one or more distances between the plurality of TVFs, wherein each distance defines an invalid bitrate region between two TVFs, select a TVF based on the random bitrate and the one or more distances between the plurality of TVFs, and determine a packet value with which to mark a received data packet as a function of the selected TVF. With the packet value determined, the processing circuitry is further configured to mark the received data packet with the packet value, and output the data packet marked with the packet value via the communications circuitry.
  • each TVF relates a plurality of packet values to bitrate throughput, and wherein both the packet values and the bitrate throughput are on a logarithmic scale.
  • the processing circuitry is further configured to update the plurality of measured bitrates based on the plurality of TSs.
  • the random bitrate is a randomly selected throughput value between 0 and a bitrate measured on a shortest TS.
  • the processing circuitry is further configured to select a valid bitrate region from the one or more valid bitrate regions.
  • the selected TVF is associated with the selected valid bitrate region.
  • the processing circuitry is further configured to quantize the TVFs into a token bucket matrix, with each TVF being quantized into one or more token buckets, and with each token bucket corresponding to a different maximum number of tokens.
  • the processing circuitry is further configured to select the TVF based on a measured bitrate. In one embodiment, to select the TVF based on the measured bitrate, the processing circuitry is further configured to select a first TVF if the random bitrate is less than the measured bitrate, and select a second TVF if the random bitrate is larger than the measured bitrate.
  • In one embodiment, the measured bitrates in a first valid bitrate region are less than the measured bitrates in a second valid bitrate region.
  • In one embodiment, the processing circuitry is further configured to update the one or more distances responsive to determining that the measured bitrates comprising the one or more valid bitrate regions have changed.
  • the processing circuitry is further configured to not update any valid or invalid bitrate region that is associated with excessively long TSs for each packet arrival.
  • the processing circuitry is further configured to update the valid or invalid bitrate region responsive to determining that a predefined time period has elapsed.
  • the processing circuitry is further configured to update the valid or invalid bitrate region responsive to determining that a predefined number of bits has been received since a last update.
  • the processing circuitry is further configured to update all of the valid and invalid bitrate regions offline.
  • the resource being managed is bandwidth.
  • the present disclosure provides a non-transitory computer readable medium storing a control application.
  • The control application comprises instructions that, when executed by processing circuitry of a network node configured to manage resources using per-packet marking, cause the network node to assign a plurality of throughput-value functions (TVFs) to a plurality of timescales (TSs).
  • Each TVF is assigned to a respective TS, and each TS is associated with one or more valid bitrate regions.
  • the instructions when executed by the processing circuitry, also cause the network node to determine a plurality of measured bitrates based on the plurality of TSs, determine a random bitrate, determine one or more distances between the plurality of TVFs, wherein each distance defines an invalid bitrate region between two TVFs, select a TVF based on the random bitrate and the one or more distances between the plurality of TVFs, determine a packet value with which to mark a received data packet as a function of the selected TVF, mark the received data packet with the packet value, and output the data packet marked with the packet value via the communications circuitry.
  • the present disclosure provides a system for managing resources using per-packet marking.
  • The system comprises a network node configured to assign a plurality of throughput-value functions (TVFs) to a plurality of timescales (TSs). Each TVF is assigned to a respective TS, and each TS is associated with one or more valid bitrate regions.
  • the network node is also configured to determine a plurality of measured bitrates based on the plurality of TSs, and determine a random bitrate.
  • The network node is also configured to determine one or more distances between the plurality of TVFs, wherein each distance defines an invalid bitrate region between two TVFs, select a TVF based on the random bitrate and the one or more distances between the plurality of TVFs, and determine a packet value with which to mark a received data packet as a function of the selected TVF. With the packet value determined, the network node is configured to mark the received data packet with the packet value, and output the data packet marked with the packet value.
  • Figure 1 is a graph illustrating multiple TVFs assigned to a corresponding number of TSs.
  • Figure 2 is a graph illustrating a quantized TVF and a token bucket model.
  • Figure 3 is a graph illustrating the determination of a packet value for a multiple TVF profiler.
  • Figure 4 is a flow diagram illustrating a method for computing the Δᵢ values according to one embodiment of the present disclosure.
  • Figure 5 is a flow diagram illustrating a method for computing a packet value according to one embodiment of the present disclosure.
  • Figure 6 is a functional block diagram illustrating functions implemented by a packet marking node to mark packets according to one embodiment of the present disclosure.
  • Figures 7A-7B are flow diagrams illustrating a method for managing shared resources using per-packet marking according to one embodiment of the present disclosure.
  • Figure 8 is a schematic diagram illustrating a packet marker node configured according to an embodiment of the present disclosure.
  • Figure 9 illustrates a computer program product comprising code executed by the processing circuitry of a packet marker node to mark incoming packets according to one embodiment of the present disclosure.
  • Packets are marked in accordance with the assigned drop precedence level of all related buckets containing a predefined number of tokens.
  • Quantizing a TVF to utilize a plurality of token buckets is also not helpful. Particularly, an unrealistic number of token buckets will be required as the number of drop precedence levels increases (e.g., to more than 10). This makes packet marking inefficient, both in memory usage and in computational demand.
  • Embodiments of the present disclosure address these challenges by configuring a TVF for each of a plurality of TSs. More particularly, embodiments of the present disclosure efficiently measure the bitrates of incoming packets on all TSs. Each TVF is then graphed to indicate the throughput-packet value relationship for that TVF. Then, starting from the longest TS and moving towards the shortest TS, a distance is determined between the TVFs of different TSs at the measured bitrates. To determine the packet marking, a random throughput value between 0 and the bitrate measured on the shortest TS is selected. Then, depending on how the random throughput value relates to the measured bitrates, a TVF and the distances to add to the random throughput value are selected to determine the packet marking.
  • embodiments of the present disclosure re-use the existing PPV core stateless schedulers in the core of the network, and provide an optimized implementation where bitrate measurement on longer timescales is not updated for each packet arrival.
  • embodiments of the present disclosure provide benefits and advantages that current methods of per-packet marking based bandwidth sharing control are not able to provide. For example, not only do the embodiments described herein implement multi-timescale fairness, but they also provide a flexible way to control that multi-timescale fairness. Additionally, the embodiments described herein implement a fine-grained control of both traffic mix and resource bandwidth that is independent of other resource sharing control. Moreover, unlike prior art methods that define algorithms for a single buffer, implementing embodiments of the present disclosure requires no changes to the core of the network. This allows for fast implementation, while also minimizing any additional memory and computational requirements placed on the core of the network.
  • FIG. 1 is a graph 10 illustrating an example for four TSs.
  • the embodiments of the present disclosure define one TVF per TS rather than a single TVF for all TSs.
  • Both the x-axis (i.e., labeled “THROUGHPUT”) and the y-axis (i.e., labeled “PACKET VALUE”) are on a logarithmic scale, and the TVFs are numbered from the shortest TS to the longest TS, with TVF₄ indicating the longest TS and TVF₁ indicating the shortest TS.
  • One of the goals of the present disclosure is to achieve higher bitrates for smaller bursts, i.e., TVFᵢ(x) > TVFᵢ₊₁(x) for all i and x.
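To make this ordering concrete, the bullet above can be illustrated with a family of hypothetical power-law TVFs (straight, decreasing lines on the log-log axes of Figure 1). The constants C below are illustrative assumptions, not values taken from the disclosure:

```python
# Hypothetical power-law TVFs: TVF_i(x) = C_i / x, which plot as straight
# lines of slope -1 on log-log axes.  Choosing C1 > C2 > C3 > C4 guarantees
# TVF_i(x) > TVF_{i+1}(x) for every throughput x, so smaller bursts
# (shorter timescales) always receive higher packet values.
C = [8e9, 4e9, 2e9, 1e9]  # assumed constants for TVF1 (shortest TS) .. TVF4

def tvf(i, x):
    """Packet value for throughput x (bit/s) on timescale index i (0-based)."""
    return C[i] / x

# The ordering property holds at any throughput, e.g. x = 5 Mbit/s:
assert all(tvf(i, 5e6) > tvf(i + 1, 5e6) for i in range(len(C) - 1))
```

Any other invertible, decreasing curves with the same pairwise ordering would serve equally well; the power-law form is chosen only because its inverse is analytic.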
  • a given TVF can be quantized into token buckets.
  • the TVFs in Figure 1 could be converted to a token bucket matrix.
  • a token bucket model such as the one illustrated in Figure 2, for example, can be used to develop an efficient implementation.
  • Figure 2 is a graph 20 illustrating an example of a simple quantized TVF.
  • Each TVF in Figure 2 (i.e., TVF₁ and TVF₂) is quantized to two token buckets: BS₁₁, BS₂₁ for TVF₁ and BS₁₂, BS₂₂ for TVF₂. This results in a 2x2 token bucket matrix.
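As a sketch of how such a quantization might look, the snippet below derives a 2x2 matrix of bucket fill rates from two hypothetical TVFs and two discrete packet-value levels. All names and constants are illustrative assumptions, not values from the disclosure:

```python
# Quantize each TVF into one token bucket per discrete packet-value level.
# Bucket (j, i) refills at the throughput where TVF_i crosses value level j,
# so a packet can carry value level j only while every bucket in column j
# still holds tokens (cf. the 2x2 matrix of Figure 2).
C = [4e9, 2e9]          # hypothetical TVF1, TVF2 with TVF_i(x) = C_i / x
LEVELS = [100.0, 50.0]  # discrete packet values PV1 > PV2

def bucket_rates():
    """Return the matrix of bucket fill rates in bit/s:
    row i holds the buckets quantized from TVF_(i+1)."""
    return [[c / v for v in LEVELS] for c in C]

R = bucket_rates()
# R[0] -> TVF1's buckets (BS11, BS21); R[1] -> TVF2's buckets (BS12, BS22)
```

Because TVF₁ lies above TVF₂, every bucket in the first row refills faster than its counterpart in the second row, which is what makes the lower-row bucket the binding constraint on longer timescales.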
  • A packet can be marked to Packet Value (PV) PV₁ if both token buckets with bitrates R₁₂ and R₁₁ contain at least a predetermined number of tokens.
  • BS₁₁ will first be emptied before BS₁₂.
  • Consider the bitrate R₂ on TS₂ (i.e., the timescale associated with TVF₂). As long as BS₂₂ has not yet been emptied, TVF₂ will be representative when marking packets until bitrate R₂ is reached.
  • Above R₂, the region between R₂ and R₂+Δ₂ cannot be used when packet marking is performed.
  • Figure 3 is a graph illustrating the above concepts using generic rate measurement algorithms and using 4 TSs and 4 TVFs.
  • Both the x-axis (i.e., “THROUGHPUT”) and the y-axis (i.e., “PACKET VALUE”) in Figure 3 are logarithmic.
  • Each TVF (i.e., TVF₁, TVF₂, TVF₃, and TVF₄) is associated with a different timescale.
  • TVF₁ belongs to the shortest TS and TVF₄ belongs to the longest TS.
  • The results of the rate measurements on the different timescales are denoted in Figure 3 as R₁, R₂, R₃, and R₄.
  • Figure 3 is a graph 30 illustrating the distances Δ between the TVFs. Particularly, Δ₃ denotes the distance between TVF₄ and TVF₃ at the intersection of TVF₄ and R₄. Note that Δ₃ does depend on the value of R₄, because of the logarithmic scales. Region “2” is the region where Δ₃ lies, and it is not used when determining a PV.
  • Δ₂ and Δ₁ can be similarly determined, with the only difference being that the intersection of TVF₃ and R₃+Δ₃ determines Δ₂, and the intersection of TVF₂ and R₂+Δ₃+Δ₂ determines Δ₁. As seen in Figure 3, all Δs (i.e., Δ₁, Δ₂, and Δ₃) have a corresponding forbidden region (i.e., “invalid” regions 2, 4, 6).
  • a PV for an incoming packet can be determined by:
  • The areas marked 1, 3, 5, and 7 in Figure 3 indicate the corresponding “valid” regions of TVFs that the present embodiments actually use to determine the packet value (PV).
  • FIG. 4 is a flow diagram illustrating a method 40 for determining Δᵢ for a general case according to one embodiment of the present disclosure.
  • Method 40 begins with initializing the variables used in calculating Δᵢ (box 42). Once initialization is complete, the value of i is checked (box 44). As long as i is greater than 0, method 40 computes Δᵢ (box 46).
  • The Δᵢs have to be updated only when the Rᵢs change.
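A minimal sketch of this Δᵢ loop, under the assumption of invertible power-law TVFs of the form TVFᵢ(x) = Cᵢ/x (the constants and measured rates are illustrative only, not from the disclosure):

```python
# Compute D1..D3 from the measured rates, walking from the longest TS
# toward the shortest (method 40).  Each D_i is the horizontal gap from
# TVF_{i+1} across to TVF_i, taken at the measured bitrate R_{i+1} shifted
# by the D offsets already accumulated on the longer timescales.
C = [8e9, 4e9, 2e9, 1e9]  # hypothetical TVF_i(x) = C_i / x (TVF1 = shortest TS)

def tvf(i, x):
    return C[i] / x            # packet value at throughput x

def tvf_inv(i, v):
    return C[i] / v            # throughput at which TVF_i equals value v

def compute_deltas(rates):
    """rates = [R1..R4], measured from the shortest TS to the longest."""
    n = len(rates)
    deltas = [0.0] * (n - 1)
    offset = 0.0
    for i in range(n - 2, -1, -1):     # D3, then D2, then D1 (0-based: 2, 1, 0)
        x = rates[i + 1] + offset      # R_{i+1} plus the already-known deltas
        v = tvf(i + 1, x)              # value at which TVF_{i+1} is left
        deltas[i] = tvf_inv(i, v) - x  # width of the invalid region
        offset += deltas[i]
    return deltas
```

With assumed rates R₁..R₄ of 100, 50, 20, and 10 Mbit/s, this yields Δ₃ = 10, Δ₂ = 30, and Δ₁ = 90 Mbit/s; each Δᵢ depends on the measured rates, which is why the Δs only need recomputing when the Rᵢs change.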
  • Embodiments of the present disclosure do not update the Rᵢs that are associated with excessively long TSs for each packet arrival. Instead, these Rᵢs are updated only if a TS/10 period has elapsed, or when R·TS/10 bits have arrived since the last update.
  • In this case, i can be initialized at the index of the longest updated TS (i.e., the “j” in TSⱼ). This optimizes the packet marker. In particular, as most timescales are likely to be above 1 second, the updates are likely to be infrequent.
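A sketch of this lazy-update rule; the function and parameter names are illustrative assumptions:

```python
# Long-timescale rates need not be refreshed on every packet arrival.
# Refresh R_i for a long TS only when a tenth of the timescale has elapsed,
# or when roughly R_i * TS/10 bits have arrived since the last update,
# whichever comes first.
def needs_update(now, last_update, bits_since_update, rate, ts):
    """now/last_update in seconds, rate in bit/s, ts = timescale in seconds."""
    return (now - last_update) >= ts / 10.0 or \
           bits_since_update >= rate * ts / 10.0
```

For example, a 60 s timescale measured at 10 Mbit/s is touched at most every 6 s or every 60 Mbit of traffic, rather than on every packet.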
  • Embodiments of the present disclosure are configured to update all of the Rᵢs in an offline control logic, rather than in the packet pipeline.
  • Figure 5 is a flow diagram illustrating a method 50 for determining a packet value based on the relation of the random rate r and the Rᵢs. As seen in Figure 5, and using Figure 3 as a further visual aid, method 50 selects a desired region, the appropriate Δ offsets, and a desired TVF.
  • Method 50 begins by initializing some variables used in the computation (box 52), including the random rate r. Once the variables have been initialized, r is compared against Rᵢ, starting from the longest TS (box 54). If r exceeds Rᵢ, the value of i is decremented (box 56) and r is re-checked (box 54). These steps continue in a loop until r no longer exceeds Rᵢ. Once r does not exceed Rᵢ, the PV is computed by evaluating the selected TVFᵢ at r plus the accumulated Δ offsets.
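Putting the loop together with the power-law assumption used earlier, a self-contained sketch of this computation might look as follows (constants, rates, and the exact offset bookkeeping are illustrative assumptions, not the disclosure's definitive implementation):

```python
import random

C = [8e9, 4e9, 2e9, 1e9]  # hypothetical TVF_i(x) = C_i / x (TVF1 = shortest TS)

def packet_value(rates, deltas, r=None):
    """Sketch of method 50.  rates = [R1..R4], measured from the shortest
    TS to the longest (so R1 >= R2 >= R3 >= R4); deltas = [D1, D2, D3].
    Draw a random rate r in (0, R1], then walk from the longest TS toward
    the shortest: while r still exceeds R_i, skip that TVF's region and
    accumulate the width D of the invalid region in between.  Finally,
    evaluate the selected TVF at the shifted rate."""
    if r is None:
        r = random.uniform(0.0, rates[0])
    i = len(rates) - 1                 # start at the longest TS
    offset = 0.0
    while i > 0 and r >= rates[i]:
        offset += deltas[i - 1]        # jump over the invalid region
        i -= 1
    return C[i] / (r + offset)         # PV = TVF_i(r + accumulated offsets)
```

With rates of 100, 50, 20, and 10 Mbit/s and Δs of 90, 30, and 10 Mbit/s, the resulting PV is continuous across region boundaries: just below R₄ = 10 Mbit/s the PV is TVF₄(10 Mbit/s) = 100, and just above it TVF₃(10+10 Mbit/s) = 100 as well.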
  • Figure 6 illustrates the functions implemented at a packet marker device 60 according to an embodiment of the present disclosure.
  • the TVFs and the TSs are first configured (box 62).
  • The packet marker device 60 determines the Rᵢs and the Δᵢs (box 66).
  • Packet marker device 60 determines the Rᵢs and the Δᵢs as previously described using method 40 seen in Figure 4.
  • the packet marker device 60 computes the packet value for the incoming packet (box 68), as previously described in method 50 of Figure 5, for example, and marks the packet with the determined packet value (box 70). The marked packet is then output by the packet marker device 60.
  • the present embodiments utilize bitrate measurement on different timescales.
  • Another embodiment of the present disclosure utilizes a sliding window based measurement. In these latter embodiments, the amount of traffic that arrived during the last TS seconds is divided by the value of the TS.
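A minimal sketch of such a sliding-window measurement; the class and field names are illustrative assumptions:

```python
from collections import deque

# Keep (arrival_time, bits) for every packet seen in the last TS seconds;
# the measured bitrate is the number of bits inside the window divided by
# the timescale value.
class WindowRate:
    def __init__(self, ts):
        self.ts = ts                # timescale in seconds
        self.window = deque()       # (time, bits) pairs still in the window
        self.bits = 0               # running sum of bits in the window

    def add(self, now, bits):
        """Record an arrival and evict packets older than TS seconds."""
        self.window.append((now, bits))
        self.bits += bits
        while self.window and self.window[0][0] <= now - self.ts:
            _, old_bits = self.window.popleft()
            self.bits -= old_bits

    def rate(self):
        return self.bits / self.ts  # bit/s averaged over the timescale
```

One such measurer per TS yields the Rᵢs; since the per-packet state grows with the window length, longer timescales may instead prefer the lazily updated measurement described earlier.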
  • The result of the disclosed configuration on the embodiment seen in Figure 1 will be that TVF₄ determines the resource sharing of long flows. For shorter flows, the TVF corresponding to the TS closest to the flow activity will represent resource sharing compared to other flows, including long flows.
  • Figures 7A-7B are flow diagrams illustrating a method 80 for managing shared resources using per-packet marking according to one embodiment of the present disclosure.
  • Method 80 is implemented by a packet marking device and, as seen in Figure 7A, comprises assigning a plurality of TVFs to a plurality of TSs. Each TVF is assigned to a respective TS, and each TS is associated with one or more valid bitrate regions (box 82).
  • Method 80 next calls for the packet marking device determining a plurality of measured bitrates based on the plurality of TSs (box 84), as well as a random bitrate (box 86).
  • Method 80 then calls for the packet marking device determining one or more distances between the plurality of TVFs.
  • each distance defines an invalid bitrate region between two TVFs (box 88).
  • Method 80 then calls for the packet marking device selecting a TVF based on the random bitrate and the one or more distances between the plurality of TVFs (box 90).
  • In one embodiment, selecting a TVF based on the random bitrate and the one or more distances between the plurality of TVFs comprises selecting the TVF based on a measured bitrate.
  • Such selection may comprise, in one aspect, selecting a first TVF if the random bitrate is less than the measured bitrate, and selecting a second TVF if the random bitrate is larger than the measured bitrate. So selected, method 80 then calls for determining a packet value with which to mark a received packet as a function of the selected TVF (box 92). So determined, method 80 then calls for the packet marking device to mark the received packet with the packet value (box 94) and output the packet marked with the packet value (box 96).
  • The packet marking device also performs other functions in accordance with method 80.
  • method 80 calls for updating the plurality of measured bitrates based on the plurality of TSs (box 98).
  • Method 80 also calls for the packet marking device to select a valid bitrate region from the one or more valid bitrate regions (box 100).
  • In this embodiment, the TVF that was selected previously (i.e., box 90 in Figure 7A) is associated with the selected valid bitrate region.
  • Method 80 then calls for quantizing the TVFs into a token bucket matrix (box 102). Particularly, in this embodiment, each TVF is quantized into one or more token buckets, with each token bucket corresponding to a different maximum number of tokens.
  • Method 80 also calls for updating the one or more distances responsive to determining that the measured bitrates comprising the one or more valid bitrate regions have changed (box 104).
  • Method 80 further calls for not updating any valid or invalid bitrate region that is associated with excessively long TSs for each packet arrival (box 106). However, responsive to determining that a predefined time period has elapsed, or that a predefined number of bits has been received since a last update, a valid or invalid bitrate region is updated (box 108). In one embodiment, method 80 calls for updating all of the valid and invalid bitrate regions offline (box 110).
  • FIG 8 is a schematic diagram illustrating an exemplary packet marking device 120 configured according to embodiments of the present disclosure.
  • packet marking device 120 comprises processing circuitry 122, memory circuitry 124 configured to store a control program 126, and communications circuitry 128.
  • the processing circuitry 122 may comprise one or more microprocessors, microcontrollers, hardware circuits, firmware, or a combination thereof, and controls the operation of the packet marking device 120 as herein described.
  • Memory circuitry 124 stores program instructions and data needed by the processing circuitry 122 to implement the present embodiments herein described.
  • Permanent data and program instructions executed by the processing circuitry 122 may be stored in a non-volatile memory, such as a read only memory (ROM), electronically erasable programmable read only memory (EEPROM), flash memory, or other non-volatile memory device.
  • a volatile memory such as a random access memory (RAM) may be provided for storing temporary data.
  • the memory circuitry 124 may comprise one or more discrete memory devices, or may be integrated with the processing circuitry 122.
  • the communications interface circuitry 128 is configured to send messages to, and receive messages from, one or more other nodes in a communications network. Such communications include incoming packets to be marked according to the present embodiments, as well as the outgoing packets that have been marked according to the present embodiments.
  • FIG. 9 illustrates the main functional components of an exemplary processing circuitry 122 for a packet marking device 120.
  • Processing circuitry 122 comprises a TVF and TS configuration module/unit 132, a packet receiving module/unit 134, an Rᵢ and Δᵢ determination module/unit 136, a PV determination module/unit 138, and a marked packet sending module/unit 140.
  • The TVF and TS configuration module/unit 132, the packet receiving module/unit 134, the Rᵢ and Δᵢ determination module/unit 136, the PV determination module/unit 138, and the marked packet sending module/unit 140 may, according to the present embodiments, be implemented by one or more microprocessors, microcontrollers, hardware, firmware, or a combination thereof.
  • the TVF and TS configuration module/unit 132 is configured to determine the TVFs and the TSs, and to assign a TVF to each of the plurality of TSs, as previously described.
  • The packet receiving module/unit 134 is configured to receive incoming packets that are to be marked according to embodiments of the present disclosure, while the Rᵢ and Δᵢ determination module/unit 136 is configured to compute the Rᵢs and Δᵢs, and to select the desired Δ offsets, as previously described.
  • the PV determination module/unit 138 is configured to select a desired TVF to compute the PV that will be utilized to mark the incoming packet, as previously described.
  • the marked packet sending module/unit 140 is configured to send the marked packet to a destination node, as previously described.
  • Embodiments further include a carrier containing a computer program, such as control program 126.
  • This carrier may comprise one of an electronic signal, optical signal, radio signal, or computer readable storage medium.
  • Embodiments herein also include a computer program product (e.g., control program 126) stored on a non-transitory computer readable (storage or recording) medium (e.g., memory circuitry 124) and comprising instructions that, when executed by a processor of an apparatus, cause the apparatus (e.g., a packet marking device 120) to perform as described above.
  • A computer program product may be, for example, control program 126.
  • Embodiments further include a computer program product, such as control program 84, comprising program code portions for performing the steps of any of the embodiments herein when the computer program product is executed by a computing device, such as packet marking device 120.
  • This computer program product may be stored on a computer readable recording medium.
  • The packet marker configured according to the present embodiments operates per traffic aggregate, and no coordination between separate entities is required. Therefore, a packet marking device configured in accordance with the present embodiments can be implemented in the cloud.
  • A packet marking node configured according to the present embodiments can control resource sharing continuously based on throughput fairness on several timescales.
  • Multiple TVFs are configured to represent resource sharing on multiple timescales.
  • The TVFs are configured based on a relation between the TVFs.
  • The present embodiments measure bitrates on multiple timescales in the profiler, and determine the Δ values and the valid regions of the TVFs based on that information.
  • The present embodiments also determine where the Δs are positioned based on the distance between TVFs at selected bitrates, as well as the packet value based on a random bitrate.
  • The random bitrate is between zero and the rate measured on the shortest timescale.
  • The present embodiments also configure a packet marking node to select the right TVF and the right Δs to add to the random bitrate r, and further, provide solutions for optimizing rate measurements.

Abstract

A network node (120), such as a packet marking node, efficiently measures the bitrates of incoming packets on a plurality of timescales (TSs). A throughput-value function (TVF) is then graphed to indicate the throughput-packet value relationship for that TVF. Then, starting from the longest TS and moving towards the shortest TS, the packet marking node determines (88) a distance between the TVFs of different TSs at the measured bitrates. To determine the packet marking, the packet marking node selects a random throughput value between 0 and the bitrate measured on the shortest TS. Depending on how the random value relates to the measured bitrates, a TVF, and the distances to add to the random value, is then selected to determine (92) a packet value (PV) with which to mark the packet. The packet marking node then marks (94) the packet according to the determined PV.

Description

MULTI-TIMESCALE PACKET MARKER
RELATED APPLICATIONS
This application claims the benefit of U.S. Provisional Application No. 62/847497, filed 14 May 2019, the entire disclosure of which is hereby incorporated by reference herein.
TECHNICAL FIELD
The present disclosure relates generally to bandwidth management in communications networks, and more particularly, to systems and methods for resource sharing using per packet marking over multiple timescales.
BACKGROUND
Communications networks, such as radio communications networks, are ubiquitous in today’s society. In such networks, certain resources, such as bandwidth, are limited and often shared among users or devices. In most cases, the amount of resources, such as the available bandwidth, is controlled. Sometimes, though, a “bottleneck” exists that can negatively impact the resource sharing. In these cases, the “bottleneck” must be managed efficiently. As is known in the art, a “bottleneck” is a location in the network where a single or limited number of components or resources affects the capacity or performance of the network. One way to handle such bottlenecks is by entering a “bottleneck mode” and marking packets.
Generally, when operating in a bottleneck mode, current methods of per-packet marking based bandwidth sharing control depend on the importance of the packets. More specifically, in the bottleneck mode, edge nodes assign a label to each packet indicating the “importance value” of each packet. For example, the packets communicated in response to an emergency call might have a higher priority and be considered more important than packets exchanged during a session where the user merely surfs the Internet. The assigned importance values are used in determining how to share the bandwidth.
Packets are usually associated with “packet flows,” or simply “flows.” Generally, the importance values that are assigned to the packets of one flow can be different from the importance values assigned to packets in other flows. Similarly, the importance values of packets within the same flow can be different. In times of congestion, for example, this allows a network entity to drop packets having the lowest importance first.
Methods currently exist to control bandwidth sharing among flows even when per-flow queuing is not possible. Two such methods are described, for example, in U.S. Pat. No. 9,948,563, entitled “Transmitting Node, Receiving Node and Methods Therein,” and in the paper entitled “Towards a Congestion Control-Independent Core-Stateless AQM,” ANRW '18: Proceedings of the Applied Networking Research Workshop, pp. 84-90. Both methods are based on per-packet marking based bandwidth sharing control, and define algorithms for a single buffer that result in a shared delay among flows.
“Fairness” can also be considered when marking on a per-packet basis. Generally, “fairness” is interpreted as being the equal, or weighted, throughput of data experienced by one or more entities, such as a node or service endpoint, for example, processing a traffic flow or aggregated traffic flows. Such “throughput” is a measure derived from the total packet transmission during a time interval. The length of a time interval is referred to as a “timescale.” For so-called “bursty” traffic, the bandwidth that is measured on multiple timescales (e.g., Round Trip Time (RTT), 1 s, session duration, etc.) usually results in different values.
Some methods of resource sharing control are based on bandwidth measured either on a short timescale (e.g., RTT) or on a very long timescale (e.g., in the form of a monthly volume cap). The need for fairness on different timescales is illustrated by the example of short bursty flows and long flows, sometimes referred to, respectively, as “mice and elephants.” Contrary to the fairness methods, in which silent periods are included, the performance of a given session is generally described by the bandwidth experienced during the whole session.
Other methods of resource sharing control utilize token buckets. More particularly, these methods implement multi-timescale profiling by assigning a plurality of token buckets - each of which represents a different timescale - to a same Drop Precedence level. In these methods, packets are marked in accordance with the given drop precedence level of all related buckets containing a predefined number of tokens.
SUMMARY
Embodiments of the present disclosure configure multiple timescales (TSs), as well as a Throughput-Value Function (TVF) for each TS. More particularly, embodiments of the present disclosure efficiently measure the bitrates of incoming packets on all TSs. Then, starting from the longest TS and moving towards the shortest, embodiments of the present disclosure determine a distance between the TVFs of different TSs at the measured bitrates.
Additionally, embodiments of the present disclosure also provide a method for marking packets. To determine the packet marking, the present embodiments select a random throughput value between 0 and the bitrate measured on the shortest TS. Depending on how the random value relates to the measured bitrates, embodiments of the present disclosure select a TVF, as well as the distances to add to the random value, to determine the packet marking.
Additionally, embodiments of the present disclosure re-use the existing per-packet value (PPV) core stateless schedulers in the core of the network, and provide an optimized implementation where bitrate measurement on longer timescales is not updated for each packet arrival.
In one embodiment, a method of managing shared resources using per-packet marking is provided. In this embodiment, the method comprises assigning a plurality of throughput-value functions (TVFs) to a plurality of timescales (TSs). Each TVF is assigned to a respective TS, and each TS is associated with one or more valid bitrate regions. The method also calls for determining a plurality of measured bitrates based on the plurality of TSs, determining a random bitrate, and determining one or more distances between the plurality of TVFs. Each distance defines an invalid bitrate region between two TVFs. The method then calls for selecting a TVF based on the random bitrate and the one or more distances between the plurality of TVFs, determining a packet value with which to mark a received packet as a function of the selected TVF, and marking the received packet with the packet value. So marked, the method also calls for outputting the packet marked with the packet value.
In one embodiment, each TVF relates a plurality of packet values to bitrate throughput, and both the packet values and the bitrate throughput are on a logarithmic scale.
In one embodiment, the method further comprises updating the plurality of measured bitrates based on the plurality of TSs.
In one embodiment, the random bitrate is a randomly selected throughput value between 0 and a bitrate measured on a shortest TS.
In one embodiment, the method further comprises selecting a valid bitrate region from the one or more valid bitrate regions.
In such embodiments, the selected TVF is associated with the selected valid bitrate region.
In one embodiment, the method further comprises quantizing the TVFs into a token bucket matrix. In these embodiments, each TVF is quantized into one or more token buckets with each token bucket corresponding to a different maximum number of tokens.
In such embodiments, selecting a TVF based on the random bitrate and the one or more distances between the plurality of TVFs further comprises selecting the TVF based on a measured bitrate.
In one embodiment, selecting the TVF based on the measured bitrate comprises selecting a first TVF if the random bitrate is less than the measured bitrate, and selecting a second TVF if the random bitrate is larger than the measured bitrate.
In one embodiment, the measured bitrates in a first valid bitrate region are less than the measured bitrates in a second valid bitrate region.
In one embodiment, the method further comprises updating the one or more distances responsive to determining that the measured bitrates comprising the one or more valid bitrate regions have changed.
In one embodiment, the method further comprises not updating, for each packet arrival, any valid or invalid bitrate region that is associated with excessively long TSs.
In another embodiment, the method further comprises updating the valid or invalid bitrate region responsive to determining that a predefined time period has elapsed. In one embodiment, the method further comprises updating the valid or invalid bitrate region responsive to determining that a predefined number of bits has been received since a last update.
In one embodiment, the method further comprises updating all of the valid and invalid bitrate regions offline.
In one embodiment, the resource being managed is bandwidth.
In one embodiment, the present disclosure also provides a network node configured to manage resources using per-packet marking. In this embodiment, the network node comprises communications circuitry and processing circuitry operatively connected to the communications circuitry. The communications circuitry is configured to send data packets to, and receive data packets from, one or more other nodes via a communications network. The processing circuitry is configured to assign a plurality of throughput-value functions (TVFs) to a plurality of timescales (TSs), with each TVF being assigned to a respective TS, and wherein each TS is associated with one or more valid bitrate regions, determine a plurality of measured bitrates based on the plurality of TSs, and determine a random bitrate. The processing circuitry is also configured to determine one or more distances between the plurality of TVFs, wherein each distance defines an invalid bitrate region between two TVFs, select a TVF based on the random bitrate and the one or more distances between the plurality of TVFs, and determine a packet value with which to mark a received data packet as a function of the selected TVF. With the packet value determined, the processing circuitry is further configured to mark the received data packet with the packet value, and output the data packet marked with the packet value via the communications circuitry.
In one embodiment, each TVF relates a plurality of packet values to bitrate throughput, and wherein both the packet values and the bitrate throughput are on a logarithmic scale.
In one embodiment, the processing circuitry is further configured to update the plurality of measured bitrates based on the plurality of TSs.
In one embodiment, the random bitrate is a randomly selected throughput value between 0 and a bitrate measured on a shortest TS.
In one embodiment, the processing circuitry is further configured to select a valid bitrate region from the one or more valid bitrate regions.
In one embodiment, the selected TVF is associated with the selected valid bitrate region.
In one embodiment, the processing circuitry is further configured to quantize the TVFs into a token bucket matrix, with each TVF being quantized into one or more token buckets, and with each token bucket corresponding to a different maximum number of tokens.
In one embodiment, to select a TVF based on the random bitrate and the one or more distances between the plurality of TVFs, the processing circuitry is further configured to select the TVF based on a measured bitrate. In one embodiment, to select the TVF based on the measured bitrate, the processing circuitry is further configured to select a first TVF if the random bitrate is less than the measured bitrate, and select a second TVF if the random bitrate is larger than the measured bitrate.
In one embodiment, the measured bitrates in a first valid bitrate region are less than the measured bitrates in a second valid bitrate region.
In one embodiment, the processing circuitry is further configured to update the one or more distances responsive to determining that the measured bitrates comprising the one or more valid bitrate regions have changed.
In one embodiment, the processing circuitry is further configured not to update, for each packet arrival, any valid or invalid bitrate region that is associated with excessively long TSs.
In one embodiment, the processing circuitry is further configured to update the valid or invalid bitrate region responsive to determining that a predefined time period has elapsed.
In one embodiment, the processing circuitry is further configured to update the valid or invalid bitrate region responsive to determining that a predefined number of bits has been received since a last update.
In one embodiment, the processing circuitry is further configured to update all of the valid and invalid bitrate regions offline.
In one embodiment, the resource being managed is bandwidth.
In one embodiment, the present disclosure provides a non-transitory computer readable medium storing a control application. The control application comprises instructions that, when executed by processing circuitry of a network node configured to manage resources using per-packet marking, cause the network node to assign a plurality of throughput-value functions (TVFs) to a plurality of timescales (TSs). Each TVF is assigned to a respective TS, and each TS is associated with one or more valid bitrate regions. The instructions, when executed by the processing circuitry, also cause the network node to determine a plurality of measured bitrates based on the plurality of TSs, determine a random bitrate, determine one or more distances between the plurality of TVFs, wherein each distance defines an invalid bitrate region between two TVFs, select a TVF based on the random bitrate and the one or more distances between the plurality of TVFs, determine a packet value with which to mark a received data packet as a function of the selected TVF, mark the received data packet with the packet value, and output the data packet marked with the packet value via the communications circuitry.
In one embodiment, the present disclosure provides a system for managing resources using per-packet marking. In this embodiment, the system comprises a network node configured to assign a plurality of throughput-value functions (TVFs) to a plurality of timescales (TSs). Each TVF is assigned to a respective TS, and each TS is associated with one or more valid bitrate regions. The network node is also configured to determine a plurality of measured bitrates based on the plurality of TSs, and determine a random bitrate. The network node is also configured to determine one or more distances between the plurality of TVFs, wherein each distance defines an invalid bitrate region between two TVFs, select a TVF based on the random bitrate and the one or more distances between the plurality of TVFs, and determine a packet value with which to mark a received data packet as a function of the selected TVF. With the packet value determined, the network node is configured to mark the received data packet with the packet value, and output the data packet marked with the packet value via the communications circuitry.
BRIEF DESCRIPTION OF THE DRAWINGS
Figure 1 is a graph illustrating multiple TVFs assigned to a corresponding number of TSs.
Figure 2 is a graph illustrating a quantized TVF and a token bucket model.
Figure 3 is a graph illustrating the determination of a packet value for a multiple TVF profiler.
Figure 4 is a flow diagram illustrating a method for computing Δs according to one embodiment of the present disclosure.
Figure 5 is a flow diagram illustrating a method for computing a packet value according to one embodiment of the present disclosure.
Figure 6 is a functional block diagram illustrating functions implemented by a packet marking node to mark packets according to one embodiment of the present disclosure.
Figures 7A-7B are flow diagrams illustrating a method for managing shared resources using per-packet marking according to one embodiment of the present disclosure.
Figure 8 is a schematic diagram illustrating a packet marker node configured according to an embodiment of the present disclosure.
Figure 9 illustrates a computer program product comprising code executed by the processing circuitry of a packet marker node to mark incoming packets according to one embodiment of the present disclosure.
DETAILED DESCRIPTION
Current methods of per-packet marking based bandwidth sharing control depend on the importance of the packets. Some methods define algorithms for a single buffer that result in a shared delay among flows, while other methods are based on bandwidth that is measured either on a short timescale or on a very long timescale. Still other methods utilize a plurality of token buckets. Each bucket represents a different timescale and is assigned to a same Drop
Precedence level. Packets are marked in accordance with the assigned drop precedence level of all related buckets containing a predefined number of tokens.
However, current methods of per-packet marking based bandwidth sharing control are problematic. For example, methods that utilize token buckets are only suitable for use with a few drop precedence levels. It is not possible with such methods to achieve the same type of fine-grained control of resource sharing that is possible with other methods.
Quantizing a TVF to utilize a plurality of token buckets is also not helpful. Particularly, an unrealistic number of token buckets will be required as the number of drop precedence levels increases (e.g., to more than 10). This makes packet marking inefficient, both in memory usage and in computational demand.
Embodiments of the present disclosure address these challenges by configuring a TVF for each of a plurality of TSs. More particularly, embodiments of the present disclosure efficiently measure the bitrates of incoming packets on all TSs. Each TVF is then graphed to indicate the throughput-packet value relationship for that TVF. Then, starting from the longest TS and moving towards the shortest TS, a distance is determined between the TVFs of different TSs at the measured bitrates. To determine the packet marking, a random throughput value between 0 and the bitrate measured on the shortest TS is selected. Then, depending on how the random throughput value relates to the measured bitrates, a TVF and the distances to add to the random throughput value are selected to determine the packet marking.
Additionally, embodiments of the present disclosure re-use the existing PPV core stateless schedulers in the core of the network, and provide an optimized implementation where bitrate measurement on longer timescales is not updated for each packet arrival.
As described herein, embodiments of the present disclosure provide benefits and advantages that current methods of per-packet marking based bandwidth sharing control are not able to provide. For example, not only do the embodiments described herein implement multi-timescale fairness, but they also provide a flexible way to control that multi-timescale fairness. Additionally, the embodiments described herein implement a fine-grained control of both traffic mix and resource bandwidth that is independent of other resource sharing control. Moreover, unlike prior art methods that define algorithms for a single buffer, implementing embodiments of the present disclosure requires no changes to the core of the network. This allows for fast implementation, while also minimizing any additional memory and computational requirements placed on the core of the network.
Referring now to the drawings, exemplary embodiments of the present disclosure build on the TVF concept to define resource sharing targets. Figure 1, in particular, is a graph 10 illustrating an example for four TSs. As seen in Figure 1, the embodiments of the present disclosure define one TVF per TS rather than a single TVF for all TSs. Both the x-axis (i.e., labeled “THROUGHPUT”) and the y-axis (i.e., labeled “PACKET VALUE”) are on a logarithmic scale, and the TVFs are numbered from the shortest TS to the longest TS, with TVF4 indicating the longest TS and TVF1 indicating the shortest TS. One of the goals of the present disclosure is to achieve higher bitrates for smaller bursts - i.e., TVFi(x) > TVFi+1(x) for all i and x.
A given TVF can be quantized into token buckets. By way of example, the TVFs in Figure 1 could be converted to a token bucket matrix. However, this conversion would mean that an extremely large number of token buckets would be required for implementation (e.g., 3,000 * 4 = 12,000 token buckets). Not only is such a large number of token buckets impractical to implement, as it would negatively impact the availability of both memory and computational resources, it is also highly inefficient. However, a token bucket model, such as the one illustrated in Figure 2, for example, can be used to develop an efficient implementation.
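For illustration only, a single TVF of an assumed invertible form TVF(x) = V/x can be quantized into one token bucket per packet-value level. The function shape and the bucket-size rule below (bucket rate scaled by the timescale) are assumptions of this sketch, not something mandated by the disclosure:

```python
def quantize_tvf(v_const, pv_levels, ts):
    """Quantize one hypothetical TVF(x) = v_const / x into token buckets.

    One bucket per quantized packet-value level: the bucket rate R_ij is the
    throughput at which the TVF crosses that PV, and the bucket size BS_ij
    grows with the timescale so that longer TSs tolerate longer bursts.
    """
    buckets = []
    for pv in sorted(pv_levels, reverse=True):   # highest PV first
        rate = v_const / pv                      # TVF^-1(pv) for TVF(x) = V / x
        buckets.append({"pv": pv, "rate": rate, "size": rate * ts})
    return buckets
```

Applying this to each TVF of Figure 1 would yield one column of the token bucket matrix per timescale, which is exactly where the impractical bucket count comes from.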
In more detail, the embodiment of Figure 2 is a graph 20 illustrating an example of a simple quantized TVF. Each TVF in Figure 2 - i.e., TVF1, TVF2 - is quantized to two token buckets, BS11, BS21 and BS12, BS22, respectively. This results in a 2x2 token bucket matrix.
According to the present disclosure, a packet can be marked to Packet Value (PV) PV1 if both token buckets with bitrates R12 and R11 contain at least a predetermined number of tokens. Thus:
R11 / R12 = R21 / R22
This is because the TVFs are parallel on a logarithmic scale, and the equation
TVFi(x) > TVFi+1(x)
holds true for all i and x. At the same time, the maximum token levels for the token buckets (BSij) are different, because of the timescales. Thus, with respect to the number of tokens in each token bucket BSij:
BS11 < BS12 and BS21 < BS22
Specifically, for a PV1 for a given burst, BS11 will first be emptied before BS12. When BS12 is also emptied, it means that bitrate R2 on TS2 (i.e., the timescale associated with TVF2) has already been reached. Assuming, then, that BS22 has not yet been emptied:
• TVF2 will be representative when marking packets until bitrate R2 is reached; and
• TVF1 will be representative when marking packets when the bitrate is above R2.
However, according to the present embodiments, the region between R2 and R2+Δ2 cannot be used. This means that if packet marking is performed:
• a packet is marked according to TVF2(r) if r is smaller than R2; and
• a packet is marked according to TVF1(r+Δ2) if r is larger than R2.
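As a minimal sketch of this two-TVF decision rule, assuming hypothetical parallel TVFs of the form TVF(x) = V/x; the function shape, constants, and names below are illustrative, not taken from the disclosure:

```python
def mark_two_tvf(tvf1, tvf2, R2, delta2, r):
    """Two-TVF marking rule: r is a throughput the caller drew uniformly
    from (0, R1); R2 is the rate measured on the longer timescale and
    delta2 the width of the forbidden region between the two TVFs."""
    if r < R2:
        return tvf2(r)            # below R2, the longer-timescale TVF2 governs
    return tvf1(r + delta2)       # above R2, skip over the (R2, R2+delta2) gap

# Illustrative TVFs, parallel on a log-log plot because they share the 1/x shape.
tvf1 = lambda x: 8e6 / x
tvf2 = lambda x: 4e6 / x
```

With these assumed constants, a draw below R2 is valued on TVF2 directly, while a draw above R2 is shifted past the forbidden region before being valued on TVF1.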
Figure 3 is a graph illustrating the above concepts using generic rate measurement algorithms and using 4 TSs and 4 TVFs. As above, both the x-axis (i.e., “THROUGHPUT”) and the y-axis (i.e., “PACKET VALUE”) in Figure 3 are logarithmic.
As seen in Figure 3, each TVF - TVF1, TVF2, TVF3, TVF4 - is associated with a different timescale. In particular, TVF1 belongs to the shortest TS and TVF4 belongs to the longest TS. The results of the rate measurements on the different timescales are denoted in Figure 3 as R1, R2, R3, and R4.
It should be noted that while this specific example indicates that R1>R2>R3>R4, this need not always be true. It does hold, however, at the start of a burst. As seen in more detail later, the concept for the general case (i.e., for any order of the Ri) is similar. The behavior for both TVF4 and TVF3 is similar to that described previously with respect to Figure 2.
Figure 3 is a graph 30 illustrating the distances Δ between the TVFs. Particularly, Δ3 denotes the distance between TVF4 and TVF3 at the intersection of TVF4 and R4. Note that Δ3 does depend on the value of R4, because of the logarithmic scales. Region “2” is the region where Δ3 lies, and it is not used when determining a PV. Additionally, Δ2 and Δ1 can be similarly determined, with the only difference being that the intersection of TVF3 and R3+Δ3 determines Δ2, and the intersection of TVF2 and R2+Δ3+Δ2 determines Δ1. As seen in Figure 3, all Δs (i.e., Δ1, Δ2, and Δ3) have a corresponding forbidden region (i.e., “invalid” regions 2, 4, 6).
According to the present embodiments, a PV for an incoming packet can be determined by:
• Updating all necessary Rs and Δs;
• Getting a random bitrate r, uniformly between 0 and R1; and
• Based on the relation of r and the Ris, selecting the right region (i.e., from regions 1, 3, 5, 7 in Figure 3) and the right TVF to determine the PV. By way of example only, if R3<r<R2, then TVF2(r + Δ3 + Δ2) determines the PV to use for packet marking.
The areas marked L1-L4 in Figure 3 indicate the corresponding “valid” regions of the TVFs that the present embodiments actually use to determine the packet value (PV).
Figure 4 is a flow diagram illustrating a method 40 for determining Δi for a general case according to one embodiment of the present disclosure. As seen in Figure 4, method 40 begins with initializing the variables used in calculating Δi (box 42). Once initialization is complete, the value of i is checked (box 44). As long as i is greater than 0, method 40 computes Δi (box 46). Particularly:
• Ri’ = max(R’i+1, Ri) is computed to handle the situation when R1≥R2≥R3≥R4 does not hold;
• Δi is calculated as Δi = TVFi⁻¹(TVFi+1(RΔi+1)) - RΔi+1, where RΔi+1 = R’i+1 + Δi+1 + ... + Δk-1;
• RΔi+1 is the reference value of TVFi+1 used to calculate a PV; and
• TVFi⁻¹ determines the throughput belonging to that PV on TVFi.
According to the present embodiments, the Δis have to be updated only when the Ris change.
In a further optimization, embodiments of the present disclosure do not update the Ris that are associated with excessively long TSs for each packet arrival. Instead, these Ris are updated only if a TSi/10 period has elapsed, or when Ri*TSi/10 bits have arrived since the last update. When not all Ris are updated, i can be initialized at the index of the longest updated TS (i.e., the “j” in TSj). This optimizes the packet marker. In particular, as most timescales are likely to be above 1 second, the updates are likely to be infrequent. Additionally, to further optimize the performance of packet marking, embodiments of the present disclosure are configured to update all Ris in an offline control logic, rather than in the packet pipeline.
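The Δ computation of method 40 can be sketched as follows, assuming hypothetical TVFs of the form TVFi(x) = Vi/x (parallel straight lines on a log-log plot) so that each TVF and its inverse have closed forms; all constants and names are illustrative, not taken from the disclosure:

```python
# Illustrative, invertible TVFs: TVF_i(x) = V_i / x, indices 1..K from the
# shortest to the longest timescale, so TVF_i(x) > TVF_{i+1}(x) for V_i > V_{i+1}.
V = {1: 8e6, 2: 4e6, 3: 2e6, 4: 1e6}     # assumed example constants
K = 4

def tvf(i, x):
    return V[i] / x                       # packet value at throughput x

def tvf_inv(i, pv):
    return V[i] / pv                      # throughput at packet value pv

def compute_deltas(R):
    """Method 40 sketch: walk from the longest TS toward the shortest.

    R maps timescale index to measured bitrate; returns the Delta_i width of
    the invalid region between TVF_{i+1} and TVF_i for i = K-1 .. 1.
    """
    delta = {}
    r_prime = {K: R[K]}                   # R'_i = max(R'_{i+1}, R_i)
    ra = R[K]                             # reference value R_Delta_{i+1}
    cum = 0.0                             # running sum of already-found deltas
    for i in range(K - 1, 0, -1):
        # distance between TVF_{i+1} and TVF_i at the reference bitrate
        delta[i] = tvf_inv(i, tvf(i + 1, ra)) - ra
        cum += delta[i]
        r_prime[i] = max(r_prime[i + 1], R[i])
        ra = r_prime[i] + cum             # reference value for the next step
    return delta
```

With the constants above and measured rates of 8, 4, 2, and 1 Mbit/s, the widths come out as Δ3 = 1, Δ2 = 3, and Δ1 = 8 Mbit/s; as noted above, the result needs recomputing only when the Ris change.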
Figure 5 is a flow diagram illustrating a method 50 for determining a packet value based on the relation of the random rate r and the Ris. As seen in Figure 5, and using Figure 3 as a further visual aid, method 50 selects a desired region, the appropriate Δ offsets, and a desired TVF.
In Figure 5, method 50 begins by initializing some variables used in the computation (box 52). This initialization includes initializing the random rate r. Once the variables have been initialized, the value of r is compared with Ri (box 54). If r exceeds Ri, the value of i is decremented (box 56) and the comparison repeated (box 54). These steps continue in a loop until the value of r is determined not to exceed the value of Ri. At that point, the PV is computed as TVFi(r+Δk-1+...+Δi) (box 58).
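This selection loop can be sketched as follows, again assuming hypothetical parallel TVFs of the form TVFi(x) = Vi/x and already-computed Δ widths; all constants and names are assumptions of this sketch:

```python
import random

# Illustrative TVFs: TVF_i(x) = V_i / x, indices 1..K from shortest to longest TS.
V = {1: 8e6, 2: 4e6, 3: 2e6, 4: 1e6}
K = 4

def tvf(i, x):
    return V[i] / x

def packet_value(R, delta, r=None):
    """Method 50 sketch: draw r uniformly from (0, R_1), descend from the
    longest TS until r no longer exceeds R_i, then evaluate TVF_i shifted
    past the accumulated forbidden regions Delta_i + ... + Delta_{K-1}."""
    if r is None:
        r = random.uniform(0.0, R[1])
    i = K
    while i > 1 and r > R[i]:             # decrement i while r exceeds R_i
        i -= 1
    offset = sum(delta[j] for j in range(i, K))
    return tvf(i, r + offset)
```

Passing r explicitly makes the mapping deterministic for inspection; with r omitted, the function draws the random bitrate itself, matching the flow of boxes 52-58.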
Figure 6 illustrates the functions implemented at a packet marker device 60 according to an embodiment of the present disclosure. As seen in Figure 6, the TVFs and the TSs are first configured (box 62). Then, upon arrival of a packet (box 64), the packet marker device 60 determines the Ris and the Δis (box 66). In at least one embodiment, packet marker device 60 determines the Ris and the Δis as previously described using method 40 seen in Figure 4. Once the Ris and the Δis have been determined, the packet marker device 60 computes the packet value for the incoming packet (box 68), as previously described in method 50 of Figure 5, for example, and marks the packet with the determined packet value (box 70). The marked packet is then output by the packet marker device 60.
It should be noted that the present embodiments utilize bitrate measurement on different timescales. However, another embodiment of the present disclosure utilizes a sliding window based measurement. In these latter embodiments, the traffic that arrived during the last TS seconds is divided by the value of the TS. Further, the result of the disclosed configuration on the embodiment seen in Figure 1 will be that TVF4 determines the resource sharing of long flows. For shorter flows, the TVF corresponding to the TS closest to the flow activity will represent resource sharing compared to other flows, including long flows.
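A sliding-window rate measurement of this kind can be sketched as follows; the event-queue implementation and all names are illustrative choices, not taken from the disclosure:

```python
from collections import deque

class SlidingWindowRate:
    """Bits that arrived during the last TS seconds, divided by TS."""

    def __init__(self, ts_seconds):
        self.ts = ts_seconds
        self.arrivals = deque()           # (arrival_time, bits) pairs
        self.total_bits = 0

    def on_packet(self, now, bits):
        self.arrivals.append((now, bits))
        self.total_bits += bits

    def rate(self, now):
        # drop arrivals that have slid out of the window [now - TS, now]
        while self.arrivals and self.arrivals[0][0] < now - self.ts:
            _, bits = self.arrivals.popleft()
            self.total_bits -= bits
        return self.total_bits / self.ts
```

One such instance per timescale would supply the Ri values consumed by the marker, and the instances for longer TSs can be refreshed lazily, consistent with the rate-update optimization described earlier.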
Figures 7A-7B are flow diagrams illustrating a method 80 for managing shared resources using per-packet marking according to one embodiment of the present disclosure. Method 80 is implemented by a packet marking device and, as seen in Figure 7A, comprises assigning a plurality of TVFs to a plurality of TSs. Each TVF is assigned to a respective TS, and each TS is associated with one or more valid bitrate regions (box 82). Method 80 next calls for the packet marking device determining a plurality of measured bitrates based on the plurality of TSs (box 84), as well as a random bitrate (box 86). Method 80 then calls for the packet marking device determining one or more distances between the plurality of TVFs. In this embodiment, each distance defines an invalid bitrate region between two TVFs (box 88). Method 80 then calls for the packet marking device selecting a TVF based on the random bitrate and the one or more distances between the plurality of TVFs (box 90). In one embodiment, selecting a TVF based on the random bitrate and the one or more distances between the plurality of TVFs comprises selecting the TVF based on a measured bitrate. Such selection may comprise, in one aspect, selecting a first TVF if the random bitrate is less than the measured bitrate, and selecting a second TVF if the random bitrate is larger than the measured bitrate. So selected, method 80 then calls for determining a packet value with which to mark a received packet as a function of the selected TVF (box 92). So determined, method 80 then calls for the packet marking device to mark the received packet with the packet value (box 94) and output the packet marked with the packet value (box 96).
The packet marking device also performs other functions in accordance with method 80. For example, as seen in Figure 7B, method 80 calls for updating the plurality of measured bitrates based on the plurality of TSs (box 98). Method 80 also calls for the packet marking device to select a valid bitrate region from the one or more valid bitrate regions (box 100). The TVF that was selected previously (i.e., box 90 in Figure 7A) is associated with the selected valid bitrate region. Method 80 then calls for quantizing the TVFs into a token bucket matrix (box 102). Particularly, in this embodiment, each TVF is quantized into one or more token buckets, with each token bucket corresponding to a different maximum number of tokens. Method 80 also calls for updating the one or more distances responsive to determining that the measured bitrates comprising the one or more valid bitrate regions have changed (box 104), and
determining whether or not to update a given bitrate region (boxes 106, 108). Particularly, method 80 calls for not updating, on each packet arrival, any valid or invalid bitrate region that is associated with excessively long TSs (box 106). However, responsive to determining that a predefined time period has elapsed, or that a predefined number of bits has been received since a last update, a valid or invalid bitrate region is updated (box 108). In one embodiment, method 80 calls for updating all of the valid and invalid bitrate regions offline (box 110).
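The lazy update policy of boxes 106 and 108 — skipping per-packet updates for long-timescale regions and refreshing only after a predefined time or a predefined number of bits — might look like the following sketch, where the thresholds and names are illustrative:

```python
class LazyRegionUpdater:
    """Decides when a long-timescale bitrate region should be
    refreshed (boxes 106-108): not on every packet arrival, but only
    after min_interval seconds or min_bits received bits since the
    last update. Parameter names are illustrative assumptions."""
    def __init__(self, min_interval, min_bits):
        self.min_interval = min_interval
        self.min_bits = min_bits
        self.last_update = 0.0
        self.bits_since = 0

    def should_update(self, now, packet_bits):
        self.bits_since += packet_bits
        if (now - self.last_update >= self.min_interval
                or self.bits_since >= self.min_bits):
            # Refresh the region and reset both counters.
            self.last_update = now
            self.bits_since = 0
            return True
        return False   # box 106: skip the per-packet update
```
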
Figure 8 is a schematic diagram illustrating an exemplary packet marking device 120 configured according to embodiments of the present disclosure. As seen in Figure 8, packet marking device 120 comprises processing circuitry 122, memory circuitry 124 configured to store a control program 126, and communications circuitry 128. The processing circuitry 122 may comprise one or more microprocessors, microcontrollers, hardware circuits, firmware, or a combination thereof, and controls the operation of the packet marking device 120 as herein described. Memory circuitry 124 stores program instructions and data needed by the processing circuitry 122 to implement the present embodiments herein described. Permanent data and program instructions executed by the processing circuitry 122, such as control program 126, for example, may be stored in a non-volatile memory, such as a read only memory (ROM), electronically erasable programmable read only memory (EEPROM), flash memory, or other non-volatile memory device. A volatile memory, such as a random access memory (RAM), may be provided for storing temporary data. The memory circuitry 124 may comprise one or more discrete memory devices, or may be integrated with the processing circuitry 122. The communications interface circuitry 128 is configured to send messages to, and receive messages from, one or more other nodes in a communications network. Such communications include incoming packets to be marked according to the present embodiments, as well as the outgoing packets that have been marked according to the present embodiments.
Figure 9 illustrates the main functional components of an exemplary processing circuitry 122 for a packet marking device 120. In particular, as seen in Figure 9, processing circuitry 122 comprises a TVF and TS configuration module/unit 132, a packet receiving module/unit 134, an Ri and Di determination module/unit 136, a PV determination module/unit 138, and a marked packet sending module/unit 140. Those of ordinary skill in the art should readily appreciate that the TVF and TS configuration module/unit 132, the packet receiving module/unit 134, the Ri and Di determination module/unit 136, the PV determination module/unit 138, and the marked packet sending module/unit 140 may, according to the present embodiments, be implemented by one or more microprocessors, microcontrollers, hardware, firmware, or a combination thereof.
In operation, the TVF and TS configuration module/unit 132 is configured to determine the TVFs and the TSs, and to assign a TVF to each of the plurality of TSs, as previously described. The packet receiving module/unit 134 is configured to receive incoming packets that are to be marked according to embodiments of the present disclosure, while the Ri and Di determination module/unit 136 is configured to compute the Ri and Δi values, and to select the desired Di, as previously described. The PV determination module/unit 138 is configured to select a desired TVF to compute the PV that will be utilized to mark the incoming packet, as previously described. The marked packet sending module/unit 140 is configured to send the marked packet to a destination node, as previously described.
Embodiments further include a carrier containing a computer program, such as control program 126. This carrier may comprise one of an electronic signal, optical signal, radio signal, or computer readable storage medium.
In this regard, embodiments herein also include a computer program product (e.g., control program 126) stored on a non-transitory computer readable (storage or recording) medium (e.g., memory circuitry 124) and comprising instructions that, when executed by a processor of an apparatus, cause the apparatus (e.g., a packet marking device 120) to perform as described above.
Embodiments further include a computer program product, such as control program 126, comprising program code portions for performing the steps of any of the embodiments herein when the computer program product is executed by a computing device, such as packet marking device 120. This computer program product may be stored on a computer readable recording medium. Additionally, the packet marker configured according to the present embodiments operates per traffic aggregate, and no coordination between separate entities is required. Therefore, a packet marking device configured in accordance with the present embodiments can be implemented in the cloud.
The present disclosure configures a packet marking node to implement functions not implemented in prior art devices. For example, a packet marking node configured according to the present embodiments can control resource sharing continuously based on throughput fairness on several timescales. Additionally, multiple TVFs are configured to represent resource sharing on multiple timescales, with the TVFs being configured based on a relation between them. Additionally, the present embodiments measure bitrates on multiple timescales in the profiler, and determine the D values and the valid regions of the TVFs based on that information. The present embodiments also determine where the Δs are positioned based on the distance between TVFs at selected bitrates, as well as the packet value based on the random bitrate. As previously described, the random bitrate is between zero and the rate measurement on the shortest timescale. The present embodiments also configure a packet marking node to select the right TVF and the right Δ to add to the random bitrate r, and further provide solutions for optimizing rate measurements.
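As one possible realization of the multi-timescale measurement summarized above, a profiler could maintain one exponentially weighted rate estimate per timescale; the smoothing rule below is an illustrative assumption of the editor, not the disclosed measurement method:

```python
class MultiTimescaleProfiler:
    """Per-packet bitrate estimates, one per timescale. A longer
    timescale receives a smaller smoothing weight, so its estimate
    moves more slowly. The weighting rule is illustrative only."""
    def __init__(self, timescales):
        self.timescales = timescales          # e.g. [0.1, 1.0, 10.0] seconds
        self.rates = [0.0] * len(timescales)  # estimated bitrates (bits/s)
        self.last_arrival = None

    def on_packet(self, bits, now):
        if self.last_arrival is not None:
            dt = max(now - self.last_arrival, 1e-9)
            inst = bits / dt                  # instantaneous rate over this gap
            for i, ts in enumerate(self.timescales):
                a = min(dt / ts, 1.0)         # smoothing weight for this TS
                self.rates[i] = (1.0 - a) * self.rates[i] + a * inst
        self.last_arrival = now
```

The shortest-timescale estimate then bounds the random bitrate r, while the longer-timescale estimates position the valid and invalid bitrate regions.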

Claims

CLAIMS
What is claimed is:
1. A method (80) of managing shared resources using per-packet marking, the method comprising:
assigning (82) a plurality of throughput-value functions (TVFs) to a plurality of timescales (TSs), with each TVF being assigned to a respective TS, and wherein each TS is associated with one or more valid bitrate regions;
determining (84) a plurality of measured bitrates based on the plurality of TSs;
determining (86) a random bitrate;
determining (88) one or more distances between the plurality of TVFs, wherein each distance defines an invalid bitrate region between two TVFs;
selecting (90) a TVF based on the random bitrate and the one or more distances between the plurality of TVFs;
determining (92) a packet value with which to mark a received packet as a function of the selected TVF;
marking (94) the received packet with the packet value; and
outputting (96) the packet marked with the packet value.
2. The method of claim 1 wherein each TVF relates a plurality of packet values to bitrate throughput, and wherein both the packet values and the bitrate throughput are on a logarithmic scale.
3. The method of any of the previous claims further comprising updating (98) the plurality of measured bitrates based on the plurality of TSs.
4. The method of any of the previous claims wherein the random bitrate is a randomly selected throughput value between 0 and a bitrate measured on a shortest TS.
5. The method of any of the previous claims further comprising selecting (100) a valid bitrate region from the one or more valid bitrate regions.
6. The method of claim 5 wherein the selected TVF is associated with the selected valid bitrate region.
7. The method of claim 1 further comprising quantizing (102) the TVFs into a token bucket matrix, with each TVF being quantized into one or more token buckets (BS11, BS12, BS21, BS22), and with each token bucket corresponding to a different maximum number of tokens.
8. The method of claim 7 wherein selecting a TVF based on the random bitrate and the one or more distances between the plurality of TVFs further comprises selecting the TVF based on a measured bitrate.
9. The method of any of claims 7 and 8 wherein selecting the TVF based on the measured bitrate comprises:
selecting a first TVF if the random bitrate is less than the measured bitrate; and
selecting a second TVF if the random bitrate is larger than the measured bitrate.
10. The method of claim 1 wherein the measured bitrates in a first valid bitrate region are less than the measured bitrates in a second valid bitrate region.
11. The method of any of the previous claims further comprising updating (104) the one or more distances responsive to determining that the measured bitrates comprising the one or more valid bitrate regions have changed.
12. The method of any of the previous claims further comprising not updating (106) any valid or invalid bitrate region that is associated with excessively long TSs for each packet arrival.
13. The method of claim 12 further comprising updating (108) the valid or invalid bitrate region responsive to determining that a predefined time period has elapsed.
14. The method of claim 12 further comprising updating (108) the valid or invalid bitrate region responsive to determining that a predefined number of bits has been received since a last update.
15. The method of claim 12 further comprising updating (110) all of the valid and invalid bitrate regions offline.
16. The method of any of the preceding claims wherein the resource being managed is bandwidth.
17. A network node (120) configured to manage resources using per-packet marking, the node comprising:
communications circuitry (128) configured to send data packets to, and receive data packets from, one or more other nodes via a communications network; and
processing circuitry (122) operatively connected to the communications circuitry and configured to:
assign (82) a plurality of throughput-value functions (TVFs) to a plurality of timescales (TSs), with each TVF being assigned to a respective TS, and wherein each TS is associated with one or more valid bitrate regions;
determine (84) a plurality of measured bitrates based on the plurality of TSs;
determine (86) a random bitrate;
determine (88) one or more distances between the plurality of TVFs, wherein each distance defines an invalid bitrate region between two TVFs;
select (90) a TVF based on the random bitrate and the one or more distances between the plurality of TVFs;
determine (92) a packet value with which to mark a received data packet as a function of the selected TVF;
mark (94) the received data packet with the packet value; and
output (96) the data packet marked with the packet value via the communications circuitry.
18. The network node of claim 17 wherein each TVF relates a plurality of packet values to bitrate throughput, and wherein both the packet values and the bitrate throughput are on a logarithmic scale.
19. The network node of any of claims 17-18 wherein the processing circuitry is further configured to update (98) the plurality of measured bitrates based on the plurality of TSs.
20. The network node of any of claims 17-19 wherein the random bitrate is a randomly selected throughput value between 0 and a bitrate measured on a shortest TS.
21. The network node of any of claims 17-20 wherein the processing circuitry is further configured to select (100) a valid bitrate region from the one or more valid bitrate regions.
22. The network node of claim 21 wherein the selected TVF is associated with the selected valid bitrate region.
23. The network node of claim 17 wherein the processing circuitry is further configured to quantize (102) the TVFs into a token bucket matrix, with each TVF being quantized into one or more token buckets, and with each token bucket corresponding to a different maximum number of tokens.
24. The network node of claim 23 wherein to select a TVF based on the random bitrate and the one or more distances between the plurality of TVFs, the processing circuitry is further configured to select the TVF based on a measured bitrate.
25. The network node of any of claims 23 and 24 wherein to select the TVF based on the measured bitrate, the processing circuitry is further configured to:
select a first TVF if the random bitrate is less than the measured bitrate; and
select a second TVF if the random bitrate is larger than the measured bitrate.
26. The network node of claim 17 wherein the measured bitrates in a first valid bitrate region are less than the measured bitrates in a second valid bitrate region.
27. The network node of any of claims 17-26 wherein the processing circuitry is further configured to update (104) the one or more distances responsive to determining that the measured bitrates comprising the one or more valid bitrate regions have changed.
28. The network node of any of claims 17-27 wherein the processing circuitry is further configured to not update (106) any valid or invalid bitrate region that is associated with excessively long TSs for each packet arrival.
29. The network node of claim 28 wherein the processing circuitry is further configured to update (108) the valid or invalid bitrate region responsive to determining that a predefined time period has elapsed.
30. The network node of claim 28 wherein the processing circuitry is further configured to update (108) the valid or invalid bitrate region responsive to determining that a predefined number of bits has been received since a last update.
31. The network node of claim 28 wherein the processing circuitry is further configured to update (110) all of the valid and invalid bitrate regions offline.
32. The network node of any of claims 17-31 wherein the resource being managed is bandwidth.
33. A non-transitory computer readable medium (124) storing a control application (126) thereon, the control application comprising instructions that, when executed by processing circuitry (122) of a network node (120) configured to manage resources using per-packet marking, cause the network node to:
assign (82) a plurality of throughput-value functions (TVFs) to a plurality of timescales (TSs), with each TVF being assigned to a respective TS, and wherein each TS is associated with one or more valid bitrate regions;
determine (84) a plurality of measured bitrates based on the plurality of TSs;
determine (86) a random bitrate;
determine (88) one or more distances between the plurality of TVFs, wherein each distance defines an invalid bitrate region between two TVFs;
select (90) a TVF based on the random bitrate and the one or more distances between the plurality of TVFs;
determine (92) a packet value with which to mark a received data packet as a function of the selected TVF;
mark (94) the received data packet with the packet value; and
output (96) the data packet marked with the packet value via the communications circuitry.
34. A system for managing resources using per-packet marking, the system comprising:
a network node (120) configured to:
assign (82) a plurality of throughput-value functions (TVFs) to a plurality of timescales (TSs), with each TVF being assigned to a respective TS, and wherein each TS is associated with one or more valid bitrate regions;
determine (84) a plurality of measured bitrates based on the plurality of TSs;
determine (86) a random bitrate;
determine (88) one or more distances between the plurality of TVFs, wherein each distance defines an invalid bitrate region between two TVFs;
select (90) a TVF based on the random bitrate and the one or more distances between the plurality of TVFs;
determine (92) a packet value with which to mark a received data packet as a function of the selected TVF;
mark (94) the received data packet with the packet value; and
output (96) the data packet marked with the packet value via the communications circuitry.
35. A computer program (126), comprising instructions which, when executed on at least one processor (122) of a device (120), cause the at least one processor to carry out the method according to any one of claims 1-16.
PCT/IB2020/053758 2019-05-14 2020-04-21 Multi-timescale packet marker WO2020229905A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US17/606,579 US11695703B2 (en) 2019-05-14 2020-04-21 Multi-timescale packet marker

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US201962847497P 2019-05-14 2019-05-14
US62/847,497 2019-05-14

Publications (1)

Publication Number Publication Date
WO2020229905A1 true WO2020229905A1 (en) 2020-11-19

Family

ID=70476278

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/IB2020/053758 WO2020229905A1 (en) 2019-05-14 2020-04-21 Multi-timescale packet marker

Country Status (2)

Country Link
US (1) US11695703B2 (en)
WO (1) WO2020229905A1 (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2022153142A1 (en) * 2021-01-14 2022-07-21 Telefonaktiebolaget Lm Ericsson (Publ) Simple hierarchical quality of service (hqos) marking
WO2022238773A1 (en) * 2021-05-10 2022-11-17 Telefonaktiebolaget Lm Ericsson (Publ) Packet processing in a computing device using multiple tables
US11695703B2 (en) 2019-05-14 2023-07-04 Telefonaktiebolaget Lm Ericsson (Publ) Multi-timescale packet marker

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9948563B2 (en) 2013-05-23 2018-04-17 Telefonaktiebolaget Lm Ericsson (Publ) Transmitting node, receiving node and methods therein
WO2018086713A1 (en) * 2016-11-14 2018-05-17 Telefonaktiebolaget Lm Ericsson (Publ) Initial bitrate selection for a video delivery session

Family Cites Families (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9521177B2 (en) * 2013-09-11 2016-12-13 Cisco Technology, Inc. Network-based adaptive rate limiting
US10015222B2 (en) * 2013-09-26 2018-07-03 Arris Canada, Inc. Systems and methods for selective retrieval of adaptive bitrate streaming media
CA2973101A1 (en) * 2015-01-08 2016-07-14 Arris Enterprises Llc Server-side adaptive bit rate control for dlna http streaming clients
WO2018112657A1 (en) * 2016-12-21 2018-06-28 Dejero Labs Inc. Packet transmission system and method
US11019126B2 (en) * 2017-06-23 2021-05-25 Nokia Solutions And Networks Oy Quality-of-experience for adaptive bitrate streaming
WO2019096370A1 (en) 2017-11-14 2019-05-23 Telefonaktiebolaget Lm Ericsson (Publ) Method and first network node for handling packets
EP3721595B1 (en) * 2017-12-04 2023-04-26 Telefonaktiebolaget LM Ericsson (publ) Automatic provisioning of streaming policies for video streaming control in cdn
WO2019141380A1 (en) 2018-01-22 2019-07-25 Telefonaktiebolaget Lm Ericsson (Publ) Probabilistic packet marking with fast adaptation mechanisms
EP3791668A1 (en) * 2018-05-08 2021-03-17 IDAC Holdings, Inc. Methods for logical channel prioritization and traffic shaping in wireless systems
US10728180B2 (en) * 2018-08-21 2020-07-28 At&T Intellectual Property I, L.P. Apparatus, storage medium and method for adaptive bitrate streaming adaptation of variable bitrate encodings
WO2020074125A1 (en) 2018-10-12 2020-04-16 Telefonaktiebolaget Lm Ericsson (Publ) Design of probabilistic service level agreements (sla)
US11695703B2 (en) 2019-05-14 2023-07-04 Telefonaktiebolaget Lm Ericsson (Publ) Multi-timescale packet marker


Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
"Towards a Congestion Control-Independent Core-Stateless AQM", ANRW '18 PROCEEDINGS OF THE APPLIED NETWORKING RESEARCH WORKSHOP, pages 84 - 90
NADAS SZILVESZTER ET AL: "Per Packet Value: A Practical Concept for Network Resource Sharing", 2016 IEEE GLOBAL COMMUNICATIONS CONFERENCE (GLOBECOM), IEEE, 4 December 2016 (2016-12-04), pages 1 - 7, XP033058848, DOI: 10.1109/GLOCOM.2016.7842125 *
PHILLIPA GILL ET AL: "Towards core-stateless fairness on multiple timescales", PROCEEDINGS OF THE APPLIED NETWORKING RESEARCH WORKSHOP, 22 July 2019 (2019-07-22), New York, NY, USA, pages 30 - 36, XP055709624, ISBN: 978-1-4503-6848-3, DOI: 10.1145/3340301.3341124 *
SZILVESZTER NÁDAS ET AL: "Multi timescale bandwidth profile and its application for burst-aware fairness", 19 March 2019 (2019-03-19), XP055709593, Retrieved from the Internet <URL:https://arxiv.org/pdf/1903.08075.pdf> [retrieved on 20200629] *


Also Published As

Publication number Publication date
US20220224652A1 (en) 2022-07-14
US11695703B2 (en) 2023-07-04


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20722660

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20722660

Country of ref document: EP

Kind code of ref document: A1