WO2022208135A1 - Marker graph for hqos - Google Patents

Marker graph for HQoS

Info

Publication number
WO2022208135A1
WO2022208135A1 (PCT/IB2021/052666)
Authority
WO
WIPO (PCT)
Prior art keywords
rate
packet
node
marker
flows
Prior art date
Application number
PCT/IB2021/052666
Other languages
French (fr)
Inventor
Szilveszter NÁDAS
Sandor LAKI
Gergo GOMBOS
Ferenc FEJES
Original Assignee
Telefonaktiebolaget Lm Ericsson (Publ)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Telefonaktiebolaget Lm Ericsson (Publ) filed Critical Telefonaktiebolaget Lm Ericsson (Publ)
Priority to PCT/IB2021/052666 priority Critical patent/WO2022208135A1/en
Priority to EP21717544.7A priority patent/EP4315793A1/en
Priority to US18/273,887 priority patent/US20240098028A1/en
Publication of WO2022208135A1 publication Critical patent/WO2022208135A1/en

Classifications

    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04L - TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L47/00 - Traffic control in data switching networks
    • H04L47/10 - Flow control; Congestion control
    • H04L47/24 - Traffic characterised by specific attributes, e.g. priority or QoS
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04L - TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L45/00 - Routing or path finding of packets in data switching networks
    • H04L45/302 - Route determination based on requested QoS

Definitions

  • the present disclosure relates generally to resource sharing among a plurality of packet flows and, more particularly, to a packet marking for implementing Hierarchical Quality of Service (HQoS) to control resource sharing and quality of service (QoS) for a plurality of packet flows.
  • HQoS Hierarchical Quality of Service
  • QoS quality of service
  • Network slicing is a solution for sharing resources between operators that can also accommodate the widely varying Quality of Service (QoS) requirements of different users.
  • QoS Quality of Service
  • the general idea underlying network slicing is to separate traffic into multiple logical networks that share the same physical infrastructure. Each logical network is designed to serve a specific purpose and comprises all the network resources required for that specific purpose. Network slices can be implemented for each operator and for each service provided by the operator.
  • the heterogenous traffic mix comprising different flows for different users carried by different network operators and with different QoS requirements poses a challenge for access aggregation networks (AANs).
  • AANs access aggregation networks
  • the network needs to ensure that network resources are shared fairly between different flows while maintaining the required QoS for each flow. Without some form of direct resource sharing control, the result will be unfairness in the treatment of different flows.
  • TCP Transmission Control Protocol
  • RTTs Round Trip Times
  • Static reservation traditionally requires defining in advance the bitrate share of each user’s combined traffic.
  • HQoS Hierarchical Quality of Service
  • Scheduling, a technique for resource sharing and QoS management, can implement a richer and more complex set of resource sharing policies.
  • HQoS uses a scheduler and many queues to implement and enforce a resource sharing policy among different traffic aggregates (TAs) and among different flows within a TA.
  • TAs traffic aggregates
  • the HQoS approach organizes managed elements of the network into a hierarchy and applies QoS rules at each level of the hierarchy in order to create more elaborate, refined, and/or sophisticated QoS solutions for shared resource management.
  • HQoS resource sharing can be defined among several TAs at different hierarchical levels, e.g., among operators, network slices, users and subflows of a user.
  • HQoS can also be used to realize statistical multiplexing of a communication link.
  • HQoS is complex and requires configuration at each bottleneck in a network.
  • 5G Fifth Generation
  • with the deployment of Fifth Generation (5G) networks and optical fiber for the last hop, bottlenecks will become more likely at network routers.
  • the traffic at these routers is heterogenous considering congestion control mechanisms and round trip time (RTT).
  • RTT round trip time
  • the traffic mix is also constantly changing. Controlling resource sharing at these bottlenecks can significantly improve network performance and perceived QoS.
  • Packet marking involves adding information to a packet for potential use by downstream devices and/or processing. For example, an edge router may use packet marking to insert a packet value (PV) into a packet that indicates that packet’s importance in the traffic mix at the edge of the network. The PV may then be used by schedulers in other network nodes along the path traversed by the packet to ensure that the packet is prioritized based on its PV as it traverses the network towards its destination. Packet marking has proven to be a useful technique to enable effective bandwidth sharing control and traffic congestion avoidance within a network.
  • PV packet value
  • HPPV Hierarchical Per Packet Values
  • a related application titled “HQoS Marking For Many Subflows” discloses a method of packet marking for a HQoS scheme that ensures weighted fairness among a plurality of subflows in a TA. This application is incorporated herein in its entirety by reference. Currently, there is no known method of marking packets in a manner that ensures weighted fairness among different flows at more than two hierarchical layers.
  • the present disclosure relates generally to packet marking for a Hierarchical Quality of Service (HQoS) to control resource sharing among a plurality of packet flows with differentiated services.
  • a packet marker at a single point (e.g., a gateway) performs the packet marking.
  • the HQoS policy is then realized by using simple PPV schedulers at the bottlenecks in the network.
  • the marker graph includes a source node, a plurality of intermediate nodes corresponding to the SP and WFQ marker components, and a marker node.
  • the intermediate nodes are referred to herein as rate transformation nodes.
  • the source node of the marker graph determines a random bitrate for a packet flow. That random bitrate is routed through the marker graph from the source node through one or more rate transformation nodes to the marker node. The random rate is transformed at each rate transformation node according to the existing WFQ and SP components.
  • the marker node uses the transformed rate received as input to the final marker node to determine the packet value.
  • a first aspect of the disclosure comprises methods of marking packets with a packet value to implement HQoS.
  • the method comprises obtaining an Aggregate Throughput Value Function (ATVF) that maps throughput values to packet values for a plurality of packet flows.
  • ATVF Aggregate Throughput Value Function
  • the method further comprises obtaining a marker graph that encodes resource sharing policies for a Hierarchical Quality of Service (HQoS) hierarchy for the plurality of packet flows as sequences of rate transformations.
  • Each sequence of rate transformations corresponds to a path of one of said packet flows through the marker graph from a source node through one or more rate transformation nodes to a marker node.
  • the method further comprises receiving a packet associated with one of said packet flows and marking the packet with a packet value based on a selected path through the marker graph and the ATVF.
  • a second aspect of the disclosure comprises a network node configured to perform packet marking to implement HQoS.
  • the network node is configured to obtain an Aggregate Throughput Value Function (ATVF) that maps throughput values to packet values for a plurality of packet flows.
  • ATVF Aggregate Throughput Value Function
  • the network node is further configured to obtain a marker graph that encodes resource sharing policies for a Hierarchical Quality of Service (HQoS) hierarchy for the plurality of packet flows as sequences of rate transformations.
  • HQoS Hierarchical Quality of Service
  • the network node is further configured to receive a packet associated with one of said packet flows and marking the packet with a packet value based on a selected path through the marker graph and the ATVF.
  • a third aspect of the disclosure comprises a network node configured to perform packet marking to implement HQoS.
  • the network node comprises interface circuitry for receiving and sending packets and processing circuitry.
  • the processing circuitry is configured to obtain an Aggregate Throughput Value Function (ATVF) that maps throughput values to packet values for a plurality of packet flows.
  • the processing circuitry is further configured to obtain a marker graph that encodes resource sharing policies for a Hierarchical Quality of Service (HQoS) hierarchy for the plurality of packet flows as sequences of rate transformations.
  • HQoS Hierarchical Quality of Service
  • Each sequence of rate transformations corresponds to a path of one of said packet flows through the marker graph from a source node through one or more rate transformation nodes to a marker node.
  • the processing circuitry is further configured to receive a packet associated with one of said packet flows and marking the packet with a packet value based on a selected path through the marker graph and the ATVF.
  • a fourth aspect of the disclosure comprises a computer program comprising executable instructions that, when executed by a processing circuitry in a network node, causes the network node to perform the method according to the first aspect.
  • a fifth aspect of the disclosure comprises a carrier containing a computer program according to the fourth aspect, wherein the carrier is one of an electronic signal, optical signal, radio signal, or computer readable storage medium.
  • Figure 1 illustrates an AAN implementing HQoS as herein described.
  • Figure 2 schematically illustrates a HQoS scheduler with multiple hierarchical levels.
  • FIG. 3 illustrates TVFs for four packet flows with different QoS requirements.
  • Figure 4 illustrates an exemplary HQoS scheduler for six packet flows.
  • Figure 5 illustrates a marker graph corresponding to the HQoS scheduler in Figure 4.
  • Figure 6 illustrates exemplary pseudocode for packet marking at a single point to implement HQoS.
  • Figure 7 illustrates exemplary pseudocode for rate transformation by a rate transformation node implementing SP scheduling.
  • Figure 8 illustrates exemplary pseudocode for rate transformation by a rate transformation node implementing WFQ scheduling.
  • Figure 9 illustrates exemplary pseudocode for updating a rate transformation node implementing SP scheduling.
  • Figure 10 illustrates exemplary pseudocode for updating a rate transformation node implementing WFQ scheduling.
  • Figures 11A - 11D illustrate one example of resource management using a marker graph.
  • Figure 12 illustrates a method of packet marking according to an exemplary embodiment.
  • Figure 13 is an exemplary network node configured to implement packet marking as herein described.
  • Figure 14 is an exemplary computing device configured to implement packet marking as herein described.
  • AAN access aggregation network
  • HQoS Hierarchical Quality of Service
  • packets comprising a multitude of different packet flows from a multitude of different sources 12 and with varying QoS requirements arrive at a gateway 20 of the AAN 10.
  • the packets may be any of a variety of different types. Examples of the most common types of packets include Internet Protocol (IP) packets (e.g., Transmission Control Protocol (TCP) packets), Multiprotocol Label Switching (MPLS) packets, and/or Ethernet packets.
  • IP Internet Protocol
  • TCP Transmission Control Protocol
  • MPLS Multiprotocol Label Switching
  • Ethernet packets may comprise one or more fields for storing values used by the network 10 in performing packet processing (e.g., deciding whether to forward, queue, or drop packets). These fields may be in either a header or payload section of the packet, as may be appropriate.
  • the gateway 20 classifies the packets and a packet marker 30 assigns a packet value (PV) to each packet according to its relative importance.
  • a packet marker 30 may be located at the gateway 20. Also, there may be multiple packet markers 30 located at different network nodes.
  • the packets are forwarded through a series of switches or routers 40 towards their respective destinations. When congestion is detected, the switches 40 at the bottlenecks implement a congestion control mechanism to drop some packets in order to alleviate the congestion.
  • a scheduler at the switch determines what packets to forward and what packets to drop based on the PV assigned to the packet by the packet marker 30. Generally, packets with a higher PV are less likely to be dropped than packets with a lower PV.
  • HQoS is a technology for implementing complex resource sharing policies in a network through a queue scheduling mechanism.
  • the HQoS scheme ensures that all packet flows will be allocated resources during periods of congestion according to policies established by the operator so that packet flows of relatively low importance are not starved during periods of congestion by packet flows of higher importance.
  • FIG. 2 illustrates a typical implementation of an HQoS scheduler.
  • the HQoS scheduler comprises a hierarchy of physical queues and schedulers configured to share resources among TAs at different levels.
  • one or more schedulers implement resource sharing policies defined by the network operator for TAs of the same level.
  • a level may comprise TAs for different operators, different leased transports, different network slices, different users or different subflows of a user.
  • the scheduler implements policies for sharing resources among different operators.
  • the resources for two operators, denoted Operator A and Operator B, can be shared at a 3-to-1 ratio.
  • Each operator may have several leased transports that are assigned shares of the overall resources assigned to that operator.
  • a scheduler at the “leased transport” level implements policies for resource sharing between leased transports.
  • a leased transport may, in turn, be virtualized to carry traffic for a number of network slices.
  • a scheduler at the “network slice” level implements policies for sharing resources among different network slices.
  • Each network slice in turn, may serve a plurality of subscribers (or groups thereof). Resource sharing among classes of subscribers of each of the network slices is implemented by a scheduler at the “subscriber” level.
  • a scheduler at the “packet flow” level may implement resource sharing among the different flows of each subscriber in order to allocate different amounts of resource to flows of different types (e.g., to provide more resources to high priority flows relative to low priority flows).
  • a gold class of subscribers may be given a greater share of the shared resources than a silver class of subscribers, and streaming traffic may be given a greater share of the shared resources than web traffic.
  • the scheduler at each level implements a queue scheduling algorithm to manage the resource sharing and schedule packets in its queues.
  • the queue scheduling mechanism provides packets of a certain type with desired QoS characteristics, such as bandwidth, delay, jitter, and loss.
  • the queue scheduling mechanism typically operates only when congestion occurs.
  • Commonly used queue scheduling mechanisms include Weighted Fair Queuing (WFQ), Weighted Round Robin (WRR), and Strict Priority (SP).
  • WFQ is used to allocate resources to queues taking part in the scheduling according to the weights of the queues.
  • SP allocates resources based on the absolute priority of the queues.
  • any of the schedulers may apportion the shared resources in any manner that may be deemed appropriate. For example, while the resources were discussed above as being shared between two operators at a ratio of three-to-one, one of the operators may operate two network slices, and the HQoS resource sharing scheme may define resource sharing between those two slices evenly (i.e., at a ratio of one-to-one). Subsequent schedulers may then further apportion the shared resources. For example, the HQoS resource sharing scheme may define resource sharing for both gold and silver subscribers at a ratio of two-to-one, respectively. Further still, the HQoS resource sharing scheme may define resource sharing for web flows and download flows at a ratio of two-to-one.
  • each WFQ may be designed to make a suitable apportionment of the shared resources as may be appropriate at its respective level of the hierarchy.
  • packets are marked with a PV that expresses the relative importance of the packet flow to which the packet belongs and the resource sharing policies between different TAs are determined by the PV assigned to the packets.
  • the assigned PVs are considered by the scheduler at a bottleneck in the network to determine what packets to forward. Generally, packets with higher value are more likely to be forwarded during periods of congestion while packets with lower value are more likely to be delayed or dropped.
  • the PPV approach uses Throughput Value Functions (TVFs) to determine resource sharing policies among different TAs.
  • TVFs Throughput Value Functions
  • a TVF is used by the packet marker to match a throughput value to a PV.
  • Figure 3 illustrates examples of four TVFs for gold users (G), silver users (S), voice (V) and background (B).
  • G gold users
  • S silver users
  • V voice
  • B background
  • Each TVF defines a relation between PV and throughput. In each TVF, higher throughput maps to lower PVs.
  • the packet marker 30 at the gateway 20 or other network node uses the TVF to apply a PV to each packet and the scheduler at a bottleneck uses the PV to determine what packets to schedule. For each packet, the packet marker selects a uniform random rate r between 0 and the maximum rate for the TA. The packet marker then determines the PV based on the selected rate r and the TVF for the TA. The assignment of a PV based on a uniform random rate ensures that all TAs will be allocated resources during periods of congestion. Packets marked with a gold TVF, for example, will not always receive a higher PV than packets marked with a silver TVF, but will have a greater chance of receiving a higher PV.
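The marking step described above can be sketched as follows. The linear TVF shapes, rates, and PV scales are illustrative assumptions only; the disclosure does not specify concrete functions.

```python
import random

def make_tvf(max_rate, max_pv):
    """Build a simple decreasing TVF: higher throughput maps to a lower PV.
    A linear shape is an assumption for illustration only."""
    def tvf(rate):
        return max(0.0, max_pv * (1.0 - rate / max_rate))
    return tvf

# Hypothetical TVFs for a "gold" and a "silver" traffic aggregate.
tvf_gold = make_tvf(max_rate=100.0, max_pv=1000.0)
tvf_silver = make_tvf(max_rate=100.0, max_pv=500.0)

def mark_packet(tvf, max_rate):
    """Select a uniform random rate r in [0, max_rate] and map it to a PV."""
    r = random.uniform(0.0, max_rate)
    return tvf(r)

# Gold packets tend to receive higher PVs than silver packets, but not
# always, so lower-importance flows are not starved during congestion.
pv = mark_packet(tvf_gold, 100.0)
```

Because the rate is drawn uniformly at random per packet, the PV distribution of a TA reflects its whole TVF rather than a single fixed priority.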
  • a core stateless resource sharing mechanism called Hierarchical Per Packet Values (HPPV) implements HQoS by only modifying packet marking algorithms without any changes to the schedulers at the switches or routers 40. No knowledge of the resource sharing policies is required by the scheduler.
  • HPPV Hierarchical Per Packet Values
  • HQoS can be implemented with simple PPV schedulers that determine the handling of a packet based only on its PV.
  • An advantage of this approach is that new policies can be introduced by reconfiguring packet marking without making any changes to the scheduler.
  • Another core stateless resource sharing solution is disclosed in a related application titled "HQoS Marking For Many Subflows", which is filed on the same date as this application.
  • This application discloses a method of packet marking for a HQoS scheme that ensures weighted fairness among a plurality of subflows in a TA.
  • This application is incorporated herein in its entirety by reference.
  • an aggregate TVF is defined for the aggregate of all subflows in a TA and packets from multiple subflows are marked at a single point based on the aggregate TVF.
  • the packet marker takes a uniform random rate r as an input and computes an adjusted rate based on normalized weights for each subflow.
  • the adjusted rate is then used with the aggregate TVF to determine the PV of the packet.
  • One aspect of the present disclosure is to provide a simple solution for implementing any number of policy levels in a HQoS hierarchy by marking packets at a single location.
  • a packet marker at a single point (e.g., a gateway) performs the packet marking.
  • the HQoS policy is then realized by using simple PPV schedulers at the bottlenecks in the network.
  • the marker graph 60 includes a source node 62, a plurality of intermediate nodes corresponding to the SP and WFQ marker components, and a marker node 66.
  • the intermediate nodes are referred to herein as rate transformation nodes 64 for reasons that will become apparent.
  • the source node 62 of the graph determines a random bitrate for a packet in a packet flow. That random bitrate is routed through the marker graph 60 from the source node 62 through one or more rate transformation nodes 64 to the marker node 66.
  • the random bitrate is transformed at each rate transformation node 64 according to the existing WFQ and SP components.
  • the marker node 66, also referred to as the TVF node, uses the transformed rate received as input to the marker node 66 to determine the PV.
  • a HQoS scheduler 50 with a hierarchy of WFQ and SP schedulers can be translated into a marker graph 60 by replacing each scheduler with a marker component, which is represented in the graph as a rate transformation node 64.
  • Figures 4 and 5 depict an example of a traditional HQoS scheduler 50 and its analogous HQoS marker graph 60. When comparing the two, it can be observed that the connections in the traditional HQoS scheduler 50 are the same as the connections in the marker graph 60. It can be also observed that each unique path through the marker graph 60 corresponds to a packet flow path in the HQoS scheduler 50 and represents a sequence of rate transformation to be applied to the packet flow. This sequence of rate transformations encodes the policies applied to that packet flow. As an example, the shaded components and dotted lines in Figures 4 and 5 show the path of a packet belonging to subflow 3.
  • the marker graph 60 comprises a source node 62, a plurality of rate transformation nodes 64 and a marker node 66 as previously described.
  • the source node 62 selects a uniform random rate r for each packet arriving at the network node (e.g. gateway 20).
  • the rate transformation nodes 64 are configured as either SP nodes or WFQ nodes depending on the operator policy. In this example, there are four rate transformation nodes 64, each corresponding to one of the schedulers shown in Figure 4.
  • the rate transformation nodes 64 are denoted as WFQ(1), WFQ(2), SP(3) and WFQ(4).
  • the output of the final rate transformation node 64, WFQ(4), is connected to the marker node 66, which is preconfigured with the ATVF for the six packet flows.
  • the marker node 66 is configured to mark the packet with a PV based on the transformed rate received from WFQ(4) and the ATVF.
  • Figure 5 also illustrates a selection node 70 and measurement node 70 for completeness.
  • the selection node 70 determines a path through the marker graph 60 through which the uniform random rate will be propagated based on the packet flow ID.
  • Each incoming packet flow maps to a particular input at a particular rate transformation node 64.
  • the selection node 70 selects the rate transformation node 64 and the input to which the uniform random rate is sent based on the packet flow ID.
  • the measurement node 70 is not directly involved in packet marking, but rather is used to update the states of the rate transformation nodes 64 periodically or from time to time.
  • the measurement node obtains measurements S_i for each of the incoming packet flows and forwards these measurements to respective inputs at respective rate transformation nodes 64 during a procedure to update the states of the rate transformation nodes 64.
  • the update procedure is described in more detail below.
  • Each rate transformation node 64 takes a rate value as input and calculates a transformed rate as output.
  • the probability that a TA i is selected (i.e., that the packet to be marked belongs to this traffic aggregate) is proportional to its rate, and the input rates for each TA i are chosen uniformly at random from the range [0, S_i], where S_i is the instant rate of TA i.
  • the output rates of WFQ or SP components follow a uniform distribution over the range [0, S], where S represents the total rate of all the traffic flowing through the given component.
  • this traffic can also be considered as a traffic aggregate at a higher level and other WFQ or SP components can be applied on it.
  • HQoS policies are encoded as a sequence of rate transformations and the marker graph 60 translates the HQoS policy into a single PV.
  • the starting node represents a random rate selection from the range [0, S_i] and this node has n outgoing edges to rate transformation nodes 64.
  • the HQoS packet marker routes a uniform random rate selected for a packet through the marker graph 60 along a path selected based on the packet flow identifier and transforms the random rate at each rate transformation node 64 along the selected path.
  • the packet is marked using an aggregate TVF for the entire set of packet flows.
  • the hierarchical schedulers at the bottlenecks in the network can be replaced by simple PPV schedulers.
  • HQoS packet marking starts by identifying the packet flow to which the packet belongs. Then a random rate r is determined, which is a uniform random number in [0, S_i].
  • This rate r is transformed (one or more times) by routing it through the marker graph 60 according to the packet flow identifier (ID) until the marker node 66 of the graph is reached and the PV is determined. Dashed lines in Figure 5 show how r is repeatedly transformed for packet flow 3.
  • the local index l at each rate transformation node 64 is determined based on the input port of the node at which r arrives. Note that the numbering of the input ports l is the same as for the HQoS scheduler 50.
  • the marker node 66 marks the packet with a PV based on the final transformed rate output by the last marker component, e.g., rate transformation node 64.
  • Figures 6 - 10 illustrate exemplary pseudocode for implementing a packet marker using a marker graph 60 as herein described.
  • the variables used have the following meanings:
  • n is the number of flows;
  • m is an index for the nodes in the marker graph 60;
  • B_j indicates the starting point of region j;
  • S_j indicates the width of region j;
  • R is a region determination matrix;
  • W is a normalized weight matrix;
  • o is a reordering vector indicating a rank of the subflows at a node m in the marker graph 60.
  • Processing begins at the source node 62 (line 2).
  • a random rate r is selected for a packet arriving at the source node 62 (line 3) and the next node in the path is computed based on the packet flow index i or other packet flow ID (line 4).
  • Lines 5-7 describe the rate transformations performed as rate r is propagated through the marker graph 60 along the selected path.
  • At each rate transformation node 64, the input rate r_in is adjusted based on the input port l at which it was received (line 6), and the next node in the path is then calculated.
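The marking loop of code listing 1 can be sketched as follows. The node interface (an `apply` method and a `next` edge) and the pass-through node are assumptions made for illustration; real nodes would implement the SP and WFQ transformations.

```python
import random

class PassThroughNode:
    """Placeholder rate transformation node (identity transform), standing
    in for the SP/WFQ apply functions of code listings 2 and 3."""
    def __init__(self):
        self.next = (None, None)  # (next node, input port at next node)

    def apply(self, port, r):
        return r

class MarkerGraph:
    """Route a uniform random rate from the source node through rate
    transformation nodes to the marker node (ATVF)."""
    def __init__(self, source_rates, first_hop, atvf):
        self.source_rates = source_rates  # S_i per packet flow i
        self.first_hop = first_hop        # flow id -> (first node, input port l)
        self.atvf = atvf                  # aggregate TVF: rate -> PV

    def mark(self, flow_id):
        # Source node: uniform random rate in [0, S_i] for this flow.
        r = random.uniform(0.0, self.source_rates[flow_id])
        node, port = self.first_hop[flow_id]
        # Transform r at each node along the path selected by the flow ID.
        while node is not None:
            r = node.apply(port, r)
            node, port = node.next
        # Marker node: map the final transformed rate to a packet value.
        return self.atvf(r)
```

For example, with a single pass-through node and a hypothetical ATVF of `lambda rate: 100.0 - rate`, a flow with S_i = 10 is marked with a PV between 90 and 100.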
  • Code listing 2, shown in Figure 7, illustrates an exemplary apply function for a rate transformation node 64 implementing SP scheduling for the packet flows flowing through that node.
  • a rate transformation node 64 implementing SP allocates resources to subflows based on absolute priority.
  • the rate transformation node 64 takes r_in as an input and returns a transformed rate r_out as output.
  • the rate transformation node 64 adds a rate offset L_{l-1} to the input rate r_in to yield the output rate r_out, where l is the local index of the subflow in priority order.
  • the rate transformation node 64 is configured with a vector of rate offsets for the subflows traversing that node.
  • the rate offsets constitute state information representing a current state of the rate transformation node 64. This state information is updated periodically as herein described based on measured rates S_i of the incoming packet flows.
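A minimal sketch of this SP apply function, assuming the offset vector has already been computed by the update procedure described later:

```python
class SPTransformNode:
    """SP rate transformation: shift the input rate by the cumulative
    rate of all higher-priority subflows."""
    def __init__(self, offsets):
        # offsets[l-1] is the rate offset L_{l-1} for local subflow index l;
        # offsets[0] == 0.0 for the highest-priority subflow.
        self.offsets = offsets

    def apply(self, l, r_in):
        # r_out = r_in + L_{l-1}
        return r_in + self.offsets[l - 1]
```

With offsets [0.0, 5.0, 8.0], the highest-priority subflow keeps its input rate, while subflow 3 is shifted into the range starting at 8.0.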
  • Code listing 3, shown in Figure 8, illustrates an exemplary apply function for a rate transformation node 64 implementing WFQ scheduling for the packet flows flowing through that node.
  • a rate transformation node 64 implementing WFQ allocates resources to subflows taking part in the scheduling according to the weights of the subflows.
  • the rate transformation node 64 takes a local TA index l and a rate r_in as inputs and returns a transformed rate r_out as output.
  • the TA index l is a local index that identifies the input port on which the rate r_in is received.
  • the TA index l is in the range [1, n], where n is the number of TAs flowing through the rate transformation node 64.
  • the transformed rate r_out is in the range [0, S], where S is the total rate of all the traffic flowing through the rate transformation node 64.
  • the range of possible values of r_out is divided into n regions equal to the number of TAs flowing through the rate transformation node 64.
  • Each region j is associated with a starting value B_j.
  • the rate transformation node 64 is configured with a reordering vector o, a region determination matrix R, and a normalized weight matrix W.
  • the reordering vector o comprises a vector of the TA indices l in ranked order based on the ratio S_l/w_l, where S_l is the measured rate of the subflow l and w_l is a weight assigned by the operator.
  • the region determination matrix R comprises a matrix used to map the input rate r_in to corresponding values in a region j in the possible range of throughput values.
  • Each element R_{j,i} indicates when region j is used for subflow o_i, where o_i indicates the subflow l at the i-th position in the reordering vector. In this case, the index i indicates a rank of the subflow.
  • the normalized weight matrix W provides normalized weights for each subflow o_i in each region j.
  • Each element W_{j,i} is a normalized weight for subflow o_i in region j.
  • the reordering vector o, the region determination matrix R, and the normalized weight matrix W comprise state information representing a current state of the rate transformation node 64. This state information is updated periodically based on measured rates S_i of the packet flows as hereinafter described.
  • the rate transformation node 64 determines the rank i of the subflow l based on the reordering vector (line 2). The rate transformation node 64 then maps the input rate r_in to a region j based on the position i of the subflow in the reordering vector (line 3). Based on the mapping, the rate transformation node 64 computes an adjusted rate r_out (line 4). More particularly, the rate transformation node 64 subtracts R_{j,i} from the input rate r_in and divides the result by W_{j,i} to get a weighted rate, which is then added to B_j, the starting throughput value for the region j.
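The WFQ apply function can be sketched as follows. The region lookup shown here (picking the highest region whose threshold the input rate reaches) is an assumption about how the matrix R is consulted, and all vector and matrix values are illustrative.

```python
class WFQTransformNode:
    """WFQ rate transformation using the reordering vector o, region
    determination matrix R, normalized weight matrix W, and region
    starting values B maintained by the update procedure."""
    def __init__(self, o, R, W, B):
        self.o = o  # subflow indices ranked by S_l / w_l
        self.R = R  # R[j][i]: input rate at which region j starts for rank i
        self.W = W  # W[j][i]: normalized weight of the rank-i subflow in region j
        self.B = B  # B[j]: starting output rate of region j

    def apply(self, l, r_in):
        i = self.o.index(l)  # rank of subflow l (line 2)
        # Map r_in to a region j for this rank (line 3, assumed lookup).
        j = max(k for k in range(len(self.B)) if r_in >= self.R[k][i])
        # Weighted rate within the region, shifted to the region start (line 4).
        return self.B[j] + (r_in - self.R[j][i]) / self.W[j][i]
```

For two subflows with o = [1, 2], R = [[0, 0], [4, 6]], W = [[0.5, 0.5], [1, 1]] and B = [0, 8], an input rate of 2.0 on subflow 1 falls in region 0 and is scaled to 4.0, while 5.0 falls in region 1 and maps to 9.0.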
  • the state information for the rate transformation nodes 64 is updated periodically based on rate measurements [S_1, S_2, ..., S_n] for the packet flows.
  • the rate measurements [S_1, S_2, ..., S_n] are not propagated with every packet to the rest of the marker graph 60, but rather are used to update the marker states of the rate transformation nodes 64.
  • each S t is propagated along a respective path of the marker graph 60.
  • Each rate transformation node 64 treats the incoming S, according to the local index of the input port and propagates the sum of all incoming S t s at its outgoing edge.
  • Each rate transformation node 64 updates its internal state based on the S_i values received on its respective inputs. Afterward, each node sums the incoming rate measurements and propagates the sum on its outgoing edge to the next node. A node only performs a state update when all the rate measurements are available at its input ports.
  • This update can be implemented in an independent control thread/control node.
  • the marker graph 60 is preconfigured to that node. Periodically the node reads the rate measurements, calculates the updated internal state of the marker graph 60 nodes and updates them in a coordinated manner.
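The propagation and coordinated update described in the preceding bullets can be sketched as follows (illustrative Python; the class shape is an assumption, and update_state stands in for the node-type-specific procedures of Code listings 4 and 5):

```python
class RateTransformNode:
    """Collects per-input rate measurements S_i, updates local marker
    state once all inputs have reported, and forwards the sum."""

    def __init__(self, n_inputs):
        self.n_inputs = n_inputs
        self.pending = {}              # input port -> latest S_i

    def receive(self, port, s):
        self.pending[port] = s
        if len(self.pending) < self.n_inputs:
            return None                # wait until every input has reported
        s_list = [self.pending[p] for p in sorted(self.pending)]
        self.update_state(s_list)      # SP/WFQ-specific state update
        self.pending.clear()
        return sum(s_list)             # propagated on the outgoing edge

    def update_state(self, s_list):
        pass                           # placeholder for Code listings 4/5
```

A node with two inputs returns nothing after the first measurement arrives and returns the sum once the second arrives, which is then fed to the next node along its outgoing edge.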
  • Code listing 4, shown in Figure 9, illustrates an update procedure for updating the rate offsets used by a rate transformation node 64 to implement SP scheduling.
  • the update routine receives subflow rate measurements [S_1, S_2, ..., S_n] for the TAs at each input and uses the subflow rate measurements to compute a vector of rate offsets [L_1, L_2, ..., L_n]. Note that the subflow rate measurements are for the subflows at respective inputs and are received from a preceding node.
  • the rate offsets are ordered in priority order of the subflows.
  • the rate offset for the highest priority flow is set to 0 so that the output rate for the highest priority flow is equal to the input rate (line 2).
  • the corresponding rate measurement is added to the previously computed rate offset L_(i-1) to obtain the next rate offset L_i (lines 3-4).
  • the rate offsets ensure that the random rates assigned to each subflow are bounded to a specific range.
  • the range is [0, S_1]
  • the range is [S_1, S_1 + S_2], and so forth up to subflow n.
  • the range is [S_1 + ... + S_(n-1), S_1 + ... + S_n].
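The offset update of Code listing 4 and the corresponding per-packet shift can be sketched as follows (an illustrative Python sketch, not the actual listing; the accumulation order is inferred from the ranges described above):

```python
def update_sp_offsets(S):
    """Recompute rate offsets from subflow rate measurements S, listed
    highest priority first.  The highest-priority subflow gets offset 0
    (line 2); each later offset accumulates the preceding measured rates
    (lines 3-4), so subflow i's shifted rates fall in [L[i], L[i] + S[i]]."""
    L = [0.0]
    for s in S[:-1]:
        L.append(L[-1] + s)
    return L

def sp_transform(r_in, rank, L):
    """Shift a subflow's random rate into its priority band."""
    return L[rank] + r_in
```

With measurements [5, 3, 2], for instance, the offsets come out as [0, 5, 8], so the three subflows occupy the disjoint bands [0, 5], [5, 8] and [8, 10].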
  • Code listing 5, shown in Figure 10, illustrates exemplary processing for updating state information used by a rate transformation node 64 to implement WFQ scheduling.
  • the update routine receives subflow rate measurements [S_1, S_2, ..., S_n] for the packet flows at each input. Note that the subflow rate measurements are for the subflows at each input port i to that node and are received from a preceding node.
  • the rate transformation node calculates the reordering vector o based on the subflow rate measurements [S_1, S_2, ..., S_n] and a weight vector w provided by the operator (Proc. 5, lines 1 and 2).
  • the reordering vector o arranges the indices of the subflows in order based on a ratio S_i/w_i, where S_i is the measured rate of the subflow and w_i is a corresponding weight assigned by the operator.
  • the weight vector w may be configured as part of the resource sharing policy of the network. Accordingly, these weights may be changed when the network implements new resource sharing policies. That said, these weights are expected to generally change infrequently, if at all.
  • the starting value B_0 for the first of n regions is set (line 4).
  • the rate transformation node 64 calculates the normalized weight matrix W (lines 7-9). For each region j, the rate transformation node computes the width d_j and boundary B_j of the region based on the normalized weights (lines 11-12). Additionally, the rate transformation node 64 calculates the rate determination matrix R (lines 13-14).
  • the rate transformation nodes 64 are updated periodically or from time to time based on rate measurements S_i of the subflows received at each input port i. A node receives rates r_in from 0 to S_i on each input port i and produces a rate r_out from 0 to S_1 + ... + S_n at its output, where n is the number of subflows. If the rate transformation node 64 receives uniform random rates on its inputs, it produces uniform random rates at its outputs (assuming that packets are of equal size). If packets vary in size, each sample in the distribution can be weighted based on the size of the packet to which it belongs.
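The WFQ update of Code listing 5 can be sketched end to end (illustrative Python; the boundary arithmetic is inferred from the description above rather than copied from the listing):

```python
def update_wfq_state(S, w):
    """Recompute WFQ marker state from measured rates S and operator
    weights w.  Returns (o, B, R, W) as described above:
      o : subflow indices ordered by S_i / w_i (reordering vector)
      B : starting throughput value of each region, plus the final bound
      R : R[j][i], input-rate value where region j begins for rank i
      W : W[j][i], normalized weight of rank i within region j
    """
    n = len(S)
    o = sorted(range(n), key=lambda i: S[i] / w[i])   # lines 1-2
    t = [S[i] / w[i] for i in o]      # fair-share level where rank j saturates
    B, R, W = [0.0], [], []
    prev_t = 0.0
    for j in range(n):
        wsum = sum(w[i] for i in o[j:])               # still-active weights
        W.append([w[o[i]] / wsum if i >= j else 0.0 for i in range(n)])
        R.append([min(S[o[i]], prev_t * w[o[i]]) for i in range(n)])
        B.append(B[-1] + (t[j] - prev_t) * wsum)      # region width d_j
        prev_t = t[j]
    return o, B, R, W
```

For WFQ(1) in the worked example below (flows with measured rates 2 and 3 Mbps, taking the 2-to-1 sharing from Table 2 as weights), this yields regions starting at 0 and 3 with a total range of 5, matching Figure 11A.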
  • a simple example may help to illustrate the operation of the packet marker as herein described.
  • This example illustrates operation of the marker graph 60 shown in Figure 5, which is based on the HQoS scheduler 50 in Figure 4.
  • the scheduler implements a resource sharing policy for 6 packet flows using four scheduling queues denoted WFQ(1), WFQ(2), SP(3) and WFQ(4).
  • the rates and relative weights for each packet flow arriving at the scheduler are shown in Table 1 below.
  • Flows 2 and 3 are connected to WFQ(1).
  • Flows 4, 5 and 6 are connected to WFQ(2).
  • Flow 1 and the output of WFQ(1) are connected to SP(3).
  • the outputs of SP(3) and WFQ(2) are connected to WFQ(4).
  • the weights assigned to the queues at each scheduler are shown in Table 2.
  • Figures 11A-11D illustrate resource sharing based on the policy embodied in Tables 1 and 2.
  • WFQ(1) divides the throughput range of [0, 5] into two regions as shown in Figure 11A. Based on the ratios S_i/w_i of the subflows computed from Table 1, packet flow 2 is allocated resources in region 1 (R1) and packet flow 3 is allocated resources in both region 1 (R1) and region 2 (R2). In R1, resources are shared between flows 2 and 3 at a ratio of 2 to 1 based on Table 2. In R2, all the resources are used by flow 3. The total rate for flow 2 is 2 Mbps. The total rate for flow 3 is 3 Mbps.
  • WFQ(2) divides the throughput range of [0, 12] into three regions as shown in Figure 11B.
  • packet flow 5 is allocated resources in region 1 (R1)
  • packet flow 4 is allocated resources in regions 1 and 2 (R1 and R2)
  • packet flow 6 is allocated resources in all three regions.
  • R1 resources are shared between flows 4, 5 and 6 in the ratio 2:1:1 based on Table 2.
  • R2 resources are shared between flows 4 and 6 in the ratio 2:1 based on Table 2.
  • in region 3, all the resources are used by flow 6.
  • the total rate for flow 4 is 6 Mbps.
  • the total rate for flow 5 is 2 Mbps.
  • the total rate for flow 6 is 4 Mbps.
  • SP(3) divides the throughput range of [0, 10] into two regions as shown in Figure 11C.
  • in R1, all resources are used by flow 1, which is a priority flow.
  • in region 2, all resources are used by the TA output from WFQ(1).
  • the total rate for flow 1 is 5 Mbps.
  • the total rate for combined flows 2 and 3 is also 5 Mbps.
  • WFQ(4) divides the throughput range of [0, 22] into two regions as shown in Figure 11D.
  • the TA from WFQ(2) is allocated resources in region 1 (R1) and the TA from SP(3) is allocated resources in both regions.
  • R1 resources are shared between the TA from SP(3) and the TA from WFQ(2) in a ratio of 1:2 based on Table 2.
  • the TA from SP(3) is allocated 6 Mbps and the TA from WFQ(2) is allocated 12 Mbps.
  • the allocation for the TA from SP(3) is consumed primarily by flow 1, which is a priority flow, leaving only 1 Mbps for flows 2 and 3.
  • the resources in R2 are allocated entirely to flows 2 and 3 in SP(3) so the total allocation to flow 2 is 2 Mbps and the total allocation to flow 3 is 3 Mbps.
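As an illustrative sanity check (not part of the described procedures), the per-flow totals in this example can be tallied against the throughput ranges of the four schedulers:

```python
# Per-flow totals (Mbps) read off Figures 11A-11D
totals = {1: 5, 2: 2, 3: 3, 4: 6, 5: 2, 6: 4}

# WFQ(1) carries flows 2-3; WFQ(2) carries flows 4-6;
# SP(3) carries flow 1 plus the WFQ(1) aggregate; WFQ(4) carries everything.
assert totals[2] + totals[3] == 5                  # WFQ(1) range [0, 5]
assert totals[4] + totals[5] + totals[6] == 12     # WFQ(2) range [0, 12]
assert totals[1] + totals[2] + totals[3] == 10     # SP(3) range [0, 10]
assert sum(totals.values()) == 22                  # WFQ(4) range [0, 22]
```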
  • FIG 12 illustrates an exemplary method 100 of packet marking for multiple hierarchical levels.
  • the packet marker obtains an Aggregate Throughput Value Function (ATVF) that maps throughput values to PVs for a plurality of packet flows (block 110).
  • the packet marker further obtains a marker graph 60 that encodes resource sharing policies for a HQoS hierarchy for the plurality of packet flows as sequences of rate transformations, wherein each sequence of rate transformations corresponds to a path of one of said packet flows through the marker graph 60 from a source node 62 through one or more rate transformation nodes 64 to a marker node 66 (block 120).
  • the packet marker receives a packet associated with one of the packet flows (block 130) and marks the packet with a PV based on a selected path through the marker graph 60 and the ATVF (block 140).
  • marking the packet with a PV based on a selected path through the marker graph 60 and the ATVF comprises randomly selecting an initial rate for the packet, selecting a path corresponding to one of the sequences of rate transformations based on a flow identifier for the packet flow, applying the selected sequence of rate transformations to transform the initial rate to a transformed rate, and marking the packet with a PV determined based on the transformed rate and ATVF.
  • applying the selected sequence of rate transformations to transform the initial rate to a transformed rate comprises, for each of one or more rate transformation nodes 64 in the selected path, receiving an input rate from a preceding node, wherein the preceding node comprises the source node 62 or a preceding rate transformation node 64, transforming the input rate to a transformed rate based on an input over which the input rate was received, and outputting the transformed rate to a succeeding node, wherein the succeeding node comprises a succeeding rate transformation node 64 or the marker node 66.
  • the initial rate is a rate selected randomly from a predetermined range determined based on the flow identifier.
  • the initial rate is a uniform random rate.
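Method 100 can be sketched as a short pipeline (illustrative Python; paths, flow_ranges and the callable atvf are assumed representations of the marker graph 60 and the ATVF, not names from the disclosure):

```python
import random

def mark_packet(flow_id, paths, flow_ranges, atvf):
    """Marker-graph packet marking, per method 100.

    paths       : flow_id -> ordered rate-transformation callables
                  (the flow's path through the marker graph)
    flow_ranges : flow_id -> upper bound of the uniform random rate
    atvf        : callable mapping a throughput value to a packet value
    """
    r = random.uniform(0.0, flow_ranges[flow_id])   # source node 62
    for transform in paths[flow_id]:                # rate transformation nodes 64
        r = transform(r)
    return atvf(r)                                  # marker node 66
```

For instance, a flow whose single transformation adds a fixed offset of 2 to a random rate drawn from [0, 1] is always marked from the [2, 3] slice of the ATVF.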
  • Some embodiments of the method further comprise periodically receiving rate measurements for the plurality of packet flows and, for each rate transformation node 64, periodically updating a rate transformation configuration for the rate transformation node 64 based on the rate measurements.
  • updating the rate transformation configuration comprises, for at least one rate transformation node 64, updating state information used by the rate transformation node 64 to transform input rates received on different inputs to the rate transformation node 64.
  • updating state information used by the rate transformation node 64 to transform input rates received on different inputs to the rate transformation node 64 comprises, for each input, computing a weight matrix and a rate determination matrix used for implementing weighted fair queuing.
  • updating state information used by the rate transformation node 64 comprises, for at least one rate transformation node 64, updating rate offsets applied to input rates received on the inputs to the rate transformation node 64 based on priorities associated with inputs.
  • FIG. 13 illustrates an exemplary network node 200 according to an embodiment configured for marking packets.
  • the network node 200 comprises an ATVF unit 210, a marker graph (MG) unit 220, a receiving unit 230, a marking unit 240 and an optional output unit 250.
  • the various units 210-250 can be implemented by one or more microprocessors, microcontrollers, hardware circuits, software, or a combination thereof.
  • the ATVF unit 210 is configured to obtain an Aggregate Throughput Value Function (ATVF) that maps throughput values to PVs for a plurality of packet flows.
  • the marker graph (MG) unit 220 is configured to obtain a marker graph 60 that encodes resource sharing policies for a HQoS hierarchy for the plurality of packet flows as sequences of rate transformations, wherein each sequence of rate transformations corresponds to a path of one of said packet flows through the marker graph 60 from a source node 62 through one or more rate transformation nodes 64 to a marker node 66.
  • the receiving unit 230 is configured to receive a packet associated with one of the packet flows.
  • the marking unit 240 is configured to mark the packet with a PV based on a selected path through the marker graph 60 and the ATVF.
  • the output unit 250, when present, is configured to output the marked packet.
  • the marking unit 240 comprises a rate selection unit 260, a rate transformation unit 270 and a valuation unit 280.
  • the rate selection unit 260 is configured to select a uniform random rate for the received packet as an input rate.
  • the rate transformation unit 270 is configured to apply a sequence of rate transformations to the input rate to compute a transformed rate based on the HQoS policies embodied in the marker graph 60.
  • the rate transformation sequence is selected based on a flow identifier associated with the packet flow to which the packet belongs. This is equivalent to selecting a path through the marker graph 60.
  • the packet valuation unit 280 is configured to determine a PV for the packet based on an aggregate TVF (ATVF) for all of the packet flows.
  • Other embodiments include a computing device 300 (e.g., a network node) configured for packet marking, as illustrated in Figure 14.
  • the computing device 300 may perform one, some, or all of the functions described above, depending on the embodiment.
  • the computing device 300 is implemented according to the hardware illustrated in Figure 14.
  • the example hardware of Figure 14 comprises processing circuitry 320, memory circuitry 330, and interface circuitry 310.
  • the processing circuitry 320 is communicatively coupled to the memory circuitry 330 and the interface circuitry 310, e.g., via one or more buses.
  • the processing circuitry 320 may comprise one or more microprocessors, microcontrollers, hardware circuits, discrete logic circuits, hardware registers, digital signal processors (DSPs), field-programmable gate arrays (FPGAs), application-specific integrated circuits (ASICs), or a combination thereof.
  • the processing circuitry 320 may comprise a first processing circuit and a second processing circuit that are capable of executing functions in parallel.
  • the processing circuitry 320 may be programmable hardware capable of executing software instructions stored, e.g., as a machine-readable computer program 340 in the memory circuitry 330.
  • the memory circuitry 330 may comprise any non-transitory machine-readable media known in the art or that may be developed, whether volatile or non-volatile, including but not limited to solid state media (e.g., SRAM, DRAM, DDRAM, ROM, PROM, EPROM, flash memory, solid state drive, etc.), removable storage devices (e.g., Secure Digital (SD) card, miniSD card, microSD card, memory stick, thumb-drive, USB flash drive, ROM cartridge, Universal Media Disc), fixed drive (e.g., magnetic hard disk drive), or the like, wholly or in any combination.
  • the interface circuitry 310 may be a controller hub configured to control the input and output (I/O) data paths of the computing device 300. Such I/O data paths may include data paths for exchanging signals over a communications network.
  • the interface circuitry 310 may comprise one or more transceivers configured to send and receive communication signals over one or more packet-switched networks, cellular networks, and/or optical networks.
  • the interface circuitry 310 may be implemented as a unitary physical component, or as a plurality of physical components that are contiguously or separately arranged, any of which may be communicatively coupled to any other, or may communicate with any other via the processing circuitry 320.
  • the interface circuitry 310 may comprise output circuitry (e.g., transmitter circuitry configured to send communication signals over a communications network) and input circuitry (e.g., receiver circuitry configured to receive communication signals over the communications network).
  • the processing circuitry 320 is configured to perform the method 100 illustrated in Figure 12.
  • a computer program comprises instructions which, when executed on at least one processor of an apparatus, cause the apparatus to carry out any of the respective processing described above.
  • a computer program in this regard may comprise one or more code modules corresponding to the means or units described above.
  • Embodiments further include a carrier containing such a computer program.
  • This carrier may comprise one of an electronic signal, optical signal, radio signal, or computer readable storage medium.
  • embodiments herein also include a computer program product stored on a non-transitory computer readable (storage or recording) medium and comprising instructions that, when executed by a processor of an apparatus, cause the apparatus to perform as described above.
  • Embodiments further include a computer program product comprising program code portions for performing the steps of any of the embodiments herein when the computer program product is executed by a computing device.
  • This computer program product may be stored on a computer readable recording medium.
  • Packet marking based on the marker graph 60 can be performed fast. After marking, packets can be put into a scheduler and the scheduler can be implemented as a simple PPV scheduler. No modifications to the scheduler are required to implement HQoS. Scheduling can be performed independently of the number of flows and without knowledge of the HQoS hierarchy or resource sharing policies at the scheduler.
  • Packet marking based on the marker graph 60 encodes the entire HQoS hierarchy into a single PV. HQoS policy is determined by the TVF. Packet marking as herein described is independent per TVF and can be parallelized.
  • Packet marking as herein described can be combined with the remarker solution in Resource Sharing in a Virtual Networking setting both in the input side (by changing how r is determined based on the incoming PV) and the output side (by using the calculated PV for the remarker).
  • the implementation of the whole HQoS hierarchy can be optimized based on processing capabilities and information availability.

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

Methods and apparatus for packet marking for Hierarchical Quality of Service (HQoS) are provided to control resource sharing among a plurality of network flows with differentiated services. A packet marker at a single point (e.g., a gateway) encodes the resource sharing policy for a plurality of packet flows into a single packet value. The HQoS policy is then realized by using simple PPV schedulers at the bottlenecks in the network.

Description

MARKER GRAPH FOR HQoS
TECHNICAL FIELD
The present disclosure relates generally to resource sharing among a plurality of packet flows and, more particularly, to a packet marking for implementing Hierarchical Quality of Service (HQoS) to control resource sharing and quality of service (QoS) for a plurality of packet flows.
BACKGROUND
Communication networks are shared among a wide variety of applications and services with different requirements. Some applications require low latency and high throughput while other applications and services require best effort only. At the same time, sharing of network resources by different operators is becoming more common. Network slicing is a solution for sharing resources between operators that can also accommodate the widely varying Quality of Service (QoS) requirements of different users. The general idea underlying network slicing is to separate traffic into multiple logical networks that share the same physical infrastructure. Each logical network is designed to serve a specific purpose and comprises all the network resources required for that specific purpose. Network slices can be implemented for each operator and for each service provided by the operator.
The heterogeneous traffic mix comprising different flows for different users carried by different network operators and with different QoS requirements poses a challenge for access aggregation networks (AANs). The network needs to ensure that network resources are shared fairly between different flows while maintaining the required QoS for each flow. Without some form of direct resource sharing control, the result will be unfairness in the treatment of different flows.
Most networks rely on a few simple mechanisms to approximate flow fairness. For example, the Transmission Control Protocol (TCP) has some limited congestion control mechanisms built in. Despite these existing mechanisms, new congestion controls and heterogeneous Round Trip Times (RTTs) often result in unfairness among flows anyway. Further, these limited mechanisms are often unable to prevent a user with several flows from dominating resource usage over a single bottleneck.
Another simple approach that attempts to ensure that certain traffic is provided with at least a minimum level of QoS is by implementing a static reservation solution. Static reservation traditionally requires defining in advance the bitrate share of each user’s combined traffic.
Because users often have highly variable utilization, a static reservation approach often results in high amounts of unused resources.
In comparison to these legacy approaches, Hierarchical Quality of Service (HQoS) by Scheduling, a technique for resource sharing and QoS management, can implement a richer and more complex set of resource sharing policies. HQoS uses a scheduler and many queues to implement and enforce a resource sharing policy among different traffic aggregates (TAs) and among different flows within a TA. The HQoS approach organizes managed elements of the network into a hierarchy and applies QoS rules at each level of the hierarchy in order to create more elaborate, refined, and/or sophisticated QoS solutions for shared resource management.
With HQoS, resource sharing can be defined among several TAs at different hierarchical levels, e.g., among operators, network slices, users and subflows of a user. HQoS can also be used to realize statistical multiplexing of a communication link.
HQoS is complex and requires configuration at each bottleneck in a network. With the evolution of Fifth Generation (5G) networks and optical fiber for the last hop, bottlenecks will become more likely at network routers. The traffic at these routers is heterogeneous considering congestion control mechanisms and round trip time (RTT). The traffic mix is also constantly changing. Controlling resource sharing at these bottlenecks can significantly improve network performance and perceived QoS.
A technique that is often used in conjunction with HQoS is known as packet marking.
Packet marking involves adding information to a packet for potential use by downstream devices and/or processing. For example, an edge router may use packet marking to insert a packet value (PV) into a packet that indicates that packet’s importance in the traffic mix at the edge of the network. The PV may then be used by schedulers in other network nodes along the path traversed by the packet to ensure that the packet is prioritized based on its PV as it traverses the network towards its destination. Packet marking has proven to be a useful technique to enable effective bandwidth sharing control and traffic congestion avoidance within a network.
A core stateless resource sharing mechanism called Hierarchical Per Packet Values (HPPV) implements HQoS by only modifying packet marking algorithms without any changes to the schedulers in the network nodes. In this approach, the resource sharing policies between different TAs are defined by the packet marking strategy. No knowledge of the resource sharing policies is required by the scheduler. With this approach, HQoS can be implemented with a simple scheduler that determines the handling of a packet based only on its PV. An advantage of this approach is that new policies can be introduced by reconfiguring packet marking without making any changes to the scheduler.
A related application titled "HQoS Marking For Many Subflows" discloses a method of packet marking for a HQoS scheme that ensures weighted fairness among a plurality of subflows in a TA. This application is incorporated herein in its entirety by reference. Currently, there is no known method of marking packets in a manner that ensures weighted fairness among different flows at more than two hierarchical layers.
SUMMARY
The present disclosure relates generally to packet marking for Hierarchical Quality of Service (HQoS) to control resource sharing among a plurality of packet flows with differentiated services. A packet marker at a single point (e.g., a gateway) encodes the resource sharing policy for a plurality of packet flows into a single packet value. The HQoS policy is then realized by using simple PPV schedulers at the bottlenecks in the network.
To implement packet marking for HQoS, a hierarchy of weighted fair queuing (WFQ) and strict priority (SP) marker components are organized into a marker graph. Other types of marker components could also be used. The marker graph includes a source node, a plurality of intermediate nodes corresponding to the SP and WFQ marker components, and a marker node. The intermediate nodes are referred to herein as rate transformation nodes. The source node of the marker graph determines a random bitrate for a packet flow. That random bitrate is routed through the marker graph from the source node through one or more rate transformation nodes to the marker node. The random rate is transformed at each rate transformation node according to the existing WFQ and SP components. The marker node uses the transformed rate received as input to the final marker node to determine the packet value.
A first aspect of the disclosure comprises methods of marking packets with a packet value to implement HQoS. In one embodiment, the method comprises obtaining an Aggregate Throughput Value Function (ATVF) that maps throughput values to packet values for a plurality of packet flows. The method further comprises obtaining a marker graph that encodes resource sharing policies for a Hierarchical Quality of Service (HQoS) hierarchy for the plurality of packet flows as sequences of rate transformations. Each sequence of rate transformations corresponds to a path of one of said packet flows through the marker graph from a source node through one or more rate transformation nodes to a marker node. The method further comprises receiving a packet associated with one of said packet flows and marking the packet with a packet value based on a selected path through the marker graph and the ATVF.
A second aspect of the disclosure comprises a network node configured to perform packet marking to implement HQoS. In one embodiment, the network node is configured to obtain an Aggregate Throughput Value Function (ATVF) that maps throughput values to packet values for a plurality of packet flows. The network node is further configured to obtain a marker graph that encodes resource sharing policies for a Hierarchical Quality of Service (HQoS) hierarchy for the plurality of packet flows as sequences of rate transformations. Each sequence of rate transformations corresponds to a path of one of said packet flows through the marker graph from a source node through one or more rate transformation nodes to a marker node. The network node is further configured to receive a packet associated with one of said packet flows and mark the packet with a packet value based on a selected path through the marker graph and the ATVF.
A third aspect of the disclosure comprises a network node configured to perform packet marking to implement HQoS. The network node comprises interface circuitry for receiving and sending packets and processing circuitry. The processing circuitry is configured to obtain an Aggregate Throughput Value Function (ATVF) that maps throughput values to packet values for a plurality of packet flows. The processing circuitry is further configured to obtain a marker graph that encodes resource sharing policies for a Hierarchical Quality of Service (HQoS) hierarchy for the plurality of packet flows as sequences of rate transformations. Each sequence of rate transformations corresponds to a path of one of said packet flows through the marker graph from a source node through one or more rate transformation nodes to a marker node. The processing circuitry is further configured to receive a packet associated with one of said packet flows and mark the packet with a packet value based on a selected path through the marker graph and the ATVF.
A fourth aspect of the disclosure comprises a computer program comprising executable instructions that, when executed by a processing circuitry in a network node, causes the network node to perform the method according to the first aspect.
A fifth aspect of the disclosure comprises a carrier containing a computer program according to the fourth aspect, wherein the carrier is one of an electronic signal, optical signal, radio signal, or computer readable storage medium.
BRIEF DESCRIPTION OF THE DRAWINGS
Figure 1 illustrates an AAN implementing HQoS as herein described.
Figure 2 schematically illustrates a HQoS scheduler with multiple hierarchical levels.
Figure 3 illustrates TVFs for four packet flows with different QoS requirements.
Figure 4 illustrates an exemplary HQoS scheduler for six packet flows.
Figure 5 illustrates a marker graph corresponding to the HQoS scheduler in Figure 4.
Figure 6 illustrates exemplary pseudocode for packet marking at a single point to implement HQoS.
Figure 7 illustrates exemplary pseudocode for rate transformation by a rate transformation node implementing SP scheduling.
Figure 8 illustrates exemplary pseudocode for rate transformation by a rate transformation node implementing WFQ scheduling.
Figure 9 illustrates exemplary pseudocode for updating a rate transformation node implementing SP scheduling.
Figure 10 illustrates exemplary pseudocode for updating a rate transformation node implementing WFQ scheduling.
Figures 11A-11D illustrate one example of resource management using a marker graph.
Figure 12 illustrates a method of packet marking according to an exemplary embodiment.
Figure 13 is an exemplary network node configured to implement packet marking as herein described.
Figure 14 is an exemplary computing device configured to implement packet marking as herein described.
DETAILED DESCRIPTION
Referring now to the drawings, the present disclosure will be described in the context of an access aggregation network (AAN) 10 implementing HQoS. Those skilled in the art will appreciate, however, that the disclosed embodiment is presented for purposes of explanation and that the principles herein described are more generally applicable to other networks implementing HQoS.
As shown in Figure 1, packets comprising a multitude of different packet flows from a multitude of different sources 12 and with varying QoS requirements arrive at a gateway 20 of the AAN 10. The packets may be any of a variety of different types. Examples of the most common types of packets include Internet Protocol (IP) packets (e.g., Transmission Control Protocol (TCP) packets), Multiprotocol Label Switching (MPLS) packets, and/or Ethernet packets. Among other things, the packets may comprise one or more fields for storing values used by the network 10 in performing packet processing (e.g., deciding whether to forward, queue, or drop packets). These fields may be in either a header or payload section of the packet, as may be appropriate.
The gateway 20 classifies the packets and a packet marker 30 assigns a packet value (PV) to each packet according to its relative importance. Though shown separately, a packet marker 30 may be located at the gateway 20. Also, there may be multiple packet markers 30 located at different network nodes. The packets are forwarded through a series of switches or routers 40 towards their respective destinations. When congestion is detected, the switches 40 at the bottlenecks implement a congestion control mechanism to drop some packets in order to alleviate the congestion. Generally, a scheduler at the switch determines what packets to forward and what packets to drop based on the PV assigned to the packet by the packet marker 30. Generally, packets with a higher PV are less likely to be dropped than packets with a lower PV.
In exemplary embodiments described herein, a packet marking strategy is used to implement HQoS. HQoS is a technology for implementing complex resource sharing policies in a network through a queue scheduling mechanism. The HQoS scheme ensures that all packet flows will be allocated resources during periods of congestion according to policies established by the operator so that packet flows of relatively low importance are not starved during periods of congestion by packet flows of higher importance.
Figure 2 illustrates a typical implementation of an HQoS scheduler. The HQoS scheduler comprises a hierarchy of physical queues and schedulers configured to share resources among TAs at different levels. At each level, one or more schedulers implement resource sharing policies defined by the network operator for TAs of the same level. For example, a level may comprise TAs for different operators, different leased transports, different network slices, different users or different subflows of a user. At the “operator” level, for example, the scheduler implements policies for sharing resources among different operators. For example, the resources for two operators, denoted Operator A and Operator B, can be shared at a 3-to-1 ratio. Each operator may have several leased transports that are assigned shares of the overall resources assigned to that operator. A scheduler at the “leased transport” level implements policies for resource sharing between leased transports. A leased transport may, in turn, be virtualized to carry traffic for a number of network slices. A scheduler at the “network slice” level implements policies for sharing resources among different network slices. Each network slice, in turn, may serve a plurality of subscribers (or groups thereof). Resource sharing among classes of subscribers of each of the network slices is implemented by a scheduler at the “subscriber” level. Finally, a scheduler at the “packet flow” level may implement resource sharing among the different flows of each subscriber in order to allocate different amounts of resource to flows of different types (e.g., to provide more resources to high priority flows relative to low priority flows). For example, a gold class of subscribers may be given a greater share of the shared resources than a silver class of subscribers, and streaming traffic may be given a greater share of the shared resources than web traffic.
By defining resource sharing polices at each level, it is possible to implement complex resource sharing strategies.
The scheduler at each level implements a queue scheduling algorithm to manage the resource sharing and schedule packets in its queues. When congestion occurs, the queue scheduling mechanism provides packets of a certain type with desired QoS characteristics such as the bandwidth, delay, jitter and loss. The queue scheduling mechanism typically works only when the congestion occurs. Commonly used queue scheduling mechanisms include Weighted Fair Queuing (WFQ), Weighted Round Robin (WRR), and Strict Priority (SP). WFQ is used to allocate resources to queues taking part in the scheduling according to the weights of the queues. SP allocates resources based on the absolute priority of the queues.
Any of the schedulers may apportion the shared resources in any manner that may be deemed appropriate. For example, while the resources were discussed above as being shared between two operators at a ratio of three-to-one, one of the operators may operate two network slices, and the HQoS resource sharing scheme may define resource sharing between those two slices evenly (i.e., at a ratio of one-to-one). Subsequent schedulers may then further apportion the shared resources. For example, the HQoS resource sharing scheme may define resource sharing for both gold and silver subscribers at a ratio of two-to-one, respectively. Further still, the HQoS resource sharing scheme may define resource sharing for web flows and download flows at a ratio of two-to-one. Thus, each WFQ may be designed to make a suitable apportionment of the shared resources as may be appropriate at its respective level of the hierarchy. In one approach to scheduling, known as Per Packet Value (PPV), packets are marked with a PV that expresses the relative importance of the packet flow to which the packet belongs, and the resource sharing policies between different TAs are determined by the PVs assigned to the packets. In this approach, the assigned PVs are considered by the scheduler at a bottleneck in the network to determine what packets to forward. Generally, packets with higher value are more likely to be forwarded during periods of congestion while packets with lower value are more likely to be delayed or dropped.
The PPV approach uses Throughput Value Functions (TVFs) to determine resource sharing policies among different TAs. A TVF is used by the packet marker to map a throughput value to a PV. Figure 3 illustrates examples of four TVFs for gold users (G), silver users (S), voice (V) and background (B). Each TVF defines a relation between PV and throughput. In each TVF, higher throughput maps to lower PVs.
The packet marker 30 at the gateway 20 or other network node uses the TVF to apply a PV to each packet and the scheduler at a bottleneck uses the PV to determine what packets to schedule. For each packet, the packet marker selects a uniform random rate r between 0 and the maximum rate for the TA. The packet marker then determines the PV based on the selected rate r and the TVF for the TA. The assignment of a PV based on a uniform random rate ensures that all TAs will be allocated resources during periods of congestion. Packets marked with a gold TVF, for example, will not always receive a higher PV than packets marked with a silver TVF, but will have a greater chance of receiving a higher PV.
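As a rough sketch, the marking step described above reduces to a draw from a uniform distribution followed by a TVF lookup. The `gold_tvf` and `silver_tvf` curves below are hypothetical stand-ins, not the TVFs of Figure 3; any non-increasing function of throughput would do:

```python
import random

def mark_packet(tvf, max_rate, rng=random.uniform):
    """Mark one packet: draw a uniform random rate in [0, max_rate] for the
    traffic aggregate and map it to a packet value through the TVF."""
    r = rng(0.0, max_rate)   # uniform random rate selected for this packet
    return tvf(r)            # TVFs are non-increasing: higher rate -> lower PV

# Hypothetical TVFs: a "gold" curve that dominates a "silver" curve at every
# rate, so gold packets tend to (but do not always) receive higher PVs.
def gold_tvf(r):
    return 100.0 / (1.0 + r)

def silver_tvf(r):
    return 50.0 / (1.0 + r)
```

Because the rate is drawn uniformly, a gold packet marked at a high random rate can still receive a lower PV than a silver packet marked at a low random rate, which is what keeps lower-importance flows from starving.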
In one implementation, a core stateless resource sharing mechanism called Hierarchical Per PVs (HPPV) implements HQoS by only modifying packet marking algorithms without any changes to the schedulers at the switches or routers 40. No knowledge of the resource sharing policies is required by the scheduler. With this approach, HQoS can be implemented with simple PPV schedulers that determine the handling of a packet based only on its PV. An advantage of this approach is that new policies can be introduced by reconfiguring packet marking without making any changes to the scheduler.
Another core stateless resource sharing solution is disclosed in a related application titled "HQoS Marking For Many Subflows", filed on the same date as this application and incorporated herein in its entirety by reference. That application discloses a method of packet marking for a HQoS scheme that ensures weighted fairness among a plurality of subflows in a TA. An aggregate TVF is defined for the aggregate of all subflows in a TA and packets from multiple subflows are marked at a single point based on the aggregate TVF. For each packet, the packet marker takes a uniform random rate r as an input and computes an adjusted rate based on normalized weights for each subflow. The adjusted rate is then used with the aggregate TVF to determine the PV of the packet. One aspect of the present disclosure is to provide a simple solution for implementing any number of policy levels in a HQoS hierarchy by marking packets at a single location. A packet marker at a single point (e.g., a gateway) encodes the resource sharing policy for a plurality of packet flows through the single point into a single PV. The HQoS policy is then realized by using simple PPV schedulers at the bottlenecks in the network.
To implement packet marking for multiple hierarchical levels in a HQoS hierarchy at a single point, a hierarchy of WFQ and SP marker components is organized into a marker graph 60. The marker graph 60 includes a source node 62, a plurality of intermediate nodes corresponding to the SP and WFQ marker components, and a marker node 66. The intermediate nodes are referred to herein as rate transformation nodes 64 for reasons that will become apparent. The source node 62 of the graph determines a random bitrate for a packet in a packet flow. That random bitrate is routed through the marker graph 60 from the source node 62 through one or more rate transformation nodes 64 to the marker node 66. The random bitrate is transformed at each rate transformation node 64 according to the existing WFQ and SP components. The marker node 66, also referred to as the TVF node, uses the transformed rate received as input to determine the PV.
A HQoS scheduler 50 with a hierarchy of WFQ and SP schedulers can be translated into a marker graph 60 by replacing each scheduler with a marker component, which is represented in the graph as a rate transformation node 64. Figures 4 and 5 depict an example of a traditional HQoS scheduler 50 and its analogous HQoS marker graph 60. When comparing the two, it can be observed that the connections in the traditional HQoS scheduler 50 are the same as the connections in the marker graph 60. It can also be observed that each unique path through the marker graph 60 corresponds to a packet flow path in the HQoS scheduler 50 and represents a sequence of rate transformations to be applied to the packet flow. This sequence of rate transformations encodes the policies applied to that packet flow. As an example, the shaded components and dotted lines in Figures 4 and 5 show the path of a packet belonging to subflow 3.
The marker graph 60 comprises a source node 62, a plurality of rate transformation nodes 64 and a marker node 66 as previously described. The source node 62, as previously described, selects a uniform random rate r for each packet arriving at the network node (e.g., gateway 20). The rate transformation nodes 64 are configured as either SP nodes or WFQ nodes depending on the operator policy. In this example, there are four rate transformation nodes 64, each corresponding to one of the schedulers shown in Figure 4. The rate transformation nodes 64 are denoted as WFQ(1), WFQ(2), SP(3) and WFQ(4). The output of the final rate transformation node 64, WFQ(4), is connected to the marker node 66, which is preconfigured with the ATVF for the six packet flows. The marker node 66 is configured to mark the packet with a PV based on the transformed rate received from WFQ(4) and the ATVF. Figure 5 also illustrates a selection node 70 and a measurement node 70 for completeness. The selection node 70 determines the path through the marker graph 60 through which the uniform random rate will be propagated based on the packet flow ID. Each incoming packet flow maps to a particular input at a particular rate transformation node 64. Essentially, the selection node 70 selects the rate transformation node 64 and the input to which the uniform random rate is sent based on the packet flow ID. The measurement node 70 is not directly involved in packet marking, but rather is used to update the states of the rate transformation nodes 64 periodically or from time to time. The measurement node 70 obtains measurements S_i for each of the incoming packet flows and forwards these measurements to respective inputs at respective rate transformation nodes 64 during a procedure to update the states of the rate transformation nodes 64. The update procedure is described in more detail below.
Each rate transformation node 64 takes a rate value as input and calculates a transformed rate as output. Consider a traffic mix with n traffic aggregates, where the probability that TA_i is selected (i.e., the packet to be marked belongs to that traffic aggregate) is S_i / (S_1 + ... + S_n), S_i being the instant rate of TA_i, and the input rate for TA_i is chosen uniformly at random from the range [0, S_i]. In the case of equal-sized packets, the output rates of the WFQ or SP component then follow a uniform distribution over the range [0, S_1 + ... + S_n], where S_1 + ... + S_n represents the total rate of all the traffic flowing through the given component.
Note that this output traffic can also be considered as a traffic aggregate at a higher level, and further WFQ or SP components can be applied to it.
In this model, HQoS policies are encoded as a sequence of rate transformations and the marker graph 60 translates the HQoS policy into a single PV. The marker graph comprises a directed acyclic graph G = (V, E), where V comprises the set of rate transformation nodes 64, the source node 62 and the marker node 66, and E is a set of edges connecting the nodes. For packets of subflow i, the starting node represents a random rate selection from the range [0, S_i], and this node has n outgoing edges to rate transformation nodes 64.
Instead of routing a packet through a series of hierarchical queues at the scheduler, the HQoS packet marker routes a uniform random rate selected for a packet through the marker graph 60 along a path selected based on the packet flow identifier, and transforms the random rate at each rate transformation node 64 along the selected path. Upon reaching the marker node 66, the packet is marked using an aggregate TVF for the entire set of packet flows. Based on this packet marking, the hierarchical schedulers at the bottlenecks in the network can be replaced by simple PPV schedulers. HQoS packet marking starts by identifying the packet flow to which the packet belongs. Then a random rate r is determined, which is a uniform random number in [0, S_i]. This rate r is transformed (one or more times) by routing it through the marker graph 60 according to the packet flow identifier (ID) until the marker node 66 of the graph is reached and the PV is determined. Dashed lines in Figure 5 show how r is repeatedly transformed for packet flow 3. The local index l at each rate transformation node 64 is determined based on the input port of the node at which r arrives. Note that the numbering of the input ports l is the same as for the HQoS scheduler 50. The marker node 66 marks the packet with a PV based on the final transformed rate output by the last marker component, i.e., the last rate transformation node 64.
Figures 6-10 illustrate exemplary pseudo code for implementing a packet marker using a marker graph 60 as herein described. The variables used have the following meanings:
• n is the number of packet flows;
• G(V,E) is the marker graph 60;
• S_i is the rate measurement for the ith packet flow at the starting node;
• m is an index for the nodes in the marker graph 60;
• l is a local index for the input ports at each rate transformation node 64 in the marker graph 60;
• m,l indicates the mth node of the graph, reached via the lth input port;
• adj is a function to determine the next node of the graph (i.e., the outgoing edge);
• apply is the application of the rate transformation function;
• f is a function to determine the PV based on the aggregate TVF;
• S_l is the rate measurement for the TA at the lth input port of a rate transformation node 64;
• j is a region in the range of throughput values [0, S_1 + ... + S_n];
• B_j indicates the starting point of region j; δ_j indicates the width of region j;
• R is a rate determination matrix;
• W is a normalized weight matrix; and
• o is a reordering vector indicating a rank of the subflows at a node m in the marker graph 60.
Processing begins at the source node 62 (line 2). A random rate r is selected for a packet arriving at the source node 62 (line 3) and the next node in the path is computed based on the packet flow index i or other packet flow ID (line 4). Lines 5-7 describe the rate transformations performed as the rate r is propagated through the marker graph 60 along the selected path. At each rate transformation node 64, the input rate r_in is adjusted based on the input port l at which it was received (line 6), and the next node in the path is then calculated (line 7). The processing represented by lines 6 and 7 is repeated at each rate transformation node 64 until the marker node 66 is reached. Once the marker node 66 is reached, the final transformed rate r output by the last rate transformation node 64 and the aggregate TVF are used to determine the PV (line 8).
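The traversal just described can be sketched as follows. The `path`/`transform` structure is an illustrative assumption rather than the exact `adj`/`apply` interface of the pseudo code in Figure 6; the `sp_hop` and `atvf` lambdas are hypothetical examples:

```python
import random

def mark(path, flow_rate, atvf, rng=random.random):
    """Walk a pre-selected marker-graph path for one packet.

    path       -- sequence of (transform, input_port) pairs for this flow,
                  as chosen by the selection node from the packet flow ID
    flow_rate  -- measured rate S_i of the flow at the source node
    atvf       -- aggregate TVF applied at the marker node
    """
    r = rng() * flow_rate           # source node: uniform random rate in [0, S_i]
    for transform, port in path:    # each rate transformation node on the path
        r = transform(port, r)      # adjust the rate based on the input port
    return atvf(r)                  # marker node: final transformed rate -> PV

# Usage sketch: a single SP hop that offsets port-1 traffic by 2 Mbps,
# followed by an ATVF that maps the transformed rate linearly onto [0, 22].
sp_hop = lambda port, r: r + [0.0, 2.0][port]
atvf = lambda r: 22.0 - r
```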
Code listing 2, shown in Figure 7, illustrates an exemplary apply function for a rate transformation node 64 implementing SP scheduling for the packet flows flowing through that node. A rate transformation node 64 implementing SP allocates resources to subflows based on absolute priority. The node takes r_in as an input and returns a transformed rate r_out as output. Generally, the rate transformation node 64 adds a rate offset L_l to the input rate r_in to yield the output rate r_out, where l is the local index of the subflow in priority order. The rate transformation node 64 is configured with a vector of rate offsets [L_1, L_2, ..., L_n], one for each subflow traversing that node. The rate offsets constitute state information representing a current state of the rate transformation node 64. This state information is updated periodically as herein described based on the measured rates S_l of the incoming packet flows.
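A minimal sketch of the SP apply function, assuming the offset vector is stored as a plain list indexed by input port (an assumed layout, not the exact representation of Figure 7):

```python
def sp_apply(offsets, port, r_in):
    """SP rate transformation: shift the input rate by the per-port offset.

    offsets[l] holds the cumulative measured rate of all higher-priority
    subflows, so subflow l's rates land in [offsets[l], offsets[l] + S_l]
    and lower-priority traffic maps to higher (less favourable) rates.
    """
    return r_in + offsets[port]
```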
Code listing 3, shown in Figure 8, illustrates an exemplary apply function for a rate transformation node 64 implementing WFQ scheduling for the packet flows flowing through that node. A rate transformation node 64 implementing WFQ allocates resources to the subflows taking part in the scheduling according to the weights of the subflows. The rate transformation node 64 takes a local TA index l and a rate r_in as inputs and returns a transformed rate r_out as output.
The TA index l is a local index that identifies the input port on which the rate r_in is received. The TA index l is in the range [1, n], where n is the number of TAs flowing through the rate transformation node 64. The transformed rate r_out is in the range [0, S_1 + ... + S_n].
In one implementation, the range of possible values of r_out is divided into a number n of regions equal to the number of TAs flowing through the rate transformation node 64. Each region j is associated with a starting value B_j. The rate transformation node 64 is configured with a reordering vector o, a region determination matrix R and a normalized weight matrix W. The reordering vector o comprises a vector of the TA indices l in ranked order based on the ratio S_l/w_l, where S_l is the measured rate of the subflow l and w_l is a weight assigned by the operator. The region determination matrix R is used to map the input rate r_in to a corresponding region j in the possible range of throughput values. Each element R_j,i indicates the input rate at which region j begins to be used for subflow o_i, where o_i indicates the subflow at the ith position in the reordering vector. In this case, the index i indicates a rank of the subflow. The normalized weight matrix W provides normalized weights for each subflow o_i in each region j. Each element W_j,i is the normalized weight of subflow o_i in region j. The reordering vector o, region determination matrix R and normalized weight matrix W comprise state information representing a current state of the rate transformation node 64. This state information is updated periodically based on the measured rates S_l of the packet flows as hereinafter described.
When a packet arrives at an input port l, the rate transformation node 64 determines the rank i of the subflow l based on the reordering vector (line 2). The rate transformation node 64 then maps the input rate r_in to a region j based on the rank i of the subflow in the reordering vector (line 3). Based on the mapping, the rate transformation node 64 computes an adjusted rate r_out (line 4). More particularly, the rate transformation node 64 subtracts R_j,i from the input rate r_in, divides the result by W_j,i to obtain a weighted rate, and adds the result to B_j, the starting throughput value for region j.
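A sketch of the WFQ apply function under an assumed layout for the state (the exact storage format is in Figure 8, not reproduced here); `rank` plays the role of the reordering vector o, inverted for lookup by port:

```python
def wfq_apply(rank, R, W, B, port, r_in):
    """WFQ rate transformation for a packet arriving on input port `port`.

    rank[l]  -- rank i of subflow l in the reordering vector o
    R[j][i]  -- input rate at which region j starts for the rank-i subflow
    W[j][i]  -- normalized weight of the rank-i subflow in region j
    B[j]     -- starting output rate of region j
    """
    i = rank[port]                                        # rank of this subflow
    j = max(k for k in range(len(B)) if r_in >= R[k][i])  # region for this rate
    return B[j] + (r_in - R[j][i]) / W[j][i]              # weighted rate + B_j
```

Dividing by the normalized weight expands the subflow's share of a region onto the full region width, so adjacent regions tile the output range [0, S_1 + ... + S_n] without gaps.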
As noted above, the state information for the rate transformation nodes 64 is updated periodically based on rate measurements [S_1, S_2, ..., S_n] for the packet flows. The rate measurements are not propagated with every packet through the rest of the marker graph 60, but rather are used to update the marker states of the rate transformation nodes 64. During the update of the marker states, each S_i is propagated along a respective path of the marker graph 60.
Each rate transformation node 64 treats an incoming S_l according to the local index of the input port and propagates the sum of all incoming S_l values on its outgoing edge. Each rate transformation node 64 updates its internal state based on the S_l values received on its respective inputs. Afterward, each node sums the incoming rate measurements and propagates the sum on its outgoing edge to the next node. A node performs a state update only when all the rate measurements are available at its input ports.
This update can be implemented in an independent control thread/control node. The marker graph 60 is preconfigured at that node. Periodically, the node reads the rate measurements, calculates the updated internal states of the nodes of the marker graph 60, and updates them in a coordinated manner.
Code listing 4, shown in Figure 9, illustrates an update procedure for updating the rate offsets used by a rate transformation node 64 to implement SP scheduling. The update routine receives the subflow rate measurements [S_1, S_2, ..., S_n] for the TAs at each input and uses them to compute a vector of rate offsets [L_1, L_2, ..., L_n]. Note that the subflow rate measurements are for the subflows at the respective inputs and are received from a preceding node. The rate offsets are ordered in priority order of the subflows. The rate offset L_1 for the highest priority flow is set to 0, so that the output rate for the highest priority flow is equal to the input rate (line 2). For each subsequent rate offset, the rate measurement S_{l-1} is added to the previously computed rate offset L_{l-1} to obtain the next rate offset L_l (lines 3-4). The rate offsets ensure that the random rates assigned to each subflow are bounded to a specific range: for subflow 1 the range is [0, S_1], for subflow 2 the range is [S_1, S_1 + S_2], and so forth up to subflow n, for which the range is [S_1 + ... + S_{n-1}, S_1 + ... + S_n].
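The SP update routine amounts to a prefix sum over the measured rates; a sketch, assuming the rates arrive as a list in priority order:

```python
def sp_update(rates):
    """Recompute SP rate offsets from measured subflow rates.

    rates[l] is the measured rate S_l of subflow l, listed in priority
    order. The first offset is 0 for the highest-priority subflow; each
    later offset adds the preceding subflow's rate, confining subflow l
    to the range [L_l, L_l + S_l].
    """
    offsets = [0.0]
    for s in rates[:-1]:
        offsets.append(offsets[-1] + s)
    return offsets
```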
Code listing 5, shown in Figure 10, illustrates exemplary processing for updating the state information used by a rate transformation node 64 to implement WFQ scheduling. The update routine receives the subflow rate measurements [S_1, S_2, ..., S_n] for the packet flows at each input. Note that the subflow rate measurements are for the subflows at each input port l of that node and are received from a preceding node. The rate transformation node 64 calculates the reordering vector o based on the subflow rate measurements and a weight vector w provided by the operator (Proc. 5, lines 1 and 2). As previously noted, the reordering vector o arranges the indices of the subflows in order based on the ratio S_l/w_l, where S_l is the measured rate of the subflow l and w_l is a corresponding weight assigned by the operator. The weight vector w may be configured as part of the resource sharing policy of the network. Accordingly, these weights may be changed when the network implements new resource sharing policies. That said, these weights are expected to change infrequently, if at all. The starting value B_0 for the first of the n regions is set (line 4). After initializing the rate determination matrix R (lines 5-6), the rate transformation node 64 calculates the normalized weight matrix W (lines 7-9). For each region j, the rate transformation node 64 computes the width δ_j and boundary B_j of the region based on the normalized weights (lines 11-12). Additionally, the rate transformation node 64 calculates the rate determination matrix R (lines 13-14).
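A sketch of the WFQ state update under an assumed state layout (B[j] is the start of region j, R[j][i] the input rate at which the rank-i subflow enters region j, W[j][i] its normalized weight there): subflows are ranked by S_l/w_l, then one region is peeled off per subflow in that order, each region ending when its lowest-ranked active subflow exhausts its measured rate:

```python
def wfq_update(S, w):
    """Recompute WFQ marker state from measured rates S and operator weights w."""
    n = len(S)
    o = sorted(range(n), key=lambda l: S[l] / w[l])   # reordering vector
    rank = [0] * n
    for i, l in enumerate(o):
        rank[l] = i                                   # invert o for port lookup
    B, R, W = [0.0], [], []
    used = [0.0] * n                                  # input rate consumed, per rank
    for j in range(n):
        wsum = sum(w[o[i]] for i in range(j, n))      # weights of active subflows
        Wj = [w[o[i]] / wsum if i >= j else 1.0 for i in range(n)]
        Rj = [used[i] if i >= j else float('inf') for i in range(n)]
        delta = (S[o[j]] - used[j]) / Wj[j]           # width: rank-j subflow exhausts
        for i in range(j, n):
            used[i] += delta * Wj[i]                  # rate consumed in this region
        R.append(Rj)
        W.append(Wj)
        B.append(B[-1] + delta)
    return rank, B[:-1], R, W
```

Applied to the WFQ(2) example below (rates 6, 2 and 4 Mbps with weights 2:1:1), this yields three regions starting at 0, 8 and 11 over the total range [0, 12].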
Based on the foregoing, it can be observed that the rate transformation nodes 64 are updated periodically or from time to time based on the rate measurements S_l of the subflows received at each input port l. A rate transformation node 64 receives rates r_in from 0 to S_l on each input port l and produces a rate r_out from 0 to S_1 + ... + S_n at its output, where n is the number of subflows. If the rate transformation node 64 receives uniform random rates on its inputs, it produces uniform random rates at its output (assuming that packets are of equal size). If packets vary in size, each sample in the distribution can be weighted based on the size of the packet to which it belongs.
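The uniformity property can be checked empirically. The sketch below samples the output of an SP component for a three-TA mix with hypothetical rates, assuming equal-sized packets (so a TA is selected with probability proportional to its rate):

```python
import random

def sp_sample(rates, rng):
    """Draw one SP-transformed output rate for a randomly chosen packet.

    The marked packet belongs to TA i with probability S_i / sum(S); its
    input rate is uniform in [0, S_i] and the SP component adds the
    cumulative rate of the higher-priority TAs.
    """
    pick = rng.uniform(0.0, sum(rates))   # choose the TA by its rate share
    offset = 0.0
    for s in rates:
        if pick < offset + s:
            return offset + rng.uniform(0.0, s)
        offset += s
    return offset

rng = random.Random(42)
samples = [sp_sample([5.0, 3.0, 2.0], rng) for _ in range(100_000)]
```

The samples should be statistically indistinguishable from a uniform draw on [0, 10]: a mean near 5 and roughly 10% of the mass in each unit-wide bin.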
A simple example may help to illustrate the operation of the packet marker as herein described. This example illustrates operation of the marker graph 60 shown in Figure 5, which is based on the HQoS scheduler 50 in Figure 4. The scheduler implements a resource sharing policy for 6 packet flows using four scheduling queues denoted WFQ(1), WFQ(2), SP(3) and WFQ(4). The rates and relative weights for each packet flow arriving at the scheduler are shown in Table 1 below.
Table 1
  Flow           1    2    3    4    5    6
  Rate (Mbps)    5    2    3    6    2    4
  Weight         -    2    1    2    1    1
Flows 2 and 3 are connected to WFQ(1). Flows 4, 5 and 6 are connected to WFQ(2). Flow 1 and the output of WFQ(1) are connected to SP(3). The outputs of SP(3) and WFQ(2) are connected to WFQ(4). The weights assigned to the queues at each scheduler are shown in Table 2.
Table 2
  WFQ(1): flow 2 : flow 3 = 2 : 1
  WFQ(2): flow 4 : flow 5 : flow 6 = 2 : 1 : 1
  SP(3):  flow 1 has strict priority over the output of WFQ(1)
  WFQ(4): SP(3) : WFQ(2) = 1 : 2
Figures 11A-11D illustrate resource sharing based on the policy embodied in Tables 1 and 2. WFQ(1) divides the throughput range of [0, 5] into two regions as shown in Figure 11A. Based on the ratios S_l/w_l of the subflows computed from Table 1, packet flow 2 is allocated resources in region 1 (R1) and packet flow 3 is allocated resources in both region 1 (R1) and region 2 (R2). In R1, resources are shared between flows 2 and 3 at a ratio of 2 to 1 based on Table 2. In R2, all the resources are used by flow 3. The total rate for flow 2 is 2 Mbps. The total rate for flow 3 is 3 Mbps. WFQ(2) divides the throughput range of [0, 12] into three regions as shown in Figure 11B. Based on the ratios S_l/w_l of the subflows computed from Table 1, packet flow 5 is allocated resources in region 1 (R1), packet flow 4 is allocated resources in regions 1 and 2 (R1 and R2), and packet flow 6 is allocated resources in all three regions. In R1, resources are shared between flows 4, 5 and 6 in the ratio 2:1:1 based on Table 2. In R2, resources are shared between flows 4 and 6 in the ratio 2:1 based on Table 2. In region 3 (R3), all the resources are used by flow 6. The total rate for flow 4 is 6 Mbps. The total rate for flow 5 is 2 Mbps. The total rate for flow 6 is 4 Mbps.
SP(3) divides the throughput range of [0, 10] into two regions as shown in Figure 11C. In R1, all resources are used by flow 1, which is a priority flow. In region 2, all resources are used by the TA output from WFQ(1). The total rate for flow 1 is 5 Mbps. The total rate for combined flows 2 and 3 is also 5 Mbps.
Finally, WFQ(4) divides the throughput range of [0, 22] into two regions as shown in Figure 11D. Based on the ratios S_l/w_l of the subflows computed from Table 1, the TA from WFQ(2) is allocated resources in region 1 (R1) and the TA from SP(3) is allocated resources in both regions. In R1, resources are shared between the TA from SP(3) and the TA from WFQ(2) in a ratio of 1:2 based on Table 2. In this region, the TA from SP(3) is allocated 6 Mbps and the TA from WFQ(2) is allocated 12 Mbps. The allocation for the TA from SP(3) is consumed primarily by flow 1, which is a priority flow, leaving only 1 Mbps for flows 2 and 3. The resources in R2 are allocated entirely to flows 2 and 3 in SP(3), so the total allocation to flow 2 is 2 Mbps and the total allocation to flow 3 is 3 Mbps.
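As a quick arithmetic check of the WFQ(4) figures quoted above (a sketch of the region calculation only, not part of the marker itself):

```python
# WFQ(4) shares [0, 22] between SP(3) (rate 10, weight 1) and WFQ(2)
# (rate 12, weight 2).
s_sp, w_sp = 10.0, 1.0
s_w2, w_w2 = 12.0, 2.0

# WFQ(2) exhausts first (12/2 < 10/1), so region 1 ends when its rate runs out.
delta1 = s_w2 * (w_sp + w_w2) / w_w2      # width of region 1: 12 / (2/3) = 18
sp_in_r1 = delta1 * w_sp / (w_sp + w_w2)  # SP(3) share of region 1: 18 * 1/3 = 6
delta2 = s_sp - sp_in_r1                  # region 2 carries the rest of SP(3): 4
```

The two regions thus cover [0, 18] and [18, 22], matching the 6 Mbps and 12 Mbps allocations in region 1 and the total range of 22.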
Figure 12 illustrates an exemplary method 100 of packet marking for multiple hierarchical levels. The packet marker obtains an Aggregate Throughput Value Function (ATVF) that maps throughput values to PVs for a plurality of packet flows (block 110). The packet marker further obtains a marker graph 60 that encodes resource sharing policies for a HQoS hierarchy for the plurality of packet flows as sequences of rate transformations, wherein each sequence of rate transformations corresponds to a path of one of said packet flows through the marker graph 60 from a source node 62 through one or more rate transformation nodes 64 to a marker node 66 (block 120). The packet marker receives a packet associated with one of the packet flows (block 130) and marks the packet with a PV based on a selected path through the marker graph 60 and the ATVF (block 140).
In some embodiments of the method 100, marking the packet with a PV based on a selected path through the marker graph 60 and the ATVF comprises randomly selecting an initial rate for the packet, selecting a path corresponding to one of the sequences of rate transformations based on a flow identifier for the packet flow, applying the selected sequence of rate transformations to transform the initial rate to a transformed rate, and marking the packet with a PV determined based on the transformed rate and ATVF. In some embodiments of the method 100, applying the selected sequence of rate transformations to transform the initial rate to a transformed rate comprises, for each of one or more rate transformation nodes 64 in the selected path, receiving an input rate from a preceding node, wherein the preceding node comprises the source node 62 or a preceding rate transformation node 64, transforming the input rate to a transformed rate based on an input over which the input rate was received, and outputting the transformed rate to a succeeding node, wherein the succeeding node comprises a succeeding rate transformation node 64 or the marker node 66.
In some embodiments of the method 100, the initial rate is a rate selected randomly from a predetermined range determined based on the flow identifier.
In some embodiments of the method 100, the initial rate is a uniform random rate.
Some embodiments of the method further comprise periodically receiving rate measurements for the plurality of packet flows and, for each rate transformation node 64, periodically updating a rate transformation configuration for the rate transformation node 64 based on the rate measurements.
In some embodiments of the method 100, updating the rate transformation configuration comprises, for at least one rate transformation node 64, updating state information used by the rate transformation node 64 to transform input rates received on different inputs to the rate transformation node 64.
In some embodiments of the method 100, updating state information used by the rate transformation node 64 to transform input rates received on different inputs to the rate transformation node 64 comprises, for each input, computing a weight matrix and a rate determination matrix used for implementing weighted fair queuing.
In some embodiments of the method 100, updating state information used by the rate transformation node 64 comprises, for at least one rate transformation node 64, updating rate offsets applied to input rates received on the inputs to the rate transformation node 64 based on priorities associated with the inputs.
Figure 13 illustrates an exemplary network node 200 according to an embodiment configured for marking packets. The network node 200 comprises an ATVF unit 210, a marker graph (MG) unit 220, a receiving unit 230, a marking unit 240 and an optional output unit 250. The various units 210-250 can be implemented by one or more microprocessors, microcontrollers, hardware circuits, software, or a combination thereof.
The ATVF unit 210 is configured to obtain an Aggregate Throughput Value Function (ATVF) that maps throughput values to PVs for a plurality of packet flows. The marker graph (MG) unit 220 is configured to obtain a marker graph 60 that encodes resource sharing policies for a HQoS hierarchy for the plurality of packet flows as sequences of rate transformations, wherein each sequence of rate transformations corresponds to a path of one of said packet flows through the marker graph 60 from a source node 62 through one or more rate transformation nodes 64 to a marker node 66. The receiving unit 230 is configured to receive a packet associated with one of the packet flows. The marking unit 240 is configured to mark the packet with a PV based on a selected path through the marker graph 60 and the ATVF. The output unit 250, when present, is configured to output the marked packet.
In one embodiment, the marking unit 240 comprises a rate selection unit 260, a rate transformation unit 270 and a valuation unit 280. The rate selection unit 260 is configured to select a uniform random rate for the received packet as an input rate. The rate transformation unit 270 is configured to apply a sequence of rate transformations to the input rate to compute a transformed rate based on the HQoS policies embodied in the marker graph 60. The sequence of rate transformations is selected based on a flow identifier associated with the packet flow to which the packet belongs, which is equivalent to selecting a path through the marker graph 60. The valuation unit 280 is configured to determine a PV for the packet based on the transformed rate and the ATVF for all of the packet flows.
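Putting the three units together, the per-packet marking path can be sketched end to end: draw a uniform random initial rate, apply the flow's sequence of rate transformations, and map the transformed rate to a PV through the ATVF. The marker graph, rate range, and step-function ATVF below are invented for illustration only:

```python
import bisect
import random

# Hypothetical step-function ATVF: rate breakpoints mapped to packet
# values.  Lower-throughput regions get higher values, so a PPV
# scheduler drops the high-rate tail of a flow first under congestion.
ATVF_BREAKPOINTS = [10.0, 50.0, 100.0]   # rate region boundaries
ATVF_VALUES = [1000, 400, 100, 10]       # PV for each region

def atvf(rate):
    return ATVF_VALUES[bisect.bisect_left(ATVF_BREAKPOINTS, rate)]

# Marker graph reduced to one transformation sequence (path) per flow.
MAX_RATE = 20.0  # upper bound of the uniform random initial rate
PATHS = {
    "flow-a": [lambda r: 2 * r],        # e.g. a 50% weighted share
    "flow-b": [lambda r: 2 * r,         # same WFQ stage ...
               lambda r: r + 60.0],     # ... behind higher-priority load
}

def mark_packet(flow_id, rng=random):
    r = rng.uniform(0.0, MAX_RATE)      # uniform random initial rate
    for transform in PATHS[flow_id]:    # walk the path through the graph
        r = transform(r)
    return atvf(r)                      # a single PV encodes the hierarchy
```

In this toy setup flow-b's transformed rates always land in the 60-100 region, so all of its packets receive PV 100, while flow-a's packets receive PV 1000 or 400 depending on the randomly drawn rate.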
Other embodiments include a computing device 300 (e.g., network node) configured for packet marking. The computing device 300 may perform one, some, or all of the functions described above, depending on the embodiment. In one example, the computing device 300 is implemented according to the hardware illustrated in Figure 14. The example hardware of Figure 14 comprises processing circuitry 320, memory circuitry 330, and interface circuitry 310. The processing circuitry 320 is communicatively coupled to the memory circuitry 330 and the interface circuitry 310, e.g., via one or more buses. The processing circuitry 320 may comprise one or more microprocessors, microcontrollers, hardware circuits, discrete logic circuits, hardware registers, digital signal processors (DSPs), field-programmable gate arrays (FPGAs), application-specific integrated circuits (ASICs), or a combination thereof. For example, the processing circuitry 320 may comprise a first processing circuit and a second processing circuit that are capable of executing functions in parallel.
The processing circuitry 320 may be programmable hardware capable of executing software instructions stored, e.g., as a machine-readable computer program 340 in the memory circuitry 330. The memory circuitry 330 may comprise any non-transitory machine-readable media known in the art or that may be developed, whether volatile or non-volatile, including but not limited to solid state media (e.g., SRAM, DRAM, DDRAM, ROM, PROM, EPROM, flash memory, solid state drive, etc.), removable storage devices (e.g., Secure Digital (SD) card, miniSD card, microSD card, memory stick, thumb-drive, USB flash drive, ROM cartridge, Universal Media Disc), fixed drive (e.g., magnetic hard disk drive), or the like, wholly or in any combination.
The interface circuitry 310 may be a controller hub configured to control the input and output (I/O) data paths of the computing device 300. Such I/O data paths may include data paths for exchanging signals over a communications network. For example, the interface circuitry 310 may comprise one or more transceivers configured to send and receive communication signals over one or more packet-switched networks, cellular networks, and/or optical networks.
The interface circuitry 310 may be implemented as a unitary physical component, or as a plurality of physical components that are contiguously or separately arranged, any of which may be communicatively coupled to any other, or may communicate with any other via the processing circuitry 320. For example, the interface circuitry 310 may comprise output circuitry (e.g., transmitter circuitry configured to send communication signals over a communications network) and input circuitry (e.g., receiver circuitry configured to receive communication signals over the communications network). Other examples, permutations, and arrangements of the above and their equivalents will be readily apparent to those of ordinary skill.
According to embodiments of the hardware illustrated in Figure 14, the processing circuitry 320 is configured to perform the method 100 illustrated in Figure 11 and/or the method 200 illustrated in Figure 12.
Those skilled in the art will also appreciate that embodiments herein further include corresponding computer programs. A computer program comprises instructions which, when executed on at least one processor of an apparatus, cause the apparatus to carry out any of the respective processing described above. A computer program in this regard may comprise one or more code modules corresponding to the means or units described above.
Embodiments further include a carrier containing such a computer program. This carrier may comprise one of an electronic signal, optical signal, radio signal, or computer readable storage medium.
In this regard, embodiments herein also include a computer program product stored on a non-transitory computer readable (storage or recording) medium and comprising instructions that, when executed by a processor of an apparatus, cause the apparatus to perform as described above.
Embodiments further include a computer program product comprising program code portions for performing the steps of any of the embodiments herein when the computer program product is executed by a computing device. This computer program product may be stored on a computer readable recording medium.
The present invention may, of course, be carried out in other ways than those specifically set forth herein without departing from essential characteristics of the invention. The present embodiments are to be considered in all respects as illustrative and not restrictive, and all changes coming within the meaning and equivalency range of the appended claims are intended to be embraced therein.
Packet marking based on the marker graph 60 can be performed fast. After marking, packets can be put into a scheduler, and the scheduler can be implemented as a simple PPV scheduler. No modifications to the scheduler are required to implement HQoS. Scheduling can be performed independently of the number of flows and without knowledge of the HQoS hierarchy or resource sharing policies at the scheduler.
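For illustration, a PPV-style bottleneck of the kind referred to above needs only the packet values. The sketch below (hypothetical, not the patent's scheduler) drops the lowest-value queued packet when the buffer overflows, so the marker's PVs alone enforce the resource sharing policy:

```python
from collections import deque

class PPVScheduler:
    """Minimal per-packet-value bottleneck sketch (illustrative only).

    Keeps a bounded FIFO of (packet, value) pairs; on overflow the
    queued packet with the smallest PV is dropped, so no per-flow or
    HQoS state is needed at the scheduler.
    """
    def __init__(self, capacity):
        self.capacity = capacity
        self.queue = deque()

    def enqueue(self, packet, value):
        self.queue.append((packet, value))
        if len(self.queue) > self.capacity:
            # Drop the queued packet with the smallest packet value.
            victim = min(self.queue, key=lambda pv: pv[1])
            self.queue.remove(victim)

    def dequeue(self):
        return self.queue.popleft() if self.queue else None
```

A real PPV scheduler would also track a congestion threshold value over time, but even this toy version shows that the scheduler consults nothing except the PV carried by each packet.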
Packet marking based on the marker graph 60 encodes the entire HQoS hierarchy into a single PV. The HQoS policy is determined by the TVF. Packet marking as herein described is performed independently per TVF and can therefore be parallelized.
Packet marking as herein described can be combined with the remarker solution for resource sharing in a virtual networking setting, both on the input side (by changing how r is determined based on the incoming PV) and on the output side (by using the calculated PV for the remarker). With this approach, a complex HQoS hierarchy (e.g., slicing) can be decomposed, and the implementation of the whole HQoS hierarchy can be optimized based on processing capabilities and information availability.

Claims

What is claimed is:
1. A method (100) of marking packets, the method comprising:
    obtaining (110) an Aggregate Throughput Value Function (ATVF) that maps throughput values to packet values for a plurality of packet flows;
    obtaining (120) a marker graph (60) that encodes resource sharing policies for a Hierarchical Quality of Service (HQoS) hierarchy for the plurality of packet flows as sequences of rate transformations, wherein each sequence of rate transformations corresponds to a path of one of said packet flows through the marker graph (60) from a source node through one or more rate transformation nodes to a marker node;
    receiving (130) a packet associated with one of said packet flows; and
    marking (140) the packet with a packet value based on a selected path through the marker graph (60) and the ATVF.
2. The method (100) of claim 1, wherein marking the packet with a packet value based on a selected path through the marker graph (60) and the ATVF comprises:
    randomly selecting an initial rate for the packet;
    selecting, based on a flow identifier for the packet flow, a path corresponding to one of said sequences of rate transformations;
    applying the selected sequence of rate transformations to transform the initial rate to a transformed rate; and
    marking the packet with a packet value determined based on the transformed rate and the ATVF.
3. The method (100) of claim 2, wherein applying the selected sequence of rate transformations to transform the initial rate to a transformed rate comprises, for each of one or more rate transformation nodes in the selected path:
    receiving an input rate from a preceding node, wherein the preceding node comprises the source node or a preceding rate transformation node;
    transforming the input rate to a transformed rate based on an input over which the input rate was received; and
    outputting the transformed rate to a succeeding node, wherein the succeeding node comprises a succeeding rate transformation node or the marker node.
4. The method (100) of any one of claims 2 - 3, wherein the initial rate is a rate selected randomly from a predetermined range determined based on the flow identifier.
5. The method (100) of any one of claims 2 - 4, wherein the initial rate is a uniform random rate.
6. The method (100) of any one of claims 1 - 5, further comprising:
    periodically receiving rate measurements for the plurality of packet flows; and
    for each rate transformation node, periodically updating a rate transformation configuration for the rate transformation node based on the rate measurements.
7. The method (100) of claim 6, wherein updating the rate transformation configuration comprises, for at least one rate transformation node, updating state information used by the rate transformation node to transform input rates received on different inputs to the rate transformation node.
8. The method (100) of claim 7, wherein updating state information used by the rate transformation node to transform input rates received on different inputs to the rate transformation node comprises, for each input, computing a weight matrix and a rate determination matrix used for implementing weighted fair queuing.
9. The method (100) of claim 6, wherein updating the rate transformation configuration comprises, for at least one rate transformation node, updating rate offsets applied to input rates received on the inputs to the rate transformation node based on priorities associated with the inputs.
10. A network node (200, 300) configured for marking packets with a packet value, the network node being configured to:
    obtain an Aggregate Throughput Value Function (ATVF) that maps throughput values to packet values for a plurality of packet flows;
    obtain a marker graph (60) that encodes resource sharing policies for a Hierarchical Quality of Service (HQoS) hierarchy for the plurality of packet flows as sequences of rate transformations, wherein each sequence of rate transformations corresponds to a path of one of said packet flows through the marker graph (60) from a source node through one or more rate transformation nodes to a marker node;
    receive a packet associated with one of said packet flows; and
    mark the packet with a packet value based on a selected path through the marker graph (60) and the ATVF.
11. The network node (200, 300) of claim 10, further configured to perform the method of any one of claims 2 - 9.
12. A computing device (300) configured for marking packets with a packet value, the computing device comprising:
    interface circuitry (310) for sending and receiving packets in one or more packet flows; and
    processing circuitry (320) configured to:
        obtain an Aggregate Throughput Value Function (ATVF) that maps throughput values to packet values for a plurality of packet flows;
        obtain a marker graph (60) that encodes resource sharing policies for a Hierarchical Quality of Service (HQoS) hierarchy for the plurality of packet flows as sequences of rate transformations, wherein each sequence of rate transformations corresponds to a path of one of said packet flows through the marker graph (60) from a source node through one or more rate transformation nodes to a marker node;
        receive a packet associated with one of said packet flows; and
        mark the packet with a packet value based on a selected path through the marker graph (60) and the ATVF.
13. The computing device (300) of claim 12, wherein the processing circuitry (320) is further configured to perform the method of any one of claims 2 - 9.
14. A computer program (340) comprising executable instructions that, when executed by processing circuitry (320) in a network node (300), cause the network node (300) to perform any one of the methods of claims 1 - 9.
15. A carrier containing the computer program (340) of claim 14, wherein the carrier is one of an electronic signal, optical signal, radio signal, or computer readable storage medium.
PCT/IB2021/052666 2021-03-30 2021-03-30 Marker graph for hqos WO2022208135A1 (en)

Priority Applications (3)

- PCT/IB2021/052666 (WO2022208135A1), priority date 2021-03-30, filing date 2021-03-30: Marker graph for hqos
- EP21717544.7A (EP4315793A1), priority date 2021-03-30, filing date 2021-03-30: Marker graph for hqos
- US18/273,887 (US20240098028A1), priority date 2021-03-30, filing date 2021-03-30: Marker Graph for HQoS


Publications (1)

- WO2022208135A1, published 2022-10-06

Family ID: 75439151


Citations (1)

- US 2020/0382427 A1, Telefonaktiebolaget LM Ericsson (Publ), "Probabilistic Packet Marking with Fast Adaptation Mechanisms", priority date 2018-01-22, published 2020-12-03 (cited by examiner)


Non-Patent Citations (1)

- LAKI, Sandor, et al., "Core-Stateless Forwarding With QoS Revisited: Decoupling Delay and Bandwidth Requirements", IEEE/ACM Transactions on Networking, vol. 29, no. 2, 9 December 2020, pages 503-516, ISSN 1063-6692, DOI: 10.1109/TNET.2020.3041235 (cited by examiner)

Also Published As

- US20240098028A1, published 2024-03-21
- EP4315793A1, published 2024-02-07


Legal Events

- WWE (WIPO information: entry into national phase): ref document number 18273887; country of ref document: US
- WWE (WIPO information: entry into national phase): ref document number 2021717544; country of ref document: EP
- NENP (non-entry into the national phase): ref country code: DE
- ENP (entry into the national phase): ref document number 2021717544; country of ref document: EP; effective date: 20231030