WO2018082788A1 - Efficient handling of loss and/or delay sensitive sporadic data traffic - Google Patents

Efficient handling of loss and/or delay sensitive sporadic data traffic

Info

Publication number
WO2018082788A1
WO2018082788A1 (PCT/EP2016/076823)
Authority
WO
WIPO (PCT)
Prior art keywords
data packets
node
classes
data
guaranteed
Prior art date
Application number
PCT/EP2016/076823
Other languages
French (fr)
Inventor
Szilveszter NÁDAS
János FARKAS
Balázs VARGA
Original Assignee
Telefonaktiebolaget Lm Ericsson (Publ)
Priority date
Filing date
Publication date
Application filed by Telefonaktiebolaget Lm Ericsson (Publ) filed Critical Telefonaktiebolaget Lm Ericsson (Publ)
Priority to PCT/EP2016/076823 priority Critical patent/WO2018082788A1/en
Publication of WO2018082788A1 publication Critical patent/WO2018082788A1/en

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L47/00 Traffic control in data switching networks
    • H04L47/10 Flow control; Congestion control
    • H04L47/215 Flow control; Congestion control using token-bucket
    • H04L47/24 Traffic characterised by specific attributes, e.g. priority or QoS
    • H04L47/2425 Traffic characterised by specific attributes, e.g. priority or QoS for supporting services specification, e.g. SLA
    • H04L47/2441 Traffic characterised by specific attributes, e.g. priority or QoS relying on flow classification, e.g. using integrated services [IntServ]
    • H04L47/29 Flow control; Congestion control using a combination of thresholds
    • H04L47/50 Queue scheduling
    • H04L47/52 Queue scheduling by attributing bandwidth to queues
    • H04L47/525 Queue scheduling by attributing bandwidth to queues by redistribution of residual bandwidth
    • H04L47/527 Quantum based scheduling, e.g. credit or deficit based scheduling or token bank
    • H04L47/62 Queue scheduling characterised by scheduling criteria
    • H04L47/6215 Individual queue per QOS, rate or priority

Definitions

  • the present invention relates to methods for handling data traffic in a data network and to corresponding devices and systems.
  • IP Internet Protocol
  • TCP Transmission Control Protocol
  • different kinds of data traffic may differ with respect to their sensitivity concerning delay which occurs while data packets of the data traffic are forwarded through the communication network, e.g., in terms of a per-hop delay or an end-to-end delay.
  • For some types of data traffic, the delay of the data packets is typically not very relevant. For realtime data transfers, such as multimedia streaming, excessive delay of a data packet may adversely impact the user experience because typically data packets need to be available at the receiver at a certain time, and later received data packets are useless.
  • Certain types of traffic may also be loss-sensitive, so that it may be desirable to control the forwarding of the data packets in such a way that dropping of data packets is avoided as far as possible.
  • According to an embodiment of the invention, a method of handling data traffic in a data network is provided. A node of the data network classifies data packets into multiple different classes. Further, the node determines those of the classes which have zero traffic load. Depending on the classes which have zero traffic load, the node decides for at least one of the other classes between transmitting the data packets of the class as guaranteed data packets which are subject to a guarantee of not being dropped and not delayed by more than a certain delay limit and transmitting the data packets as non-guaranteed data packets which are not subject to the guarantee. According to a further embodiment of the invention, a method of handling data traffic in a data network is provided.
  • a node of the data network determines multiple different classes of data packets. Further, the node determines multiple different groups of the classes for which the data packets are to be treated as guaranteed data packets which are subject to a guarantee of not being dropped and not delayed by more than a certain delay limit. For each of the groups, the node determines, based on a worst case calculation of a delay experienced by a data packet, a configuration of a resource contingent for transmission of the guaranteed data packets. The configuration defines a maximum amount of resources which corresponds to at least a minimum amount of resources required to meet the guarantee.
  • a node for a data network is provided.
  • The node is configured to classify data packets into multiple different classes. Further, the node is configured to determine those of the classes which have zero traffic load. Further, the node is configured to, depending on the classes which have zero traffic load, decide for at least one of the other classes between transmitting the data packets of the class as guaranteed data packets which are subject to a guarantee of not being dropped and not delayed by more than a certain delay limit and transmitting the data packets as non-guaranteed data packets which are not subject to the guarantee.
  • a node for a data network is provided. The node is configured to determine multiple different classes of data packets.
  • the node is configured to determine multiple different groups of the classes for which the data packets are to be treated as guaranteed data packets which are subject to a guarantee of not being dropped and not delayed by more than a certain delay limit. For each of the groups, the node is configured to determine, based on a worst case calculation of a delay experienced by a data packet, a configuration of a resource contingent for transmission of the guaranteed data packets. The configuration defines a maximum amount of resources which corresponds to at least a minimum amount of resources required to meet the guarantee.
  • A computer program or computer program product is provided, e.g., in the form of a non-transitory storage medium, which comprises program code to be executed by at least one processor of a node for a data network. Execution of the program code causes the node to classify data packets into multiple different classes. Further, execution of the program code causes the node to determine those of the classes which have zero traffic load.
  • Further, execution of the program code causes the node to, depending on the classes which have zero traffic load, decide for at least one of the other classes between transmitting the data packets of the class as guaranteed data packets which are subject to a guarantee of not being dropped and not delayed by more than a certain delay limit and transmitting the data packets as non-guaranteed data packets which are not subject to the guarantee.
  • A further computer program or computer program product is provided, e.g., in the form of a non-transitory storage medium, which comprises program code to be executed by at least one processor of a node for a data network. Execution of the program code causes the node to determine multiple different classes of data packets. Further, execution of the program code causes the node to determine multiple different groups of the classes for which the data packets are to be treated as guaranteed data packets which are subject to a guarantee of not being dropped and not delayed by more than a certain delay limit.
  • execution of the program code causes the node to determine, based on a worst case calculation of a delay experienced by a data packet, a configuration of a resource contingent for transmission of the guaranteed data packets.
  • the configuration defines a maximum amount of resources which corresponds to at least a minimum amount of resources required to meet the guarantee. Details of such embodiments and further embodiments will be apparent from the following detailed description of embodiments.
  • FIG. 1 schematically illustrates a scenario in which data traffic is handled according to an embodiment of the invention.
  • Fig. 2 schematically illustrates an example of a data packet as utilized in an embodiment of the invention.
  • Fig. 3 schematically illustrates an architecture of a scheduler according to an embodiment of the invention.
  • Fig. 4 schematically illustrates a set of token buckets as utilized according to an embodiment of the invention.
  • Fig. 5 shows an example of configuring a token bucket according to an embodiment of the invention.
  • Fig. 6 shows a flowchart for schematically illustrating a method of controlling forwarding of data packets according to an embodiment of the invention.
  • Fig. 7 shows a flowchart for schematically illustrating a method of controlling treatment of traffic classes according to an embodiment of the invention.
  • Fig. 8 shows a method of determining resource contingents according to an embodiment of the invention.
  • Fig. 9 shows a method of handling data traffic according to an embodiment of the invention.
  • Fig. 10 shows a block diagram for illustrating functionalities of a network node according to an embodiment of the invention.
  • Fig. 11 shows a method of handling data traffic according to an embodiment of the invention.
  • Fig. 12 shows a block diagram for illustrating functionalities of a network node according to an embodiment of the invention.
  • Fig. 13 schematically illustrates structures of a network node according to an embodiment of the invention.
  • the illustrated concepts relate to handling data traffic in a data network. Specifically, the concepts relate to controlling forwarding of data packets of the data traffic by a node of such data network.
  • the data network may for example be part of a communication network.
  • An example of such a communication network is a wireless communication network, e.g., based on GSM (Global System for Mobile Communication), UMTS (Universal Mobile Telecommunications System), or LTE (Long Term Evolution) technologies specified by 3GPP (3rd Generation Partnership Project).
  • the data packets may be IP data packets, optionally in connection with further protocols, e.g., an Ethernet framing protocol, TCP, UDP (User Datagram Protocol), or a tunneling protocol such as GTP (General Packet Radio Service Tunneling Protocol).
  • The data traffic may be forwarded by network nodes of the data network, e.g., one or more switches, routers, or gateways.
  • An example of a corresponding scenario is illustrated in Fig. 1, where data traffic provided by multiple traffic sources 110 is forwarded by a network node 120 to multiple traffic destinations 130.
  • Fig. 1 shows a management node 150 which may be responsible for configuring the nodes responsible for transmission of the data traffic, such as the traffic sources 110 or the node 120.
  • Further, the management node 150 may be responsible for dimensioning the capacity of links used for transmission of the data traffic, e.g., by configuring resource contingents.
  • Each pair of a certain traffic source 110 and a certain traffic destination may define a flow.
  • the packets of the same flow may carry the same source address and the same destination address.
  • the data traffic is assumed to include data traffic which is sporadic, i.e., has a variable traffic load with time intervals of more than a certain minimum duration where the traffic load is zero.
  • The minimum duration may be in the range of 1 ms to 1 s, although this value may differ between different types of sporadic traffic. In some cases, the sporadic data traffic can be very sparse.
  • An extreme case would be data traffic which occurs only once in a lifetime of a certain device, e.g., data traffic related to deployment of an airbag.
  • the sporadic data traffic could have the form of short bursts.
  • Such bursts may have various lengths and occur at various intervals, e.g., bursts of some seconds to minutes which typically occur about once in a week to once in a day, bursts of some seconds which occur about once in an hour to once in a day, or bursts of some milliseconds which occur about once in a second to once in a minute.
  • Data traffic which is not sporadic will in the following also be referred to as continuous data traffic.
  • the sporadic data traffic may for example be distinguished on flow level from other data traffic. That is to say, by considering the transmission timing of the data packets of a certain flow, a flow can be categorized as either carrying sporadic data traffic or as carrying continuous data traffic.
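  • As an illustration of this flow-level distinction, the following minimal Python sketch categorizes a flow as sporadic or continuous from its packet timestamps; the function name and the idle threshold are illustrative assumptions and are not taken from the application text.

```python
# Illustrative sketch (not from the application text): categorize a flow as
# sporadic or continuous from observed packet timestamps, depending on
# whether idle intervals of at least `min_idle` seconds occur. The threshold
# value is an assumption within the 1 ms to 1 s range mentioned above.

def is_sporadic(packet_times_s, min_idle=0.001):
    """Return True if the flow shows at least one idle interval >= min_idle."""
    times = sorted(packet_times_s)
    return any(b - a >= min_idle for a, b in zip(times, times[1:]))

bursty = [0.000, 0.001, 0.002, 1.000, 1.001, 1.002]   # short bursts, long gaps
steady = [i * 0.0001 for i in range(1000)]            # continuous 0.1 ms spacing
print(is_sporadic(bursty))   # True  -> sporadic
print(is_sporadic(steady))   # False -> continuous
```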
  • the sporadic traffic could also be defined on other levels, such as on the level of traffic type, e.g., by considering one or more underlying protocols of the data traffic, or on the level of service types, e.g., by considering the category of a service which generates the data traffic or even the individual service which generates the data traffic.
  • the sporadic data traffic could be a flow associated with a certain traffic type and/or service type.
  • the data traffic may also be subject to various requirements concerning the loss of data packets or delays of data packets.
  • loss of data packets or excessive delay of data packets may be critical. For example, this could be the case if the data traffic conveys realtime control information for a robotic system.
  • loss of data packets or excessive delay of data packets may be less critical, e.g., for data traffic related to audio or video streaming.
  • there may be data traffic which is sporadic and sensitive to loss of data packets or excessive delay of data packets e.g., data traffic carrying alarm messages by a robotic system. In the examples as further explained below, it is assumed that the data traffic can be classified in terms of its criticality of loss or excessive delay of data packets.
  • the data traffic may be subject to different treatment.
  • data packets of the data traffic may be treated as either guaranteed data packets or non-guaranteed data packets.
  • The guaranteed data packets are subject to a guarantee that the data packet forwarded by the node is not delayed by more than a certain delay limit and not dropped. This makes it possible to meet the requirements of critical data traffic.
  • The guarantee would need to be met on all links included in a multi-hop path, e.g., on a link from the traffic source 110 to the node 120 and on a link from the node 120 to the traffic destination 130.
  • For the non-guaranteed data packets, this guarantee does not apply, i.e., the non-guaranteed data packets may be dropped. Accordingly, the non-guaranteed data packets may be transmitted in a resource efficient manner.
  • the non-guaranteed data packets may nonetheless benefit from the delay limit as defined for the guaranteed data packets. That is to say, if a non-guaranteed data packet is not dropped, it could nonetheless be transmitted within the same delay limit as guaranteed for the guaranteed data packets.
  • the transmission of the data traffic by the network nodes may be managed by a respective scheduler of the node.
  • Fig. 1 illustrates a scheduler 125 of the node 120.
  • the scheduler 125 manages forwarding of the data traffic by the node 120.
  • Other nodes which transmit the data traffic, e.g., the traffic sources 110 or intermediate nodes between the traffic sources 110 and the node 120 or intermediate nodes between the node 120 and the traffic destinations 130, may be provided with such a scheduler.
  • The scheduler operates on the basis of a scheduling algorithm which makes it possible to meet the guarantee for the guaranteed data packets.
  • the scheduling algorithm reserves one or more resource contingents which are filled with sufficient resources to meet the guarantee.
  • The resource contingent(s) may be managed on the basis of one or more token buckets, and a filling rate of the token bucket(s) and a size of the token bucket(s) may be set in such a way that the guarantee is met. This is accomplished on the basis of a worst case calculation for the delay experienced by a transmitted data packet.
  • the worst case calculation may be based on known, estimated, or measured characteristics of the data traffic transmitted by the node, e.g., data rates, maximum size, or burstiness of the data traffic.
  • the maximum size of the resource contingent should be limited because the worst case delay is found to be minimal when the maximum size of the resource contingent is equal to the minimum amount of resources required to meet the guarantee and increases with the maximum size of the resource contingent. This can be attributed to an overall limitation of the available resources. For example, if the amount of reserved resources increases, this means that transmission of more data packets over the same bottleneck (e.g., an interface with limited capacity) is admitted, which typically results in increased worst case delay for transmission of data packets over this bottleneck.
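  • The following simplified sketch illustrates this relationship under classical token-bucket/FIFO assumptions, not under the urgency-based analysis of the reference cited below: if every flow i is constrained by a bucket of size b_i and rate r_i and the sum of the rates does not exceed the link rate C, the backlog never exceeds the sum of the bucket sizes, so the worst case queuing delay is bounded by that sum divided by C, and enlarging any bucket enlarges the bound. All numbers are illustrative.

```python
# Simplified, illustrative worst-case delay bound (classical token-bucket /
# FIFO reasoning, NOT the analysis of the cited urgency-based scheduler):
# with per-flow buckets (b_i, r_i) and sum(r_i) <= C, the backlog never
# exceeds sum(b_i), so the queuing delay is at most sum(b_i) / C. A larger
# maximum contingent size (bucket size) therefore yields a larger bound.

def fifo_delay_bound(buckets, link_rate_bps):
    """buckets: list of (bucket_size_bits, token_rate_bps) per flow."""
    total_rate = sum(r for _, r in buckets)
    if total_rate > link_rate_bps:
        raise ValueError("aggregate token rate exceeds link capacity")
    total_burst = sum(b for b, _ in buckets)
    return total_burst / link_rate_bps  # seconds

flows = [(12_000, 1_000_000), (12_000, 1_000_000)]        # minimal bucket sizes
print(fifo_delay_bound(flows, 10_000_000))                 # 0.0024 s
print(fifo_delay_bound([(24_000, 1_000_000),               # one bucket doubled
                        (12_000, 1_000_000)], 10_000_000))  # 0.0036 s -> larger bound
```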
  • the worst case delay calculation may be based on various models or calculation methods.
  • An example of how the worst case delay can be calculated in a scenario in which multiple token buckets are used for providing a delay guarantee for multiple flows is given in "Urgency-Based Scheduler for Time-Sensitive Switched Ethernet Networks" by J. Specht and Soheil Samii, 28th Euromicro Conference on Real-Time Systems (ECRTS), Toulouse, France, July 5-8, 2016.
  • the maximum size of the resource contingent(s) may be set to be larger than a minimum size needed to meet the guarantee. This increased size of the resource contingent(s) is considered in the worst-case delay calculation.
  • the resource contingent may include resources in excess of the minimum amount required to meet the guarantee, in the following referred to as excess resources.
  • excess resources may also be present due to the characteristics of the data traffic which is subject to the guarantee. In particular, if the data traffic is sporadic, it will typically have peak traffic loads which are much higher than the average traffic load. Since the worst case calculation typically needs to consider situations where the peak traffic load is present, the dimensioning of the resource contingent based on the worst case calculation will have the effect that excess resources are present under average traffic load.
  • the scheduler may use any excess resources for forwarding non-guaranteed data packets. In particular, if sufficient excess resources are present, the scheduler may use these excess resources for forwarding one or more non-guaranteed data packet. If no sufficient excess resources are present, the scheduler may decide to drop the non-guaranteed data packet. Accordingly, resources not used for forwarding the guaranteed data packets can be efficiently used for forwarding the non-guaranteed data packets.
  • It is noted that the elements of Fig. 1 are merely illustrative and that the data network may include additional nodes and that various connection topologies are possible in such a data network.
  • Additional nodes could be intermediate nodes between the traffic sources 110 and the network node 120 or intermediate nodes between the network node 120 and the traffic destinations 130.
  • Such intermediate nodes could forward the data traffic in a similar manner as explained for the network node 120.
  • The distinction between the guaranteed data packets and the non-guaranteed data packets may be based on a marking of the data packets with a value, in the following referred to as packet value (PPV). An example of how this may be accomplished is illustrated in Fig. 2.
  • Fig. 2 shows an exemplary data packet 200.
  • the data packet may be an IP data packet.
  • the data packet 200 may be based on various kinds of additional protocols, e.g., a transport protocol such as TCP or UDP, or a tunnelling protocol such as GTP.
  • the data packet 200 includes a header section 210 and a payload section 220.
  • the payload section 220 may for example include user data.
  • the payload section 220 may also include one or more encapsulated data packets.
  • the header section 210 typically includes various kinds of information which is needed for propagation of the data packet 200 through the data communication system. For example, such information may include a destination address and/or source address.
  • the header section 210 includes a label 212 indicating the PPV.
  • the label 212 may include a scalar value, e.g., in the range of 0 to 255, to indicate the PPV.
  • The label 212 may for example be included in a corresponding information field in the header section 210.
  • a corresponding information field may be defined for the above-mentioned protocols or one or more existing information fields may be reused.
  • the header section may also include a delay indicator 214.
  • the delay indicator may for example be used for determining a delay class of the data packet. Different delay limits may be defined depending on the delay indicator.
  • The PPV may represent a level of importance of the data packet, e.g., in terms of a network-level gain when the data packet is delivered. Accordingly, nodes of the data network, including the network node 120, should aim at utilizing their available resources to maximize the total PPV of the successfully transmitted data packets.
  • The PPV may be considered in relation to the number of bits in the data packet, i.e., the value included in the label 212 may be treated as a value per bit, which enables direct comparison of data packets of different sizes. Accordingly, for the same marking in the label 212, a larger data packet would have a higher PPV than a smaller data packet. On the other hand, transmission of the larger data packet requires more resources than transmission of the smaller data packet.
  • The PPV may be set by an operator of the data network according to various criteria, e.g., by assigning a higher PPV to data traffic of premium users or emergency traffic. Accordingly, the PPV may be used to express the importance of data packets relative to each other, which in turn may be utilized by the nodes 110, 120 (or other nodes of the data network) for controlling how to utilize their available resources for transmission of the data packets, e.g., by using resources for forwarding a data packet with high PPV at the expense of a data packet with low PPV, which may then be delayed or even dropped.
  • a threshold may be defined. Based on a comparison of the PPV to the threshold, the network node 120 can decide whether the data packet is a guaranteed data packet or a non-guaranteed data packet. In particular, if for a given data packet the PPV exceeds the threshold the network node 120 may treat the data packet as a guaranteed data packet. Otherwise, the network node 120 may treat the data packet as a non-guaranteed data packet.
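  • A minimal sketch of this threshold decision is given below; treating the marked value as a value per bit follows the preceding discussion of the label 212, and the function name, numbers, and the optional per-bit flag are illustrative assumptions.

```python
# Minimal sketch of the guaranteed / non-guaranteed decision based on the
# packet value (PPV) carried in the header. Treating the marked value as a
# value per bit (so that larger packets of equal marking carry a higher
# total value) is optional; the threshold is operator-defined.

def is_guaranteed(ppv_per_bit, size_bits, threshold, per_bit=True):
    value = ppv_per_bit * size_bits if per_bit else ppv_per_bit
    return value > threshold

print(is_guaranteed(ppv_per_bit=200, size_bits=12_000, threshold=1_000_000))  # True
print(is_guaranteed(ppv_per_bit=50,  size_bits=12_000, threshold=1_000_000))  # False
```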
  • The delay indicator 214 may for example indicate a maximum delay the data packet may experience when being forwarded, e.g., in terms of a per-hop delay or in terms of an end-to-end delay. This information may then be applied by the network node 120 for setting the above-mentioned delay limit of the guarantee.
  • the scheduler may thus operate by providing the guarantee with respect to loss and delay and using the PPV for deciding whether a certain data packet is to be treated as a guaranteed data packet or as a non-guaranteed data packet.
  • the guaranteed data packets e.g., the data packets for which the PPV is above the threshold, may then be subjected to traffic shaping.
  • the non-guaranteed packets may be filtered, either before being stored in a queue or when being output from the queue.
  • Fig. 3 schematically illustrates an exemplary architecture which may be used for implementing the above-described loss and delay guaranteed transmission of the data packets.
  • the architecture of Fig. 3 may be used to implement the above- mentioned scheduler.
  • The architecture of Fig. 3 includes an input filter 310 in which the data packets 200 received by the network node 120 are subjected to input filtering. Further, the architecture includes a queue 320 for temporarily storing the data packets 200 passed through the input filter 310, and an interleaved shaper 330 for controlling forwarding of the data packets 200 from the queue 320. Further, the architecture provides a controller 340 which may be used for tuning operation of the input filter 310 and/or of the interleaved shaper 330.
  • the controller may collect statistics from the input filter 310, e.g., an average drop rate, from the queue 320, e.g., a queue length and/or an average queueing time, and/or from the interleaved shaper 330, e.g., an average drop rate or an average delay, and tune the operation of the input filter 310 and/or of the interleaved shaper 330 depending on the collected statistics.
  • the input filtering by the input filter 310 involves determining for each of the data packets 200 whether the data packet 200 is a guaranteed data packet or a non-guaranteed data packet.
  • the input filter 310 passes the guaranteed data packets to a queue 320.
  • For a non-guaranteed data packet 200, the input filter 310 can decide between dropping the non-guaranteed data packet 200 or passing the non-guaranteed data packet 200 to the queue 320. This may be accomplished depending on the PPV. Further, the input filter 310 may also decide whether to drop the non-guaranteed data packet 200 depending on a resource contingent managed on the basis of a set 312 of one or more token buckets (TB).
  • a function f(PPV, L) may be applied for determining the number of tokens required to let the non-guaranteed data packet 200 pass.
  • The number of required tokens will typically increase with increasing size L of the packet, but decrease with increasing PPV. Accordingly, non-guaranteed data packets 200 with higher PPV have a higher likelihood of being passed to the queue 320.
  • the controller 340 may tune parameters of the function f(PPV, L) depending on the statistics provided by the input filter 310, the queue 320, and/or the interleaved shaper 330.
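  • The application does not fix a concrete form of f(PPV, L), only that the token demand increases with the packet size L and decreases with the PPV. The sketch below therefore assumes one possible form and shows the resulting pass/drop decision of the input filter for a non-guaranteed data packet; all names and numbers are illustrative.

```python
# Illustrative input-filter decision for a non-guaranteed packet. Only the
# monotonicity of f(PPV, L) is given in the text; the concrete form below
# (L scaled by a decreasing function of the PPV) is an assumption.

def f_required_tokens(ppv, length_bits, ppv_max=255):
    return length_bits * (1.0 + (ppv_max - ppv) / ppv_max)  # in [L, 2L]

def input_filter_pass(bucket_tokens, ppv, length_bits):
    """Return (passed?, remaining tokens) for a non-guaranteed packet."""
    need = f_required_tokens(ppv, length_bits)
    if bucket_tokens >= need:
        return True, bucket_tokens - need
    return False, bucket_tokens

print(input_filter_pass(20_000, ppv=240, length_bits=12_000))  # passed to the queue
print(input_filter_pass(20_000, ppv=10,  length_bits=12_000))  # dropped
```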
  • The interleaved shaper 330 controls forwarding of the data packets 200 from the queue 320. This involves taking the first data packet 200 from the queue 320 and again determining whether the data packet 200 is a guaranteed data packet or a non-guaranteed data packet. If the data packet 200 is a guaranteed data packet, it is forwarded by the interleaved shaper 330, without delaying it in excess of the delay limit. If the data packet 200 is a non-guaranteed data packet, the interleaved shaper 330 may decide between dropping the non-guaranteed data packet 200 or forwarding the non-guaranteed data packet 200.
  • The interleaved shaper 330 may utilize a resource contingent managed on the basis of a set 332 of one or more token buckets (TB) for deciding whether to drop a non-guaranteed data packet 200 and when to forward a guaranteed data packet 200. This may also consider the size of the data packet 200.
  • the interleaved shaper 330 forwards a guaranteed data packet 200 when there are sufficient tokens in a corresponding token bucket.
  • the interleaved shaper 330 forwards a non-guaranteed data packet 200 only if there are sufficient excess resources. This may be the case if a token bucket is filled beyond a minimum amount of tokens which is required to meet the delay guarantee.
  • In this case, the interleaved shaper 330 may forward the non-guaranteed data packet 200 using the excess tokens.
  • a function g(PPV, L) may be applied for determining the number of tokens required for forwarding a guaranteed or non-guaranteed data packet 200.
  • the number of required tokens will typically increase with increasing size L of the packet, but decrease with increasing PPV. Accordingly, non-guaranteed data packets 200 with higher PPV have a higher likelihood of being forwarded by the interleaved shaper 330.
  • the controller 340 may tune parameters of the function g(PPV, L) depending on the statistics provided by the input filter 310, the queue 320, and/or the interleaved shaper 330.
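  • Analogously, the following sketch illustrates the forwarding decision of the interleaved shaper: a guaranteed data packet waits until enough tokens are available, while a non-guaranteed data packet may only consume tokens above the minimum token level required for the guarantee. The form of g(PPV, L) and the data structures are assumptions for illustration.

```python
# Sketch of the interleaved-shaper decision described above. A guaranteed
# packet is forwarded once the bucket holds enough tokens (it may wait, but
# is never dropped); a non-guaranteed packet may only use tokens in excess
# of `min_tokens`, the amount reserved for the guarantee. g(PPV, L) is an
# assumed form, as for f above.

def g_required_tokens(ppv, length_bits, ppv_max=255):
    return length_bits * (1.0 + (ppv_max - ppv) / ppv_max)

def shaper_decision(tokens, min_tokens, guaranteed, ppv, length_bits):
    """Return ('forward' | 'wait' | 'drop', tokens_after)."""
    need = g_required_tokens(ppv, length_bits)
    if guaranteed:
        if tokens >= need:
            return "forward", tokens - need
        return "wait", tokens                  # delayed at most, never dropped
    excess = max(0.0, tokens - min_tokens)     # only tokens above the minimum level
    if excess >= need:
        return "forward", tokens - need
    return "drop", tokens

print(shaper_decision(40_000, 30_000, guaranteed=False, ppv=200, length_bits=6_000))
print(shaper_decision(32_000, 30_000, guaranteed=False, ppv=200, length_bits=6_000))
```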
  • Fig. 4 shows an example of a set 400 of token buckets 410, 420, 430 which could be used in the input filter 310 and/or in the interleaved shaper.
  • the set 400 includes per flow buckets 410, denoted as TBi, where i is an index of the corresponding flow.
  • the per flow token buckets 410 include one token bucket 410 per flow of the data traffic transmitted by the network node.
  • The set includes a token bucket 420 for non-guaranteed traffic, denoted as TBng.
  • the token bucket 420 for non-guaranteed traffic may be used for managing resources which are dedicated to be used for forwarding the non- guaranteed data packets 200.
  • the token bucket 420 could be filled at a rate which corresponds to an estimated or desirable proportion of non-guaranteed data traffic to be served by the network node 120.
  • the token bucket 420 could also be used as an overflow for the per-flow token buckets 410.
  • excess tokens which do not fit into the per-flow token buckets 410 could be collected in the token bucket 420 and the token bucket 420 thus be utilized for managing the excess resources to be used for forwarding the non-guaranteed data traffic.
  • the set 400 includes an aggregate token bucket 430, denoted as TBag.
  • the token bucket 430 may for example be used for managing an overall resource contingent, which can for example be used as a basis for deciding whether the input filter 310 should drop a non-guaranteed data packet 200.
  • the transmission of the non-guaranteed data packets 200 may be based on the availability of sufficient excess resources, i.e., resources in excess of the minimum amount of resources to meet the guarantee for the guaranteed data packets 200.
  • sufficient excess resources i.e., resources in excess of the minimum amount of resources to meet the guarantee for the guaranteed data packets 200.
  • An extra space is added to the reserved resource contingent(s). That is to say, the maximum size of the reserved resource contingent(s) is set to be larger than actually required to meet the guarantee.
  • the increased size of the reserved resource contingent(s) is considered in the worst case calculation of the delay, thereby making sure that also with the increased size of the reserved resource contingent(s) the guarantee is still met.
  • the extra space of the resource contingent(s) can be provided by adding an extra space to one or more of the per-flow token buckets 410.
  • Fig. 5 shows a configuration of an exemplary token bucket.
  • the token bucket has a size B.
  • The size B is larger than a minimum size b_i required to meet the guarantee.
  • The size B corresponds to the minimum size b_i plus an extra space denoted by b_ng.
  • The minimum size b_i may for example depend on the expected burstiness and/or expected maximum size of the data packets 200. Due to the increased size, the token bucket can hold excess tokens, i.e., be filled to a level in excess of the minimum size b_i.
  • Alternatively, the size of one or more of the per-flow token buckets 410 may be set to the minimum size b_i required to meet the guarantee, and if these per-flow token buckets 410 are full, the overflowing tokens may be added to another token bucket, e.g., to the token bucket 420.
  • The token bucket 420 could otherwise be configured with a fill rate of zero, i.e., only be filled with the overflowing tokens. The token bucket 420 could thus be exclusively used for collecting excess tokens (from one or more other token buckets).
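  • The two layouts described above can be sketched as follows: (a) a per-flow token bucket enlarged from its minimum size b_i by the extra space b_ng, so that tokens above b_i count as excess, and (b) a per-flow bucket kept at size b_i whose overflowing tokens spill into the token bucket 420 (TBng). The class below is a minimal illustration under these assumptions, not the application's implementation.

```python
# Sketch of the two bucket layouts: (a) one enlarged bucket of size b_i + b_ng,
# where tokens above b_i are excess; (b) a bucket of size b_i whose overflow
# spills into a separate bucket TBng with fill rate zero.

class TokenBucket:
    def __init__(self, size_bits):
        self.size = size_bits
        self.tokens = 0.0

    def add(self, tokens):
        """Add tokens; return the amount that does not fit (overflow)."""
        accepted = min(tokens, self.size - self.tokens)
        self.tokens += accepted
        return tokens - accepted

b_i, b_ng = 30_000, 10_000

# (a) enlarged bucket: excess = tokens above b_i
tb = TokenBucket(b_i + b_ng)
tb.add(36_000)
print(max(0.0, tb.tokens - b_i))        # 6000.0 excess tokens

# (b) overflow into TBng (only collects what the per-flow bucket rejects)
tb_flow, tb_ng = TokenBucket(b_i), TokenBucket(b_ng)
tb_ng.add(tb_flow.add(36_000))
print(tb_flow.tokens, tb_ng.tokens)     # 30000.0 6000.0
```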
  • The decision whether a non-guaranteed data packet 200 can be forwarded on the basis of the available excess resources may be based on the amount of excess tokens.
  • the interleaved shaper 330 may decide to drop a non-guaranteed data packet 200 of size L unless the amount of excess tokens in one of the per-flow token buckets 410 exceeds the value g(PPV,L) for this data packet 200. If the amount of excess tokens in one of the per-flow token buckets 410 exceeds the value g(PPV,L), the interleaved shaper 330 may decide to forward the non-guaranteed data packet 200 using the excess tokens, i.e., taking an amount of tokens from the per-flow token bucket 410 which is given by g(PPV,L).
  • Alternatively, if the excess tokens are collected in the token bucket 420, the interleaved shaper 330 may decide to drop a non-guaranteed data packet 200 of size L unless the amount of excess tokens in this token bucket exceeds the value g(PPV, L) for this data packet 200. If the amount of excess tokens exceeds the value g(PPV, L), the interleaved shaper 330 may decide to forward the non-guaranteed data packet 200 using the excess tokens, i.e., taking an amount of tokens given by g(PPV, L) from this token bucket.
  • Fig. 6 shows a flowchart for illustrating a method of controlling forwarding of data packets in a data network.
  • The method of Fig. 6 may be utilized for implementing the transmission of the data packets on one of the redundant links, e.g., by one of the traffic sources 110 or by the node 120.
  • the method could be implemented by a scheduler of the node, such as the above-mentioned scheduler.
  • the steps of the method may be performed by one or more processors of the node.
  • the node may further comprise a memory in which program code for implementing the below described functionalities is stored.
  • the data packets to be transmitted by the node can each be a guaranteed data packet which is subject to a guarantee that the data packet is not dropped and not delayed by more than a certain delay limit or a non-guaranteed data packet which is not subject to the guarantee.
  • the transmission of the data packets on the redundant link is based on one or more resource contingents configured with a maximum amount of resources which is more than a minimum amount of resources required to meet the guarantee.
  • the maximum size is configured on the basis of a worst case calculation of a delay experienced by a data packet forwarded by the node.
  • the node assigns resources to the resource contingent and identifies resources in excess of the minimum amount as excess resources.
  • The maximum size of the resource contingent is configured based on a worst case calculation of a delay experienced by a data packet forwarded by the node.
  • the resource contingent may be managed on the basis of a token bucket, e.g., one of the above-mentioned token buckets 410, 420, 430.
  • the node may then assign resources to the resource contingent by adding tokens to the token bucket.
  • a size of the token bucket may then correspond to the maximum amount of resources of the resource contingent, e.g., as illustrated in the example of Fig. 5.
  • the token bucket may also be configured with a size corresponding to the minimum amount of resources required to meet the guarantee.
  • a further token bucket may be configured for the excess resources and the node may add tokens to the further token bucket only if the token bucket is full. In other words, the further token bucket may be used for receiving overflowing tokens from the token bucket.
  • the received data packets may be part of multiple flows.
  • a corresponding resource contingent with a maximum amount of resources which is more than a minimum amount of resources required to meet the guarantee may be configured for each of the flows.
  • the node may then assign resources to the corresponding resource contingent and identifies resources in excess of the minimum amount (in any of the different contingents) as excess resources.
  • the corresponding resource contingent for each of the flows may be managed on the basis of a corresponding token bucket, such as one of the above-mentioned per flow buckets 410.
  • the node may then assign resources to the corresponding resource contingent by adding tokens to the corresponding token bucket.
  • A size of the corresponding token bucket may correspond to the maximum amount of resources of the corresponding resource contingent, e.g., as illustrated in the example of Fig. 5.
  • the corresponding token bucket for each of the flows may also be configured with a size corresponding to the minimum amount of resources required to meet the guarantee.
  • a further token bucket may be configured for the excess resources and the node may add tokens to the further token bucket only if one of the corresponding token buckets of the resource contingents for the flows is full.
  • the further token bucket may be used for receiving overflowing tokens from the other token buckets.
  • The above-mentioned token bucket 420 could be used for receiving overflowing tokens from the above-mentioned per-flow token buckets 410.
  • At step 610, the node may get a data packet to be transmitted on the redundant link.
  • node may get the data packet, e.g., one of the above-mentioned data packets 200, from an interface with respect to another node of the data network or from a queue in which the data packet is temporarily stored.
  • At step 620, the node determines whether the data packet is a guaranteed data packet which is subject to a guarantee that the data packet is not dropped and not delayed by more than a certain delay limit or a non-guaranteed data packet which is not subject to the guarantee.
  • the node determines whether the data packet is a guaranteed data packet, as indicated by branch "Y". If this is not the case, the node determines that the data packet is a non-guaranteed data packet, as indicated by branch "N".
  • each of the data packets received by the node is marked with a value, e.g., the above-mentioned PPV.
  • the node may then determine depending on the value whether the data packet is a guaranteed data packet or a non-guaranteed data packet, e.g., by comparing the value to a threshold. For example, if the value is above the threshold the node may determine that the data packet is a guaranteed data packet.
  • the determination of step 620 may also depend on a size of the data packet. For example, the value marking the packet may be treated as a value per bit of the data packet, i.e., the value could be proportional to the size of the data packet.
  • If the data packet is a guaranteed data packet, the node may proceed to step 630 and serve the guaranteed data packet. This may involve forwarding the guaranteed data packet based on the resources from the resource contingent. In some cases, the node may wait with the forwarding until sufficient resources are available in the resource contingent. If there are multiple flows with corresponding resource contingents, the node may forward the data packet based on the resources in the resource contingent corresponding to the flow the data packet is part of. From step 630, the node may return to step 610 to proceed with getting a next data packet.
  • If the data packet is a non-guaranteed data packet, the node may proceed to step 640 and check if sufficient excess resources are available for forwarding the non-guaranteed data packet.
  • each of the data packets received by the node is marked with a value, e.g., the above-mentioned PPV.
  • The node may then determine depending on the value whether sufficient excess resources are present. For example, the above-mentioned function g(PPV, L) or f(PPV, L) may be applied to check whether sufficient excess tokens are available.
  • If sufficient excess resources are available, the node may proceed to step 650 and serve the non-guaranteed data packet by forwarding the non-guaranteed data packet based on the excess resources. Since in this case the non-guaranteed data packet can be forwarded without significant further delay, it is forwarded within the same delay limit as defined for the guaranteed data packets. Accordingly, even though the data packet is non-guaranteed, it may benefit from the guaranteed delay limit. From step 650, the node may return to step 610 to proceed with getting a next data packet.
  • If sufficient excess resources are not available, the node may proceed to step 660 and drop the non-guaranteed data packet. From step 660, the node may return to step 610 to proceed with getting a next data packet.
  • each of the data packets received by the node is marked with a value, e.g., the above-mentioned PPV.
  • the node may then also decide depending on the value whether to drop the data packet. For example, this decision could be part of input filtering of the received data packets, before storing the data packets in a queue, such as the queue 320.
  • the above-mentioned guaranteed transmission of data packets may be efficiently applied in scenarios where the data traffic includes critical sporadic traffic. This may be achieved by classifying the data traffic according to the criticality with respect to loss or excessive delay of data packets to obtain multiple different classes of the data traffic.
  • the data packets of each class can be treated either as guaranteed data packets or as non-guaranteed data packets.
  • the decision which treatment is applied for a certain class depends on the traffic load of the other classes, in particular on whether one or more of the other classes temporarily have zero traffic load due to the data traffic of the class being sporadic.
  • a class which temporarily has zero traffic load will also be referred to as "empty".
  • For a class of sporadic data traffic, the resource contingent configured on the basis of the worst case delay calculation would not be appropriately utilized in the time intervals when the traffic load of this class is zero. Accordingly, in the concepts as illustrated herein, dimensioning of the resource contingents for meeting the guarantee is performed for different scenarios, including scenarios where one or more of the classes of sporadic data traffic are empty and scenarios where these classes of sporadic data traffic are not empty. In the latter case, the data packets of one or more of the other classes can be treated as non-guaranteed data packets.
  • If a class of sporadic data traffic is empty, the data packets of at least one of these other classes may rather be treated as guaranteed data packets, by configuring a correspondingly dimensioned resource contingent for the at least one other class, at the cost of the resource contingent of the empty class of sporadic data traffic. Accordingly, it becomes possible to temporarily treat the data packets of more classes as guaranteed data packets, without requiring significant extra reservation of resources.
  • For example, the following classes of data traffic could be defined:
  • Class 1: Critical sporadic, traffic type 1,
  • Class 2: Critical sporadic, traffic type 2,
  • Class 3: Critical continuous (most critical among the continuous classes),
  • Class 4: Normally critical continuous,
  • Class 5: Less critical continuous.
  • the data packets of one or more of the other classes may be treated as guaranteed data packets.
  • different dimensioning groups are defined which consist of the classes where the data packets are to be treated as guaranteed data packets. Each dimensioning group applies to a specific scenario of some of the classes being empty. In the illustrated example, dimensioning groups could be defined as follows:
  • Dimensioning group 1 applies to the scenario where neither class 1 nor class 2 is empty.
  • Dimensioning group 2 applies to the scenario where class 2 is empty, but class 1 is not empty.
  • Dimensioning group 3 applies to the scenario where class 1 is empty, but class 2 is not empty.
  • Dimensioning group 4 applies to the scenario where both class 1 and class 2 are empty.
  • each dimensioning group includes the same number of classes, which means that substantially the same amount of resources needs to be reserved in order to meet the guarantee.
  • the number of classes in the dimensioning groups could also vary if the classes vary with respect to the required amount of resources which needs to be reserved to meet the guarantee, e.g., due to different expected traffic load.
  • a maximum required resource reservation may be determined by the largest required resource reservation of the individual dimensioning groups.
  • the dimensioning of the resource contingents for each dimensioning group is performed under the assumption that only the data packets of the classes in the dimensioning group need to be treated as guaranteed data packets.
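  • One consistent reading of this example can be expressed as a lookup from the set of currently empty sporadic classes to the dimensioning group, i.e., to the classes whose data packets are treated as guaranteed data packets. The memberships of dimensioning groups 2 to 4 shown below are inferred from the surrounding text rather than stated verbatim.

```python
# One consistent reading of the example (group 2-4 memberships inferred from
# the surrounding text): the set of empty sporadic classes selects the
# dimensioning group, i.e. the classes currently treated as guaranteed.

DIMENSIONING_GROUPS = {
    frozenset():       {1, 2, 3},   # group 1: neither class 1 nor class 2 empty
    frozenset({2}):    {1, 3, 4},   # group 2: class 2 empty
    frozenset({1}):    {2, 3, 4},   # group 3: class 1 empty
    frozenset({1, 2}): {3, 4, 5},   # group 4: both sporadic classes empty
}

def guaranteed_classes(empty_sporadic_classes):
    return DIMENSIONING_GROUPS[frozenset(empty_sporadic_classes)]

print(guaranteed_classes([]))      # {1, 2, 3}
print(guaranteed_classes([1, 2]))  # {3, 4, 5}
```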
  • a corresponding resource contingent required to meet the guarantee for the data packets of the class may be configured as explained in connection with Figs. 3 to 6.
  • class 1 and class 2 are classes of critical sporadic data traffic and are considered to be more critical than the other classes. Due to the data traffic of these classes being sporadic, it is rarely transmitted and has time intervals where the traffic load is zero.
  • The data packets of class 3, which is considered to have the most critical data traffic among classes 3, 4, and 5, are treated as guaranteed data packets in each of these scenarios.
  • Class 4 is considered to have data traffic which is less critical than the data traffic of class 3 but more critical than the data traffic of class 5.
  • The data packets of class 4 are treated as guaranteed data packets if at least one of class 1 and class 2 is empty.
  • For class 5, which is considered to have the least critical data traffic among classes 3, 4, and 5, the data packets are treated as guaranteed data packets only if both class 1 and class 2 are empty.
  • one or more corresponding token buckets may be assigned to each of the classes and applied by the interleaved shaper 330 for outputting the data packets.
  • These token buckets may correspond to the per-flow token buckets TBi as explained in connection with Fig. 4.
  • a set of one or more per-flow token buckets TBi could be assigned to each class.
  • the data packets of the other classes may be treated as non-guaranteed data packets and dropped if there are insufficient available excess resources.
  • the classes may be defined by identifying flows of critical sporadic data traffic and assigning these flows to one or more classes. Multiple classes of critical sporadic data traffic could be defined by distinguishing the flows depending on the underlying traffic types, e.g., utilized protocols, and/or depending on a service or category of service to which the data traffic relates. Further, flows of critical continuous data traffic may be identified and assigned to one or more classes. Multiple classes of critical continuous data traffic may be defined by distinguishing the flows based on the level of criticality to loss or excessive delay of data packets. In addition, also the classes of critical continuous data traffic could be distinguished in terms of the underlying traffic types, e.g., utilized protocols, and/or depending on a service or category of service to which the data traffic relates.
  • Fig. 7 illustrates an exemplary method which may be applied for defining the dimensioning groups.
  • The method of Fig. 7 may for example be implemented by the management node 150 of Fig. 1.
  • Alternatively, the definition of the dimensioning groups according to the method of Fig. 7 could be implemented by a node which is involved in the transmission of the data traffic, such as one of the traffic sources 110 or the node 120.
  • A first dimensioning group is defined.
  • this first dimensioning group would be defined to include all classes for which the data packets are to be treated as guaranteed data packets in all scenarios.
  • The first dimensioning group may include all classes of critical sporadic data traffic and optionally also one or more classes of critical continuous data traffic, for which loss of data packets or excessive delay of data packets is considered to be intolerable.
  • This first dimensioning group would correspond to dimensioning group 1, which includes the classes 1, 2, and 3.
  • one or more other dimensioning groups are defined which exclude at least one of the classes with critical sporadic traffic.
  • These dimensioning groups would correspond to dimensioning group 2, which excludes class 2, dimensioning group 3, which excludes class 1, and dimensioning group 4, which excludes both class 1 and class 2.
  • the resource contingent configurations may be determined for each of the dimensioning groups. This may involve determining a per-class resource contingent for each of the classes where the data packets are to be treated as guaranteed data packets.
  • the per-class resource contingents may be determined as explained above on the basis of worst case delay calculations.
  • At step 740, it is checked whether capacity requirements can be met for all dimensioning groups. That is to say, in some scenarios it may turn out that for one of the dimensioning groups there is not sufficient resource capacity to support the determined resource contingent configurations. If this is the case, the method may proceed to step 750, as indicated by branch "N".
  • At step 750, the dimensioning group where the resource contingent configurations have the highest resource demand is modified. This may for example involve removing a class from the dimensioning group. Typically, this would be the class whose data traffic is considered to be most tolerant with respect to loss or excessive delay of data packets. It is also possible to replace this class with some other class having a potentially lower resource demand. The method may then return to step 730 to again determine the resource contingent configurations for each of the dimensioning groups.
  • If the capacity requirements can be met for all dimensioning groups, the method may proceed to step 760, as indicated by branch "Y".
  • At step 760, the determined resource contingent configurations may be applied. If the resource contingent configurations are determined by a management node, such as the management node 150, the management node may indicate the resource contingent configurations to one or more nodes involved in the transmission of the data traffic, such as the traffic sources 110 or the node 120. These nodes may then store the resource contingent configurations for each of the different dimensioning groups and select the appropriate resource contingent configurations depending on which of the classes of sporadic critical data traffic is empty. If the resource contingent configurations are determined by a node which itself is involved in the transmission of the data traffic, the node may directly store the resource contingent configurations for the different dimensioning groups.
  • The method of Fig. 7 may be applied for efficiently predetermining resource contingent configurations. Depending on whether one or more of the classes of critical sporadic data traffic is empty, these predetermined resource contingent configurations may then be selected and applied by a node which transmits the data traffic.
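  • The iterative part of the method of Fig. 7 (determining configurations, checking capacity, and modifying the most demanding group) can be sketched as follows; the demand() function stands in for whatever worst case based calculation yields the capacity required by a group's resource contingent configuration, and the removal heuristic is an illustrative assumption.

```python
# Sketch of the loop of Fig. 7 (roughly steps 730-760). `demand(group)` is a
# caller-provided stand-in for the capacity required by the resource
# contingent configuration of a dimensioning group; the "drop the most
# tolerant class" heuristic below is an illustrative assumption.

def dimension_groups(groups, demand, capacity):
    """groups: list of sets of class indices. Returns (group, demand) pairs."""
    groups = [set(g) for g in groups]
    while True:
        demands = [demand(g) for g in groups]
        if max(demands) <= capacity:               # all groups fit -> apply configs
            return list(zip(groups, demands))
        worst = demands.index(max(demands))        # group with the highest demand
        groups[worst].discard(max(groups[worst]))  # drop its most tolerant class

def demand(group):                                 # toy model: class 3 needs 4
    return sum(4 if c == 3 else 2 for c in group)  # units, any other class needs 2

print(dimension_groups([{1, 2, 3}, {3, 4, 5}], demand, capacity=7))
# [({1, 2}, 4), ({3, 4}, 6)]
```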
  • Fig. 8 illustrates an exemplary method which may be applied by a node which transmits the data traffic.
  • The method of Fig. 8 may for example be applied by the traffic sources 110 or the node 120.
  • the method of Fig. 8 may be performed on the basis of predetermined resource contingent configurations as for example obtained by the method of Fig. 7.
  • At step 810, the node determines which classes are empty. This may be accomplished on the basis of a fill level of a per-class resource contingent configured for each of the classes.
  • a per-class resource contingent could be defined by one or more token buckets, and the class could be determined as being empty if the token bucket is filled beyond a threshold or full.
  • The threshold could for example be defined by the minimum size b_i of the token bucket as explained in connection with Fig. 5.
  • At step 820, one or more classes where the data packets are to be treated as guaranteed data packets are selected. This is accomplished depending on the classes found to be empty at step 810. Specifically, if one or more classes of sporadic critical data traffic are found to be empty at step 810, there is currently no need for treating data packets of these classes as guaranteed data packets. Accordingly, the data packets of one or more other classes may instead be treated as guaranteed data packets.
  • Each possible combination of empty classes may be mapped to a corresponding set of classes where the data packets are to be treated as guaranteed data packets. Such set of classes may for example correspond to one of the above-mentioned dimensioning groups.
  • a resource contingent configuration is selected for the classes where the data packets are to be treated as guaranteed data packets.
  • the resource contingent configuration may be selected which was determined for the dimensioning group corresponding to the set of classes as determined at step 820.
  • the data traffic may then be transmitted on the basis of the selected resource contingent configuration.
  • At step 840, it may be checked whether there was a change of the empty classes.
  • one or more further classes may have become empty, which may be detected as explained in connection with step 810 on the basis of the fill level of the per-class resource contingents.
  • one or more classes which were found to be empty may again have non-zero traffic load. The latter case may as well be detected on the basis of the per-class resource contingents.
  • For a class of which the data packets are treated as non-guaranteed data packets, a per-class resource contingent could be configured, however with a smaller maximum size than needed to enable treatment of the data packets of this class as guaranteed data packets.
  • Such a smaller resource contingent may for example be based on a token bucket for non-guaranteed data traffic as explained in connection with Fig. 4.
  • a class may be defined to include multiple flows.
  • the per-class resource contingent may include a corresponding per-flow token bucket for each of the multiple flows, such as the above-mentioned per-flow token buckets 410.
  • the class may then be determined to be empty if all of the per-flow token buckets of the class are full.
  • as long as the per-class resource contingent remains full, the class may be considered as still being empty. However, if the per-class resource contingent is no longer full, the class may be determined as being no longer empty.
  • if a change of the empty classes is found, the method may return to step 810 to reassess which classes are empty and reselect the classes for which the data packets are to be treated as guaranteed data packets, as indicated by branch "Y". If no change of the empty classes is found, the check of step 840 may be repeated, e.g., according to a periodic pattern or in response to a triggering event. For example, the check of step 840 could be triggered at each transmission of a data packet.
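A minimal sketch of this monitoring behaviour is given below, assuming hypothetical bucket objects exposing an is_full() check as the fill-level test and predetermined configurations keyed by the set of empty classes; it is only an illustration of steps 810 to 840, not the claimed implementation.

```python
class EmptyClassMonitor:
    """Track which sporadic critical classes are empty and reselect the resource
    contingent configuration whenever that set changes (steps 810-840, sketch)."""

    def __init__(self, per_class_buckets, resource_configs, apply_config):
        self.buckets = per_class_buckets        # class id -> per-class token bucket (assumed objects)
        self.configs = resource_configs         # frozenset of empty classes -> configuration
        self.apply_config = apply_config        # callback applying a configuration in the node
        self.empty = self._currently_empty()    # step 810
        self.apply_config(self.configs[self.empty])   # steps 820/830

    def _currently_empty(self):
        return frozenset(c for c, tb in self.buckets.items() if tb.is_full())

    def on_trigger(self):
        """Step 840: called per transmitted packet or periodically."""
        now_empty = self._currently_empty()
        if now_empty != self.empty:             # branch "Y": the empty classes changed
            self.empty = now_empty
            self.apply_config(self.configs[now_empty])
```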
  • Fig. 9 shows a flowchart for illustrating a method of handling data traffic in a data network.
  • the method of Fig. 9 may be utilized for implementing the illustrated concepts in a node of the data network which transmits the data traffic, such as one of the above-mentioned nodes 110, 120. If a processor-based implementation of the node is used, the steps of the method may be performed by one or more processors of the node. In such a case the node may further comprise a memory in which program code for implementing the below described functionalities is stored.
  • in the method of Fig. 9, it is assumed that data packets can be treated either as a guaranteed data packet which is subject to a guarantee that the data packet is not dropped and not delayed by more than a certain delay limit or as a non-guaranteed data packet which is not subject to the guarantee, e.g., by handling the data packet as explained in connection with Fig. 3 to 6.
  • the node classifies the data packets into multiple different classes. This may be accomplished according to criticality of delay of the data packet and/or loss of the data packet.
  • the classes may include a first set of one or more classes with data packets of sporadic traffic which is critical to delay and/or loss of data packets, such as the classes 1 and 2 of the above-mentioned example.
  • the classes may include a second set of one or more classes with data packets of continuous data traffic which is less critical to delay and/or loss of data packets than the sporadic traffic of the at least one first class, such as the classes 3, 4, and 5 of the above-mentioned example.
  • the node determines those of the classes which have zero traffic load. For example, for each of the classes a per-class resource contingent for transmission of the data packets of the class may be configured. Depending on the fill level of the respective per-class resource contingent, the node may then determine whether the class has zero traffic load. If the per-class resource contingent is based on at least one token bucket, the node may determine the class as having zero traffic load when the at least one token bucket is full.
  • a class may be defined to include multiple flows. In this case, the per-class resource contingent may include a corresponding per-flow token bucket for each of the multiple flows. The class may then be determined to have zero traffic load if all of the per-flow token buckets of the class are full.
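As an illustration of this per-flow check, a small helper is sketched below; the tokens and capacity attributes of the bucket objects are assumed names, not part of the described method.

```python
def class_has_zero_load(per_flow_buckets):
    """A class aggregating several flows is treated as having zero traffic load
    only when every per-flow token bucket of the class is full (sketch)."""
    return all(tb.tokens >= tb.capacity for tb in per_flow_buckets)
```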
  • the node decides, for at least one of the other classes, between transmitting the data packets of the class as guaranteed data packets and transmitting the data packets as non-guaranteed data packets which are not subject to the guarantee. This is accomplished depending on the classes which at step 920 were found to have zero traffic load.
  • the node may configure resource contingents for transmission of the data packets. For the data classes of which the data packets are to be treated as guaranteed data packets, this may be accomplished based on a worst case calculation of a delay experienced by a data packet.
  • the resource contingent may be configured with a maximum amount of resources which is at least equal to a minimum amount of resources required to meet the guarantee. In some scenarios, the resource contingent may be configured with a maximum amount of resources which is more than a minimum amount of resources required to meet the guarantee.
  • the non-guaranteed data packets may then be transmitted based on excess resources in excess of the minimum amount required to meet the guarantee, e.g., as explained in connection with Figs. 3 to 6.
  • the resource contingent for transmission of the guaranteed data packets may be configured on the basis of a predetermined resource contingent configuration.
  • a predetermined resource contingent configuration may have been obtained before by the method of Fig. 7.
  • Fig. 10 shows a block diagram for illustrating functionalities of a network node 1000 which operates according to the method of Fig. 9.
  • the network node 1000 may optionally be provided with a module 1010 configured to classify data packets into multiple different classes, such as explained in connection with step 910.
  • the network node 1000 may be provided with a module 1020 configured to determine one or more classes having zero traffic load, such as explained in connection with step 920.
  • the network node 1000 may be provided with a module 1030 configured to decide the treatment of the data packets of one or more other classes, such as explained in connection with step 930.
  • the network node 1000 may be provided with a module 1040 configured to configure resource contingents, such as explained in connection with step 940.
  • the network node 1000 may include further modules for implementing other functionalities, such as known functionalities of a switch, router, or gateway for a data network. Further, it is noted that the modules of the network node 1000 do not necessarily represent a hardware structure of the network node 1000, but may also correspond to functional elements, e.g., implemented by hardware, software, or a combination thereof.
  • the configuration of the resource contingents as described in connection with step 940 and module 1040 could also be performed by some other node, e.g., another node involved in the transmission of the data packets, or by a separate management node, such as the management node 150.
  • a node might then not need to perform the other illustrated steps, e.g., steps 910, 920, or 930, and would not need to be provided with the other illustrated modules, e.g., modules 1010, 1020, or 1030.
  • Fig. 11 shows a flowchart for illustrating a method of handling data traffic in a data network.
  • the method of Fig. 11 may be utilized for implementing the illustrated concepts in a node of the data network which is responsible for determining resource contingent configurations, such as the above-mentioned management node 150.
  • the method could also be implemented by a node which actually transmits the data traffic, such as one of the nodes 110, 120.
  • the steps of the method may be performed by one or more processors of the node.
  • the node may further comprise a memory in which program code for implementing the below described functionalities is stored.
  • in the method of Fig. 11 it is assumed that data packets can be treated either as a guaranteed data packet which is subject to a guarantee that the data packet is not dropped and not delayed by more than a certain delay limit or as a non-guaranteed data packet which is not subject to the guarantee, e.g., by handling the data packet as explained in connection with Fig. 3 to 6.
  • the node determines multiple different classes of data packets.
  • the classes may be defined according to criticality of delay of the data packet and/or loss of the data packet.
  • the classes may include a first set of one or more classes with data packets of sporadic traffic which is critical to delay and/or loss of data packets, such as the classes 1 and 2 of the above-mentioned example.
  • the classes may include a second set of one or more classes with data packets of continuous data traffic which is less critical to delay and/or loss of data packets than the sporadic traffic of the at least one first class, such as the classes 3, 4, and 5 of the above-mentioned example.
  • the node determines multiple different groups of the classes for which the data packets are to be treated as guaranteed data packets. If the classes include a first set of one or more classes with data packets of sporadic data traffic which is critical to delay and/or loss of data packets, and a second set of one or more classes with data packets of continuous data traffic which is less critical to delay and/or loss of data packets than the sporadic traffic of the at least one first class, the groups may include a first group including at least all the classes of the first set and at least one second group including less than all the classes of the first set and at least one class of the second set.
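One possible way to enumerate such dimensioning groups is sketched below; which continuous classes are promoted when sporadic classes are assumed empty is an assumption made purely for illustration.

```python
from itertools import combinations

def dimensioning_groups(sporadic_classes, continuous_classes):
    """Sketch of step 1120: build the first group (all sporadic critical classes)
    plus second groups in which some sporadic classes are assumed empty and an
    equal number of continuous classes is promoted to guaranteed treatment."""
    groups = [frozenset(sporadic_classes)]                          # first group
    for n_missing in range(1, len(sporadic_classes) + 1):
        for missing in combinations(sporadic_classes, n_missing):
            remaining = frozenset(sporadic_classes) - set(missing)
            promoted = frozenset(continuous_classes[:n_missing])    # illustrative pairing
            groups.append(remaining | promoted)                     # second group(s)
    return groups
```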
  • the node determines configurations of resource contingents for transmission of the data packets. This is accomplished for each of the groups determined at step 1120.
  • the resource contingent may be determined based on a worst case calculation of a delay experienced by a data packet.
  • the resource contingent may define a maximum amount of resources which is at least equal to a minimum amount of resources required to meet the guarantee. In some scenarios, the resource contingent may define a maximum amount of resources which is more than a minimum amount of resources required to meet the guarantee.
  • the non-guaranteed data packets may then be transmitted based on excess resources in excess of the minimum amount required to meet the guarantee, e.g., as explained in connection with Figs. 3 to 6.
  • the resource contingent configurations may then be applied for transmission of the data traffic, e.g., on the basis of the method of Fig. 9.
  • Fig. 12 shows a block diagram for illustrating functionalities of a network node 1200 which operates according to the method of Fig. 11.
  • the network node 1200 may optionally be provided with a module 1210 configured to determine multiple different classes of data packets, such as explained in connection with step 1110.
  • the network node 1200 may be provided with a module 1220 configured to determine groups of the classes, such as explained in connection with step 1120.
  • the network node 1200 may be provided with a module 1230 configured to determine resource contingent configurations, such as explained in connection with step 1130.
  • the network node 1200 may include further modules for implementing other functionalities, such as known functionalities of a management node, switch, router, or gateway for a data network. Further, it is noted that the modules of the network node 1200 do not necessarily represent a hardware structure of the network node 1200, but may also correspond to functional elements, e.g., implemented by hardware, software, or a combination thereof.
  • the illustrated concepts could also be implemented in a system including a first node operating according to the method of Fig. 9 and a second node operating according to the method of Fig. 11.
  • the second node could determine the configurations of resource contingents to be applied by the first node depending on the classes which are found to have zero traffic load.
  • Fig. 13 illustrates a processor-based implementation of a network node 1300 which may be used for implementing the above described concepts.
  • the network node 1300 may for example correspond to a switch, router or gateway of the data network, or to a management node.
  • the network node 1300 includes an input interface 1310 and an output interface 1320.
  • the input interface 1310 may be used for receiving data packets, e.g., from other nodes of the data network.
  • the output interface 1320 may be used for transmitting the data packets, e.g., to other nodes of the data network.
  • the input interface 1310 and output interface 1320 could also be replaced by one or more management or control interfaces with respect to other nodes of the data network.
  • the network node 1300 may include one or more processors 1350 coupled to the interfaces 1310, 1320 and a memory 1360 coupled to the processor(s) 1350.
  • the interfaces 1310, 1320, the processor(s) 1350, and the memory 1360 could be coupled by one or more internal bus systems of the network node 1300.
  • the memory 1360 may include a Read Only Memory (ROM), e.g., a flash ROM, a Random Access Memory (RAM), e.g., a Dynamic RAM (DRAM) or Static RAM (SRAM), a mass storage, e.g., a hard disk or solid state disk, or the like.
  • the memory 1360 may include software 1370, firmware 1380, and/or control parameters 1390.
  • the memory 1360 may include suitably configured program code to be executed by the processor(s) 1350 so as to implement the above-described functionalities of a network node, such as explained in connection with Fig. 9 or 11.
  • the structures as illustrated in Fig. 13 are merely schematic, and the network node 1300 may actually include further components which, for the sake of clarity, have not been illustrated, e.g., further interfaces or processors.
  • the memory 1360 may include further program code for implementing known functionalities of a network node, e.g., known functionalities of a switch, router, gateway, or management node.
  • a computer program may be provided for implementing functionalities of the network node 1300, e.g., in the form of a physical medium storing the program code and/or other data to be stored in the memory 1360 or by making the program code available for download or by streaming.
  • the concepts as described above may be used for efficiently transmitting data traffic in scenarios where the data traffic includes sporadic critical data traffic and other critical data traffic.
  • resources can be utilized in an efficient manner without adversely affecting the sporadic critical data traffic.
  • the examples and embodiments as explained above are merely illustrative and susceptible to various modifications.
  • the illustrated concepts may be applied in connection with various kinds of data networks, without limitation to the above-mentioned example of a transport network part of a wireless communication network.
  • the illustrated concepts may be applied in various kinds of nodes, including, without limitation, the above-mentioned examples of a switch, router, or gateway. Moreover, it is to be understood that the above concepts may be implemented by using correspondingly designed software to be executed by one or more processors of an existing device, or by using dedicated device hardware. Further, it should be noted that the illustrated nodes or devices may each be implemented as a single device or as a system of multiple interacting devices.

Abstract

A node (110, 120) of a data network classifies data packets into multiple different classes. Further, the node (110, 120) determines those of the classes which have zero traffic load. Depending on the classes which have zero traffic load, the node (110, 120) decides for at least one of the other classes between transmitting the data packets of the class as guaranteed data packets which are subject to a guarantee of not being dropped and not delayed by more than a certain delay limit and transmitting the data packets as non-guaranteed data packets which are not subject to the guarantee.

Description

Efficient handling of loss and/or delay sensitive sporadic data traffic
Technical Field
The present invention relates to methods for handling data traffic in a data network and to corresponding devices and systems.
Background
In communication networks, e.g., based on the Internet Protocol (IP) and the Transmission Control Protocol (TCP), various kinds of data traffic are transferred. Such different kinds of data traffic may differ with respect to their sensitivity concerning delay which occurs while data packets of the data traffic are forwarded through the communication network, e.g., in terms of a per-hop delay or an end-to-end delay. For example, for data packets of a file download the delay of the data packets is typically not very relevant. However, in the case of realtime data transfers, such as multimedia streaming, excessive delay of a data packet may adversely impact the user experience because typically data packets need to be available at the receiver at a certain time, and later received data packets are useless. Further, certain types of traffic may also be loss-sensitive, so that it may be desirable to control the forwarding of the data packets in such a way that dropping of data packets is avoided as far as possible.
In this respect, it is known to accomplish forwarding of delay sensitive traffic using a scheduling mechanism which provides guarantees with respect to packet losses and delays, as for example suggested in "Urgency-Based Scheduler for Time-Sensitive Switched Ethernet Networks" by J. Specht and Soheil Samii, 28th Euromicro Conference on Real-Time Systems (ECRTS), Toulouse, France, July 5-8, 2016. However, this approach involves reserving a resource contingent for all kinds of traffic to which the guarantee shall apply, even if such traffic may occur only sporadically. This may result in inefficient utilization of resources.
Accordingly, there is a need for techniques which allow for efficiently addressing situations in which data traffic which is sensitive to delay or loss of data packets occurs only sporadically.
Summary
According to an embodiment of the invention, a method of handling data traffic in a data network is provided. According to the method, a node of the data network classifies data packets into multiple different classes. Further, the node determines those of the classes which have zero traffic load. Depending on the classes which have zero traffic load, the node decides for at least one of the other classes between transmitting the data packets of the class as guaranteed data packets which are subject to a guarantee of not being dropped and not delayed by more than a certain delay limit and transmitting the data packets as non-guaranteed data packets which are not subject to the guarantee. According to a further embodiment of the invention, a method of handling data traffic in a data network is provided. According to the method, a node of the data network determines multiple different classes of data packets. Further, the node determines multiple different groups of the classes for which the data packets are to be treated as guaranteed data packets which are subject to a guarantee of not being dropped and not delayed by more than a certain delay limit. For each of the groups, the node determines, based on a worst case calculation of a delay experienced by a data packet, a configuration of a resource contingent for transmission of the guaranteed data packets. The configuration defines a maximum amount of resources which corresponds to at least a minimum amount of resources required to meet the guarantee.
According to a further embodiment of the invention, a node for a data network is provided. The node is configured to classify data packets into multiple different classes. Further, the node is configured to determine those of the classes which have zero traffic load. Further, the node is configured to, depending on the classes which have zero traffic load, decide for at least one of the other classes between transmitting the data packets of the class as guaranteed data packets which are subject to a guarantee of not being dropped and not delayed by more than a certain delay limit and transmitting the data packets as non-guaranteed data packets which are not subject to the guarantee. According to a further embodiment of the invention, a node for a data network is provided. The node is configured to determine multiple different classes of data packets. Further, the node is configured to determine multiple different groups of the classes for which the data packets are to be treated as guaranteed data packets which are subject to a guarantee of not being dropped and not delayed by more than a certain delay limit. For each of the groups, the node is configured to determine, based on a worst case calculation of a delay experienced by a data packet, a configuration of a resource contingent for transmission of the guaranteed data packets. The configuration defines a maximum amount of resources which corresponds to at least a minimum amount of resources required to meet the guarantee.
According to a further embodiment of the invention, a computer program or computer program product is provided, e.g., in the form of a non-transitory storage medium, which comprises program code to be executed by at least one processor of a node for a data network. Execution of the program code causes the node to classify data packets into multiple different classes. Further, execution of the program code causes the node to determine those of the classes which have zero traffic load. Further, execution of the program code causes the node to, depending on the classes which have zero traffic load, decide for at least one of the other classes between transmitting the data packets of the class as guaranteed data packets which are subject to a guarantee of not being dropped and not delayed by more than a certain delay limit and transmitting the data packets as non-guaranteed data packets which are not subject to the guarantee.
According to a further embodiment of the invention, a computer program or computer program product is provided, e.g., in the form of a non-transitory storage medium, which comprises program code to be executed by at least one processor of a node for a data network. Execution of the program code causes the node to determine multiple different classes of data packets. Further, execution of the program code causes the node to determine multiple different groups of the classes for which the data packets are to be treated as guaranteed data packets which are subject to a guarantee of not being dropped and not delayed by more than a certain delay limit. For each of the groups, execution of the program code causes the node to determine, based on a worst case calculation of a delay experienced by a data packet, a configuration of a resource contingent for transmission of the guaranteed data packets. The configuration defines a maximum amount of resources which corresponds to at least a minimum amount of resources required to meet the guarantee. Details of such embodiments and further embodiments will be apparent from the following detailed description of embodiments.
Brief Description of the Drawings
Fig. 1 schematically illustrates a scenario in which data traffic is handled according to an embodiment of the invention.
Fig. 2 schematically illustrates an example of a data packet as utilized in an embodiment of the invention.
Fig. 3 schematically illustrates an architecture of a scheduler according to an embodiment of the invention.
Fig. 4 schematically illustrates a set of token buckets as utilized according to an embodiment of the invention.
Fig. 5 shows an example of configuring a token bucket according to an embodiment of the invention.
Fig. 6 shows a flowchart for schematically illustrating a method of controlling forwarding of data packets according to an embodiment of the invention.
Fig. 7 shows a flowchart for schematically illustrating a method of controlling treatment of traffic classes according to an embodiment of the invention.
Fig. 8 shows a method of determining resource contingents according to an embodiment of the invention.
Fig. 9 shows a method of handling data traffic according to an embodiment of the invention.
Fig. 10 shows a block diagram for illustrating functionalities of a network node according to an embodiment of the invention.
Fig. 11 shows a method of handling data traffic according to an embodiment of the invention.
Fig. 12 shows a block diagram for illustrating functionalities of a network node according to an embodiment of the invention.
Fig. 13 schematically illustrates structures of a network node according to an embodiment of the invention.
Detailed Description of Embodiments
In the following, concepts according to embodiments of the invention will be explained in more detail by referring to the accompanying drawings. The illustrated concepts relate to handling data traffic in a data network. Specifically, the concepts relate to controlling forwarding of data packets of the data traffic by a node of such a data network. The data network may for example be part of a communication network. One example of such a communication network is a wireless communication network, e.g., based on GSM (Global System for Mobile Communication), UMTS (Universal Mobile Telecommunications System), or LTE (Long Term Evolution) technologies specified by 3GPP (3rd Generation Partnership Project). For example, the data network may be a transport network part of such a wireless communication network. However, the concepts could also be applied in other types of communication systems or data networks. The data packets may be IP data packets, optionally in connection with further protocols, e.g., an Ethernet framing protocol, TCP, UDP (User Datagram Protocol), or a tunneling protocol such as GTP (General Packet Radio Service Tunneling Protocol).
In the concepts as illustrated in the following, network nodes, e.g., one or more switches, routers, or gateways, transmit data traffic. An example of a corresponding scenario is illustrated in Fig. 1, where data traffic provided by multiple traffic sources 110 is forwarded by a network node 120 to multiple traffic destinations 130. Further, Fig. 1 shows a management node 150 which may be responsible for configuring the nodes responsible for transmission of the data traffic, such as the traffic sources 110 or the node 120. In particular, such a management node 150 may be responsible for dimensioning the capacity of links used for transmission of the data traffic, e.g., by configuring resource contingents. In the scenario of Fig. 1, each pair of a certain traffic source 110 and a certain traffic destination 130 may define a flow. For example, in the case of data traffic based on IP data packets, the packets of the same flow may carry the same source address and the same destination address. The data traffic is assumed to include data traffic which is sporadic, i.e., has a variable traffic load with time intervals of more than a certain minimum duration where the traffic load is zero. The minimum duration may be in the range of 1 ms to 1 s, although this value may differ between different types of sporadic traffic. In some cases the sporadic data traffic can be very sparse. An extreme case would be data traffic which occurs only once in a lifetime of a certain device, e.g., data traffic related to deployment of an airbag. In less extreme cases, the sporadic data traffic could have the form of short bursts. Such bursts may have various lengths and occur at various intervals, e.g., bursts of some seconds to minutes which typically occur about once in a week to once in a day, bursts of some seconds which occur about once in an hour to once in a day, or bursts of some milliseconds which occur about once in a second to once in a minute. Data traffic which is not sporadic will in the following also be referred to as continuous data traffic. The sporadic data traffic may for example be distinguished on flow level from other data traffic. That is to say, by considering the transmission timing of the data packets of a certain flow, a flow can be categorized as either carrying sporadic data traffic or as carrying continuous data traffic. However, it is noted that the sporadic traffic could also be defined on other levels, such as on the level of traffic type, e.g., by considering one or more underlying protocols of the data traffic, or on the level of service types, e.g., by considering the category of a service which generates the data traffic or even the individual service which generates the data traffic. Further, it is of course also possible to use any combination of the above-mentioned levels to define the sporadic data traffic. For example, the sporadic data traffic could be a flow associated with a certain traffic type and/or service type.
Further, the data traffic may also be subject to various requirements concerning the loss of data packets or delays of data packets. For some data traffic, loss of data packets or excessive delay of data packets may be critical. For example, this could be the case if the data traffic conveys realtime control information for a robotic system. For other data traffic, loss of data packets or excessive delay of data packets may be less critical, e.g., for data traffic related to audio or video streaming. Further, in some cases there may be data traffic which is sporadic and sensitive to loss of data packets or excessive delay of data packets, e.g., data traffic carrying alarm messages by a robotic system. In the examples as further explained below, it is assumed that the data traffic can be classified in terms of its criticality of loss or excessive delay of data packets.
In order to accommodate the above-mentioned requirements of the data traffic, the data traffic may be subject to different treatment. In particular, data packets of the data traffic may be treated as either guaranteed data packets or non-guaranteed data packets. The guaranteed data packets are subject to a guarantee that the data packet forwarded by the node is not delayed by more than a certain delay limit and not dropped. This makes it possible to meet the requirements of critical data traffic. Here, it is to be understood that in the case of multi-hop transmission, the guarantee would need to be met on all links included in a multi-hop path, e.g., on a link from the traffic source 110 to the node 120 and on a link from the node 120 to the traffic destination 130. For the non-guaranteed data packets this guarantee does not apply, i.e., the non-guaranteed data packets may be dropped. Accordingly, the non-guaranteed data packets may be transmitted in a resource efficient manner.
In some scenarios the non-guaranteed data packets may nonetheless benefit from the delay limit as defined for the guaranteed data packets. That is to say, if a non-guaranteed data packet is not dropped, it could nonetheless be transmitted within the same delay limit as guaranteed for the guaranteed data packets.
The transmission of the data traffic by the network nodes may be managed by a respective scheduler of the node. By way of example, Fig. 1 illustrates a scheduler 125 of the node 120. The scheduler 125 manages forwarding of the data traffic by the node 120. However, it is noted that also other nodes which transmit the data traffic, e.g., the traffic sources 110 or intermediate nodes between the traffic sources 110 and the node 120 or intermediate nodes between the node 120 and the traffic destinations 130 may be provided with such a scheduler.
In each case, the scheduler operates on the basis of a scheduling algorithm which makes it possible to meet the guarantee for the guaranteed data packets. For this purpose, the scheduling algorithm reserves one or more resource contingents which are filled with sufficient resources to meet the guarantee. By way of example, the resource contingent(s) may be managed on the basis of one or more token buckets, and a filling rate of the token bucket(s) and a size of the token bucket(s) may be set in such a way that the guarantee is met. This is accomplished on the basis of a worst case calculation for the delay experienced by a transmitted data packet.
The worst case calculation may be based on known, estimated, or measured characteristics of the data traffic transmitted by the node, e.g., data rates, maximum size, or burstiness of the data traffic. By way of example, in the case of higher burstiness and/or higher data rate of a certain flow, a larger resource contingent may be needed to meet the guarantee. On the other hand, the maximum size of the resource contingent should be limited because the worst case delay is found to be minimal when the maximum size of the resource contingent is equal to the minimum amount of resources required to meet the guarantee and increases with the maximum size of the resource contingent. This can be attributed to an overall limitation of the available resources. For example, if the amount of reserved resources increases, this means that transmission of more data packets over the same bottleneck (e.g., an interface with limited capacity) is admitted, which typically results in increased worst case delay for transmission of data packets over this bottleneck.
The worst case delay calculation may be based on various models or calculation methods. An example of how the worst case delay can be calculated in a scenario in which multiple token buckets are used for providing a delay guarantee for multiple flows is given in "Urgency-Based Scheduler for Time-Sensitive Switched Ethernet Networks" by J. Specht and Soheil Samii, 28th Euromicro Conference on Real-Time Systems (ECRTS), Toulouse, France, July 5-8, 2016.
In the illustrated concepts, the maximum size of the resource contingent(s) may be set to be larger than a minimum size needed to meet the guarantee. This increased size of the resource contingent(s) is considered in the worst-case delay calculation. In this way, the resource contingent may include resources in excess of the minimum amount required to meet the guarantee, in the following referred to as excess resources. Further, excess resources may also be present due to the characteristics of the data traffic which is subject to the guarantee. In particular, if the data traffic is sporadic, it will typically have peak traffic loads which are much higher than the average traffic load. Since the worst case calculation typically needs to consider situations where the peak traffic load is present, the dimensioning of the resource contingent based on the worst case calculation will have the effect that excess resources are present under average traffic load.
The scheduler may use any excess resources for forwarding non-guaranteed data packets. In particular, if sufficient excess resources are present, the scheduler may use these excess resources for forwarding one or more non-guaranteed data packets. If no sufficient excess resources are present, the scheduler may decide to drop the non-guaranteed data packet. Accordingly, resources not used for forwarding the guaranteed data packets can be efficiently used for forwarding the non-guaranteed data packets.
It is noted that the elements of Fig. 1 are merely illustrative and that the data network may include additional nodes and that various connection topologies are possible in such data communication network. For example, such additional nodes could be intermediate nodes between the traffic sources 110 and the network node 120 or intermediate nodes between the network node 120 and the traffic destinations 130. Such intermediate nodes could forward the data traffic in a similar manner as explained for the network node 120.
The distinction between the guaranteed data packets and the non-guaranteed data packets may be based on a marking of the data packets with a value, in the following referred to as per packet value (PPV). An example of how this may be accomplished is illustrated in Fig. 2. Fig. 2 shows an exemplary data packet 200. The data packet may be an IP data packet. Further, the data packet 200 may be based on various kinds of additional protocols, e.g., a transport protocol such as TCP or UDP, or a tunnelling protocol such as GTP. As illustrated, the data packet 200 includes a header section 210 and a payload section 220. The payload section 220 may for example include user data. If a tunnelling protocol is utilized, the payload section 220 may also include one or more encapsulated data packets. The header section 210 typically includes various kinds of information which is needed for propagation of the data packet 200 through the data communication system. For example, such information may include a destination address and/or source address.
As further illustrated, the header section 210 includes a label 212 indicating the PPV. The label 212 may include a scalar value, e.g., in the range of 0 to 255, to indicate the PPV. The label 212 may for example be included in a corresponding information field in the header section 210. For this purpose, a corresponding information field may be defined for the above-mentioned protocols or one or more existing information fields may be reused. As further illustrated, the header section may also include a delay indicator 214. The delay indicator may for example be used for determining a delay class of the data packet. Different delay limits may be defined depending on the delay indicator. The PPV may represent a level of importance of the data packet, e.g., in terms of a network-level gain when the data packet is delivered. Accordingly, nodes of the data network, including the network node 120, should aim at utilizing their available resources to maximize the total PPV of the successfully transmitted data packets. The PPV may be considered in relation to the number of bits in the data packet, i.e., the value included in the label 212 may be treated as a value per bit, which enables direct comparison of data packets of different sizes. Accordingly, for the same marking in the label 212, a larger data packet would have a higher PPV than a smaller data packet. On the other hand, transmission of the larger data packet requires more resources than transmission of the smaller data packet. The PPV may be set by an operator of the data network according to various criteria, e.g., by assigning a higher PPV to data traffic of premium users or emergency traffic. Accordingly, the PPV may be used to express the importance of data packets relative to each other, which in turn may be utilized by the nodes 110, 120 (or other nodes of the data network) for controlling how to utilize their available resources for transmission of the data packets, e.g., by using resources for forwarding a data packet with high PPV at the expense of a data packet with low PPV, which may then be delayed or even dropped.
For utilizing the PPV for distinguishing between guaranteed data packets and non-guaranteed data packets, a threshold may be defined. Based on a comparison of the PPV to the threshold, the network node 120 can decide whether the data packet is a guaranteed data packet or a non-guaranteed data packet. In particular, if for a given data packet the PPV exceeds the threshold, the network node 120 may treat the data packet as a guaranteed data packet. Otherwise, the network node 120 may treat the data packet as a non-guaranteed data packet.
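A minimal sketch of this threshold comparison is given below, assuming hypothetical packet attributes ppv and delay_class and example delay limits; the actual field layout and limits would follow the label 212 and the delay indicator 214 described above.

```python
# Illustrative per-hop delay limits per delay class carried in the delay indicator 214.
DELAY_LIMIT_MS = {0: 1.0, 1: 5.0, 2: 20.0}

def classify_packet(packet, ppv_threshold):
    """Return (guaranteed?, delay limit in ms) for a received packet (sketch)."""
    guaranteed = packet.ppv > ppv_threshold          # comparison of the PPV against the threshold
    delay_limit = DELAY_LIMIT_MS.get(packet.delay_class)
    return guaranteed, delay_limit
```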
The delay indicator 214 may for example indicate a maximum delay the data packet may experience when being forwarded, e.g., in terms of a per-hop delay or in terms of an end- to-end delay. This information may then be applied by the network node 120 for setting the above-mentioned delay limit of the guarantee.
The scheduler may thus operate by providing the guarantee with respect to loss and delay and using the PPV for deciding whether a certain data packet is to be treated as a guaranteed data packet or as a non-guaranteed data packet. The guaranteed data packets, e.g., the data packets for which the PPV is above the threshold, may then be subjected to traffic shaping. The non-guaranteed packets may be filtered, either before being stored in a queue or when being output from the queue.
Fig. 3 schematically illustrates an exemplary architecture which may be used for implementing the above-described loss and delay guaranteed transmission of the data packets. For example, the architecture of Fig. 3 may be used to implement the above- mentioned scheduler.
As illustrated, the architecture of Fig. 3 includes an input filter 310 in which the data packets 200 received by the network node 120 are subjected to input filtering. Further, the architecture includes a queue 320 for temporarily storing the data packets 200 passed through the input filter 310, and an interleaved shaper 330 for controlling forwarding of the data packets 200 from the queue 320. Further, the architecture provides a controller 340 which may be used for tuning operation of the input filter 310 and/or of the interleaved shaper 330. For example, the controller may collect statistics from the input filter 310, e.g., an average drop rate, from the queue 320, e.g., a queue length and/or an average queueing time, and/or from the interleaved shaper 330, e.g., an average drop rate or an average delay, and tune the operation of the input filter 310 and/or of the interleaved shaper 330 depending on the collected statistics.
The input filtering by the input filter 310 involves determining for each of the data packets 200 whether the data packet 200 is a guaranteed data packet or a non-guaranteed data packet. The input filter 310 passes the guaranteed data packets to a queue 320. In the case of the non-guaranteed data packets, the input filter 310 can decide between dropping the non-guaranteed data packet 200 or passing the non-guaranteed data packet 200 to the queue 320. This may be accomplished depending on the PPV. Further, the input filter 310 may also decide depending on a resource contingent managed on the basis of a set 312 of one or more token buckets (TB) whether to drop the non-guaranteed data packet 200. This may also consider the size of the non-guaranteed data packet 200. For example, if there are sufficient resources for further processing a non-guaranteed data packet 200 with size L, i.e., if there are sufficient tokens in a token bucket for the size L, the input filter 310 may pass the non-guaranteed data packet 200 to the queue 320. A function f(PPV, L) may be applied for determining the number of tokens required to let the non-guaranteed data packet 200 pass. Here, the number of required tokens will typically increase with increasing size L of the packet, but decrease with increasing PPV. Accordingly, non-guaranteed data packets 200 with higher PPV have a higher likelihood of being passed to the queue 320. The controller 340 may tune parameters of the function f(PPV, L) depending on the statistics provided by the input filter 310, the queue 320, and/or the interleaved shaper 330.
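The admission decision of the input filter 310 could be sketched as follows, assuming a single token bucket, a hypothetical packet object with ppv and size attributes, and a caller-supplied cost function f(PPV, L); this is an illustration, not the exact filter logic.

```python
def input_filter(packet, bucket, queue, ppv_threshold, f):
    """Sketch of the input filter 310: guaranteed packets always enter the queue;
    a non-guaranteed packet is admitted only if the bucket holds f(ppv, size) tokens."""
    if packet.ppv > ppv_threshold:            # guaranteed packet: pass to the queue
        queue.append(packet)
        return True
    required = f(packet.ppv, packet.size)     # fewer tokens needed for higher PPV
    if bucket.tokens >= required:
        bucket.tokens -= required
        queue.append(packet)
        return True
    return False                              # drop the non-guaranteed packet
```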
The interleaved shaper 330 controls forwarding of the data packets 200 from the queue 320. This involves taking the first data packet 200 from the queue 320 and again determining whether the data packet 200 is a guaranteed data packet or a non-guaranteed data packet. If the data packet 200 is a guaranteed data packet, it is forwarded by the interleaved shaper 330, without delaying it in excess of the delay limit. If the data packet 200 is a non-guaranteed data packet, the interleaved shaper 330 may decide between dropping the non-guaranteed data packet 200 or forwarding the non-guaranteed data packet 200.
The interleaved shaper 330 may utilize a resource contingent managed on the basis of a set 332 of one or more token buckets (TB) to decide whether to drop a non-guaranteed data packet 200 and when to forward a guaranteed data packet 200. This may also consider the size of the data packet 200. The interleaved shaper 330 forwards a guaranteed data packet 200 when there are sufficient tokens in a corresponding token bucket. The interleaved shaper 330 forwards a non-guaranteed data packet 200 only if there are sufficient excess resources. This may be the case if a token bucket is filled beyond a minimum amount of tokens which is required to meet the delay guarantee. For example, if there are sufficient excess resources for forwarding a non-guaranteed data packet 200 with size L, i.e., if there are sufficient excess tokens in a token bucket for the size L, the interleaved shaper 330 may forward the non-guaranteed data packet 200, using the excess tokens. A function g(PPV, L) may be applied for determining the number of tokens required for forwarding a guaranteed or non-guaranteed data packet 200. Here, the number of required tokens will typically increase with increasing size L of the packet, but decrease with increasing PPV. Accordingly, non-guaranteed data packets 200 with higher PPV have a higher likelihood of being forwarded by the interleaved shaper 330. The controller 340 may tune parameters of the function g(PPV, L) depending on the statistics provided by the input filter 310, the queue 320, and/or the interleaved shaper 330.
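A sketch of this shaping decision for one head-of-queue packet is given below; the flow_bucket object, the min_size argument standing for the minimum size bi, and the cost function g are assumed names used only for illustration.

```python
def shape_head_of_queue(packet, flow_bucket, ppv_threshold, g, min_size):
    """Sketch of the interleaved shaper 330: a guaranteed packet waits until its
    bucket holds enough tokens, while a non-guaranteed packet may only consume
    tokens above the minimum size b_i (the excess resources)."""
    cost = g(packet.ppv, packet.size)
    if packet.ppv > ppv_threshold:                   # guaranteed packet
        if flow_bucket.tokens >= cost:
            flow_bucket.tokens -= cost
            return "forward"
        return "wait"                                # delayed within the limit, never dropped
    excess = flow_bucket.tokens - min_size           # tokens beyond b_i are excess
    if excess >= cost:
        flow_bucket.tokens -= cost
        return "forward"
    return "drop"                                    # no sufficient excess resources
```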
Fig. 4 shows an example of a set 400 of token buckets 410, 420, 430 which could be used in the input filter 310 and/or in the interleaved shaper. As illustrated, the set 400 includes per flow buckets 410, denoted as TBi, where i is an index of the corresponding flow. The per flow token buckets 410 include one token bucket 410 per flow of the data traffic transmitted by the network node. Further, the set includes a token bucket 420 for non-guaranteed traffic, denoted as TBng. The token bucket 420 for non-guaranteed traffic may be used for managing resources which are dedicated to be used for forwarding the non-guaranteed data packets 200. For example, the token bucket 420 could be filled at a rate which corresponds to an estimated or desirable proportion of non-guaranteed data traffic to be served by the network node 120. As further detailed below, the token bucket 420 could also be used as an overflow for the per-flow token buckets 410. In this case, excess tokens which do not fit into the per-flow token buckets 410 could be collected in the token bucket 420 and the token bucket 420 thus be utilized for managing the excess resources to be used for forwarding the non-guaranteed data traffic. As further illustrated, the set 400 includes an aggregate token bucket 430, denoted as TBag. The token bucket 430 may for example be used for managing an overall resource contingent, which can for example be used as a basis for deciding whether the input filter 310 should drop a non-guaranteed data packet 200.
As mentioned above, the transmission of the non-guaranteed data packets 200 may be based on the availability of sufficient excess resources, i.e., resources in excess of the minimum amount of resources to meet the guarantee for the guaranteed data packets 200. For this purpose, an extra space is added to the reserved resource contingent(s). In other words, the maximum size of the reserved resource contingent(s) is set to be larger than actually required to meet the guarantee. The increased size of the reserved resource contingent(s) is considered in the worst case calculation of the delay, thereby making sure that also with the increased size of the reserved resource contingent(s) the guarantee is still met.
According to one option, the extra space of the resource contingent(s) can be provided by adding an extra space to one or more of the per-flow token buckets 410. This is illustrated by Fig. 5, which shows a configuration of an exemplary token bucket. As illustrated, the token bucket has a size B. The size B is larger than a minimum size bi required to meet the guarantee. In particular, the size B corresponds to the minimum size bi plus an extra space denoted by bng. The minimum size bi may for example depend on the expected burstiness and/or expected maximum size of the data packets 200. Due to the increased size, the token bucket can hold excess tokens, i.e., be filled to a level in excess of the minimum size bi.
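A token bucket dimensioned in this way could, for illustration, be modelled as follows; the attribute names and the use of seconds for the fill interval are assumptions.

```python
class FlowTokenBucket:
    """Per-flow token bucket of Fig. 5 (sketch): total size B = b_i + b_ng, where
    b_i is the minimum size needed for the guarantee and b_ng is the extra space
    whose tokens count as excess resources for non-guaranteed packets."""

    def __init__(self, rate, b_i, b_ng):
        self.rate = rate                  # token fill rate, e.g. bytes per second
        self.b_i = b_i                    # minimum size required to meet the guarantee
        self.capacity = b_i + b_ng        # B = b_i + b_ng
        self.tokens = self.capacity

    def fill(self, elapsed_seconds):
        self.tokens = min(self.capacity, self.tokens + self.rate * elapsed_seconds)

    def excess_tokens(self):
        return max(0.0, self.tokens - self.b_i)

    def is_full(self):
        return self.tokens >= self.capacity
```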
According to a further option, the size of one or more of the per-flow token buckets 410 may be set to the minimum size bi required to meet the guarantee, and if these per-flow token buckets 410 are full, the overflowing tokens may be added to another token bucket, e.g., to the token bucket 420. The token bucket 420 could otherwise be configured with a fill rate of zero, i.e., only be filled with the overflowing tokens. The token bucket 420 could thus be exclusively used for collecting excess tokens (from one or more other token buckets).
The decision whether a non-guaranteed data packet 200 can be forwarded on the basis of the available excess resources may be based on the amount of excess tokens.
For example, when collecting the excess tokens in the extra space of the per-flow token buckets 410, the interleaved shaper 330 may decide to drop a non-guaranteed data packet 200 of size L unless the amount of excess tokens in one of the per-flow token buckets 410 exceeds the value g(PPV,L) for this data packet 200. If the amount of excess tokens in one of the per-flow token buckets 410 exceeds the value g(PPV,L), the interleaved shaper 330 may decide to forward the non-guaranteed data packet 200 using the excess tokens, i.e., taking an amount of tokens from the per-flow token bucket 410 which is given by g(PPV,L).
When collecting the excess tokens in a dedicated token bucket, e.g., in the token bucket 420, the interleaved shaper 330 may decide to drop a non-guaranteed data packet 200 of size L unless the amount of excess tokens in this token bucket exceeds the value g(PPV,L) for this data packet 200. If the amount of excess tokens in this token bucket exceeds the value g(PPV,L), the interleaved shaper 330 may decide to forward the non-guaranteed data packet 200 using the excess tokens, i.e., taking an amount of tokens from this token bucket which is given by g(PPV,L).
Fig. 6 shows a flowchart for illustrating a method of controlling forwarding of data packets in a data network. The method of Fig. 6 may be utilized for implementing the transmission of the data packets, e.g., by one of the traffic sources 110 or by the node 120. For example, the method could be implemented by a scheduler of the node, such as the above-mentioned scheduler. If a processor-based implementation of the node is used, the steps of the method may be performed by one or more processors of the node. In such a case the node may further comprise a memory in which program code for implementing the below described functionalities is stored.
In the method of Fig. 6, it is assumed that the data packets to be transmitted by the node can each be a guaranteed data packet which is subject to a guarantee that the data packet is not dropped and not delayed by more than a certain delay limit or a non-guaranteed data packet which is not subject to the guarantee. Further, it is assumed that the transmission of the data packets is based on one or more resource contingents configured with a maximum amount of resources which is more than a minimum amount of resources required to meet the guarantee. The maximum size of the resource contingent is configured on the basis of a worst case calculation of a delay experienced by a data packet forwarded by the node. The node assigns resources to the resource contingent and identifies resources in excess of the minimum amount as excess resources.
The resource contingent may be managed on the basis of a token bucket, e.g., one of the above-mentioned token buckets 410, 420, 430. The node may then assign resources to the resource contingent by adding tokens to the token bucket. A size of the token bucket may then correspond to the maximum amount of resources of the resource contingent, e.g., as illustrated in the example of Fig. 5. The token bucket may also be configured with a size corresponding to the minimum amount of resources required to meet the guarantee. In this case, a further token bucket may be configured for the excess resources and the node may add tokens to the further token bucket only if the token bucket is full. In other words, the further token bucket may be used for receiving overflowing tokens from the token bucket.
In some scenarios, the received data packets may be part of multiple flows. In this case a corresponding resource contingent with a maximum amount of resources which is more than a minimum amount of resources required to meet the guarantee may be configured for each of the flows. The node may then assign resources to the corresponding resource contingent and identify resources in excess of the minimum amount (in any of the different contingents) as excess resources. If the received data packets are part of multiple flows, the corresponding resource contingent for each of the flows may be managed on the basis of a corresponding token bucket, such as one of the above-mentioned per-flow buckets 410. For each of the flows the node may then assign resources to the corresponding resource contingent by adding tokens to the corresponding token bucket. For each of the flows a size of the corresponding token bucket may correspond to the maximum amount of resources of the corresponding resource contingent, e.g., as illustrated in the example of Fig. 5.
In some scenarios, the corresponding token bucket for each of the flows may also be configured with a size corresponding to the minimum amount of resources required to meet the guarantee. In this case, a further token bucket may be configured for the excess resources and the node may add tokens to the further token bucket only if one of the corresponding token buckets of the resource contingents for the flows is full. Accordingly, the further token bucket may be used for receiving overflowing tokens from the other token buckets. For example, the above-mentioned token bucket 420 could be used for receiving overflowing tokens from the above-mentioned per-flow token buckets 410.
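The overflow variant could be sketched as follows, assuming bucket objects with rate, tokens, and capacity attributes, where the per-flow capacity corresponds to the minimum size bi and the shared bucket plays the role of the token bucket 420.

```python
def fill_with_overflow(per_flow_buckets, overflow_bucket, elapsed_seconds):
    """Sketch: per-flow buckets are sized to b_i only, and tokens that no longer
    fit are collected in a shared overflow bucket used for excess resources."""
    for tb in per_flow_buckets:
        new_level = tb.tokens + tb.rate * elapsed_seconds
        overflow = max(0.0, new_level - tb.capacity)       # tokens beyond b_i overflow
        tb.tokens = min(tb.capacity, new_level)
        overflow_bucket.tokens = min(overflow_bucket.capacity,
                                     overflow_bucket.tokens + overflow)
```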
At step 610, the node may get a data packet to be transmitted. For example, the node may get the data packet, e.g., one of the above-mentioned data packets 200, from an interface with respect to another node of the data network or from a queue in which the data packet is temporarily stored.
At step 620, the node determines whether the data packet is a guaranteed data packet which is subject to a guarantee that the data packet is not dropped and not delayed by more than a certain delay limit or a non-guaranteed data packet which is not subject to the guarantee. In the example of Fig. 6, the node determines whether the data packet is a guaranteed data packet, as indicated by branch "Y". If this is not the case, the node determines that the data packet is a non-guaranteed data packet, as indicated by branch "N".
In some scenarios, each of the data packets received by the node is marked with a value, e.g., the above-mentioned PPV. The node may then determine depending on the value whether the data packet is a guaranteed data packet or a non-guaranteed data packet, e.g., by comparing the value to a threshold. For example, if the value is above the threshold, the node may determine that the data packet is a guaranteed data packet. The determination of step 620 may also depend on a size of the data packet. For example, the value marking the packet may be treated as a value per bit of the data packet, i.e., the value could be proportional to the size of the data packet. If the data packet is found to be a guaranteed data packet, the node may proceed to step 630 and serve the guaranteed data packet. This may involve forwarding the guaranteed data packet based on the resources from the resource contingent. In some cases, the node may wait with the forwarding until sufficient resources are available in the resource contingent. If there are multiple flows with corresponding resource contingents, the node may forward the data packet based on the resources in the resource contingent corresponding to the flow the data packet is part of. From step 630, the node may return to step 610 to proceed with getting a next data packet.
If the data packet is found to be a non-guaranteed data packet, the node may proceed to step 640 and check if sufficient excess resources are available for forwarding the non-guaranteed data packet.
In some scenarios, each of the data packets received by the node is marked with a value, e.g., the above-mentioned PPV. The node may then determine depending on the value whether sufficient excess resources are present. For example, the above-mentioned function g(PPV,L) or f(PPV,L) may be applied to check whether sufficient excess tokens are available.
If sufficient excess tokens are found to be available at step 640, as indicated by branch "Y", the node may proceed to step 650 and serve the non-guaranteed data packet by forwarding the non-guaranteed data packet based on the excess resources. Since in this case the non-guaranteed data packet can be forwarded without significant further delay, it is forwarded within the same delay limit as defined for the guaranteed data packet. Accordingly, even though the data packet is non-guaranteed, it may benefit from the guaranteed delay limit. From step 650, the node may return to step 610 to proceed with getting a next data packet.
If no sufficient excess tokens are found to be available at step 640, as indicated by branch "N", the node may proceed to step 660 and drop the non-guaranteed data packet. From step 660, the node may return to step 610 to proceed with getting a next data packet.
In some scenarios, each of the data packets received by the node is marked with a value, e.g., the above-mentioned PPV. In response to determining that the data packet is a non-guaranteed data packet, the node may then also decide depending on the value whether to drop the data packet. For example, this decision could be part of input filtering of the received data packets, before storing the data packets in a queue, such as the queue 320.
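As a rough illustration of the per-packet decision described for steps 610 to 660, the following sketch combines the value marking (PPV), the per-flow contingent, and the excess tokens, reusing the FlowShaper and TokenBucket classes sketched above. The threshold value, the assumed packet fields (flow, ppv, length), the wait_for_refill() helper, and the simple excess check standing in for g(PPV, L) are assumptions made for this example only.

```python
GUARANTEE_THRESHOLD = 100  # assumed PPV threshold separating guaranteed packets

def sufficient_excess(excess_bucket, ppv, length):
    # Placeholder for the g(PPV, L) / f(PPV, L) check mentioned in the text.
    return excess_bucket.tokens >= length

def wait_for_refill(shaper, now, step=0.001):
    # Assumed helper: advance time a little and refill all buckets.
    now += step
    shaper.refill_all(now)
    return now

def serve(packet, shaper, now):
    """Handle one packet (steps 610-660); packet.flow/.ppv/.length are assumed fields."""
    shaper.refill_all(now)
    bucket = shaper.per_flow[packet.flow]
    if packet.ppv >= GUARANTEE_THRESHOLD:                  # step 620: guaranteed?
        while not bucket.take(packet.length):              # step 630: wait for the contingent
            now = wait_for_refill(shaper, now)
        return "forwarded_guaranteed"
    if sufficient_excess(shaper.excess, packet.ppv, packet.length):  # step 640
        shaper.excess.take(packet.length)                  # step 650: serve from excess
        return "forwarded_excess"
    return "dropped"                                       # step 660
```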
In the following, it will be further explained how the above-mentioned guaranteed transmission of data packets may be efficiently applied in scenarios where the data traffic includes critical sporadic traffic. This may be achieved by classifying the data traffic according to its criticality with respect to loss or excessive delay of data packets, to obtain multiple different classes of the data traffic. The data packets of each class can be treated either as guaranteed data packets or as non-guaranteed data packets. The decision as to which treatment is applied for a certain class depends on the traffic load of the other classes, in particular on whether one or more of the other classes temporarily have zero traffic load due to the data traffic of the class being sporadic. In the following, a class which temporarily has zero traffic load will also be referred to as "empty". As a general rule, it would be desirable to treat the data packets of all classes of critical data traffic as guaranteed data packets. However, this may result in inefficient utilization of resources because some of the classes include sporadic data traffic. For these classes the resource contingent configured on the basis of the worst case delay calculation would not be appropriately utilized in the time intervals when the traffic load of the class is zero. Accordingly, in the concepts as illustrated herein, dimensioning of the resource contingents for meeting the guarantee is performed for different scenarios, including scenarios where one or more of the classes of sporadic data traffic are empty and scenarios where these classes of sporadic data traffic are not empty. In the latter case, the data packets of one or more of the other classes can be treated as non-guaranteed data packets. However, in scenarios where one or more of the classes of sporadic data traffic are empty, the data packets of at least one of these other classes may rather be treated as guaranteed data packets, by configuring a correspondingly dimensioned resource contingent for the at least one other class, at the cost of the resource contingent of the empty class of sporadic data traffic. Accordingly, it becomes possible to temporarily treat the data packets of more classes as guaranteed data packets, without requiring significant extra reservation of resources. According to an example, the following classes of data traffic could be defined:
Class 1: Critical Sporadic, traffic type 1,
Class 2: Critical Sporadic, traffic type 2,
Class 3: Very critical continuous,
Class 4: Normally critical continuous,
Class 5: Less critical continuous.

Depending on whether one or more of the classes of sporadic traffic are empty, the data packets of one or more of the other classes may be treated as guaranteed data packets. For this purpose, different dimensioning groups are defined, each consisting of the classes whose data packets are to be treated as guaranteed data packets. Each dimensioning group applies to a specific scenario of some of the classes being empty. In the illustrated example, the dimensioning groups could be defined as follows:
Dimensioning group 1: Class 1, Class 2, Class 3,
Dimensioning group 2: Class 1, Class 3, Class 4,
Dimensioning group 3: Class 2, Class 3, Class 4,
Dimensioning group 4: Class 3, Class 4, Class 5.
In this example, dimensioning group 1 applies to the scenario where neither class 1 nor class 2 is empty. Dimensioning group 2 applies to the scenario where class 2 is empty, but class 1 is not empty. Dimensioning group 3 applies to the scenario where class 1 is empty, but class 2 is not empty. Dimensioning group 4 applies to the scenario where both class 1 and class 2 are empty. As can be seen, in this example each dimensioning group includes the same number of classes, which means that substantially the same amount of resources needs to be reserved in order to meet the guarantee. However, it is noted that in some scenarios the number of classes in the dimensioning groups could also vary if the classes vary with respect to the required amount of resources which needs to be reserved to meet the guarantee, e.g., due to different expected traffic load. As a general rule, the maximum required resource reservation may be determined by the largest required resource reservation among the individual dimensioning groups. The dimensioning of the resource contingents for each dimensioning group is performed under the assumption that only the data packets of the classes in the dimensioning group need to be treated as guaranteed data packets. For each of these classes, a corresponding resource contingent required to meet the guarantee for the data packets of the class may be configured as explained in connection with Figs. 3 to 6.

In the illustrated example, class 1 and class 2 are classes of critical sporadic data traffic and are considered to be more critical than the other classes. Due to the data traffic of these classes being sporadic, it is rarely transmitted and has time intervals where the traffic load is zero. However, if the traffic load of class 1 or class 2 is non-zero, it is important that the data packets of the data traffic are not lost and not excessively delayed, which is achieved by treating the data packets as guaranteed data packets. In the illustrated example, the data traffic of class 3, class 4, and class 5 is also critical, so that it is important that the data packets of the data traffic are not lost and not excessively delayed, which in some cases may be achieved by treating the data packets as guaranteed data packets. For class 3, which is considered to have the most critical data traffic among classes 3, 4, and 5, the data packets are always treated as guaranteed data packets, irrespective of class 1 or class 2 being empty. Class 4 is considered to have data traffic which is less critical than the data traffic of class 3 but more critical than the data traffic of class 5. The data packets of class 4 are treated as guaranteed data packets if at least one of class 1 and class 2 is empty. For class 5, which is considered to have the least critical data traffic among classes 3, 4, and 5, the data packets are treated as guaranteed data packets only if both class 1 and class 2 are empty. When using a scheduler architecture as illustrated in Fig. 3, one or more corresponding token buckets may be assigned to each of the classes and applied by the interleaved shaper 330 for outputting the data packets. These token buckets may correspond to the per-flow token buckets TBi as explained in connection with Fig. 4. Here, it is noted that depending on the definition of the classes, a set of one or more per-flow token buckets TBi could be assigned to each class.
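A compact way to capture the example above is a lookup from the set of currently empty sporadic classes to the applicable dimensioning group. The following sketch encodes the four groups listed above; the data structures and the function name are assumptions made for illustration only.

```python
# Dimensioning groups of the example: group id -> classes treated as guaranteed.
DIMENSIONING_GROUPS = {
    1: {1, 2, 3},   # neither class 1 nor class 2 empty
    2: {1, 3, 4},   # class 2 empty, class 1 not empty
    3: {2, 3, 4},   # class 1 empty, class 2 not empty
    4: {3, 4, 5},   # both class 1 and class 2 empty
}

def applicable_group(empty_classes):
    """Select the dimensioning group for the given set of empty classes."""
    sporadic_empty = empty_classes & {1, 2}   # only the sporadic classes matter here
    if not sporadic_empty:
        return 1
    if sporadic_empty == {2}:
        return 2
    if sporadic_empty == {1}:
        return 3
    return 4  # both sporadic classes empty
```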
The data packets of the other classes may be treated as non-guaranteed data packets and dropped if there are insufficient available excess resources.
It is noted that the above way of defining classes is only exemplary and that in other examples more or fewer classes and/or other kinds of classes could be defined. Further, the dimensioning groups could also be defined in a different way, depending on the individual requirements of the classes.
In a more general context, the classes may be defined by identifying flows of critical sporadic data traffic and assigning these flows to one or more classes. Multiple classes of critical sporadic data traffic could be defined by distinguishing the flows depending on the underlying traffic types, e.g., utilized protocols, and/or depending on a service or category of service to which the data traffic relates. Further, flows of critical continuous data traffic may be identified and assigned to one or more classes. Multiple classes of critical continuous data traffic may be defined by distinguishing the flows based on the level of criticality to loss or excessive delay of data packets. In addition, also the classes of critical continuous data traffic could be distinguished in terms of the underlying traffic types, e.g., utilized protocols, and/or depending on a service or category of service to which the data traffic relates.
Fig. 7 illustrates an exemplary method which may be applied for defining the dimensioning groups. The method of Fig. 7 may for example be implemented by the management node 150 of Fig. 1. However, if dimensioning of the resource contingents for the guaranteed transmission is accomplished by a node which is involved in the transmission of the data traffic, such as one of the traffic sources 110 or the node 120, the definition of the dimensioning groups according to the method of Fig. 7 could also be implemented by this node.
At step 710, a first dimensioning group is defined. Typically, this first dimensioning group would be defined to include all classes for which the data packets are to be treated as guaranteed data packets in all scenarios. The first dimensioning group may include all classes of critical sporadic data traffic and optionally also one or more classes of critical continuous data traffic for which loss of data packets or excessive delay of data packets is considered to be intolerable. In the above example, this first dimensioning group would correspond to dimensioning group 1, which includes the classes 1, 2, and 3.
At step 720, one or more other dimensioning groups are defined which exclude at least one of the classes with critical sporadic traffic. In the above example, these dimensioning groups would correspond to dimensioning group 2, which excludes class 2, dimensioning group 3, which excludes class 1, and dimensioning group 4, which excludes both class 1 and class 2.
At step 730, the resource contingent configurations may be determined for each of the dimensioning groups. This may involve determining a per-class resource contingent for each of the classes where the data packets are to be treated as guaranteed data packets. The per-class resource contingents may be determined as explained above on the basis of worst case delay calculations.
At step 740, it is checked if capacity requirements can be met for all dimensioning groups. That is to say, in some scenarios it may turn out that for one of the dimensioning groups there is not sufficient resource capacity to support the determined resource contingent configurations. If this is the case, the method may proceed to step 750, as indicated by branch "N".
At step 750, the dimensioning group where the resource contingent configurations have the highest resource demand is modified. This may for example involve removing a class from the dimensioning group. Typically, this would be the class where data traffic is considered to be most tolerant with respect to loss or excessive delay of data packets. It is also possible to replace this class with some other class having potentially lower resource demand. The method may then return to step 730 to again determine the resource contingent configurations for each of the dimensioning groups.
If at step 740 the capacity requirement is found to be met, the method may proceed to step 760, as indicated by branch "Y". At step 760, the determined resource contingent configurations may be applied. If the resource contingent configurations are determined by a management node, such as the management node 150, the management node may indicate the resource contingent configurations to one or more nodes involved in the transmission of the data traffic, such as to the traffic sources 110 or to the node 120. These nodes may then store the resource contingent configurations for each of the different dimensioning groups and select the appropriate resource contingent configurations depending on which of the classes of sporadic critical data traffic is empty. If the resource contingent configurations are determined by a node which itself is involved in the transmission of the data traffic, the node may directly store the resource contingent configurations for the different dimensioning groups.
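The iterative part of Fig. 7 (steps 730 to 760) can be summarized as a dimensioning loop that recomputes the per-class contingents for each group, checks the resulting demand against the available capacity, and trims the most demanding group until the requirement is met. In the sketch below, the helper per_class_contingent() stands for the worst-case-delay-based dimensioning described earlier, and the rule for picking which class to remove (the highest-numbered, i.e., assumed least critical, class) is an assumption.

```python
def dimension_groups(groups, per_class_contingent, link_capacity):
    """groups: dict group_id -> iterable of class ids; returns per-group configurations."""
    while True:
        # Step 730: per-group resource demand from the per-class contingents.
        configs = {g: {c: per_class_contingent(c) for c in classes}
                   for g, classes in groups.items()}
        demand = {g: sum(cfg.values()) for g, cfg in configs.items()}

        # Step 740: can the capacity requirement be met for all groups?
        if max(demand.values()) <= link_capacity:
            return configs                       # step 760: apply/store the configurations

        # Step 750: modify the most demanding group, e.g. by dropping its most
        # tolerant class (assumed here to be the class with the highest index).
        worst = max(demand, key=demand.get)
        groups[worst] = sorted(groups[worst])[:-1]
```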
As can be seen, the method of Fig. 7 may be applied for efficiently predetermining resource contingent configurations. Depending on whether one or more of the classes of critical sporadic data traffic is empty, these predetermined resource contingent configurations may then be selected and applied by a node which transmits the data traffic.
Fig. 8 illustrates an exemplary method which may be applied by a node which transmits the data traffic. The method of Fig. 8 may for example be applied by the traffic sources 110 or the node 120. The method of Fig. 8 may be performed on the basis of predetermined resource contingent configurations as for example obtained by the method of Fig. 7.
At step 810, the node determines which classes are empty. This may be accomplished on the basis of a fill level of a per-class resource contingent configured for each of the classes. For example, such a per-class resource contingent could be defined by one or more token buckets, and the class could be determined as being empty if the token bucket is full or filled beyond a threshold. Such a threshold could for example be defined by the minimum size bi of the token bucket as explained in connection with Fig. 5.
At step 820, one or more classes where the data packets are to be treated as guaranteed data packets are selected. This is accomplished depending on the classes found to be empty at step 810. Specifically, if one or more classes of sporadic critical data traffic are found to be empty at step 810, there is currently no need for treating data packets of these classes as guaranteed data packets. Accordingly, the data packets of one or more other classes may instead be treated as guaranteed data packets. Each possible combination of empty classes may be mapped to a corresponding set of classes where the data packets are to be treated as guaranteed data packets. Such a set of classes may for example correspond to one of the above-mentioned dimensioning groups.

At step 830, a resource contingent configuration is selected for the classes where the data packets are to be treated as guaranteed data packets. For example, the resource contingent configuration may be selected which was determined for the dimensioning group corresponding to the set of classes as determined at step 820. The data traffic may then be transmitted on the basis of the selected resource contingent configuration.
At step 840, it may be checked if there was a change of the empty classes. For example, one or more further classes may have become empty, which may be detected as explained in connection with step 810 on the basis of the fill level of the per-class resource contingents. Further, one or more classes which were found to be empty may again have non-zero traffic load. The latter case may as well be detected on the basis of the per-class resource contingents. For example, even if a class is found to be empty, a per-class resource contingent could be configured for this class, however with a smaller maximum size than needed to enable treatment of the data packets of this class as guaranteed data packets. For example, such a smaller resource contingent could be based on a token bucket for non-guaranteed data traffic as explained in connection with Fig. 4. A class may be defined to include multiple flows. In this case, the per-class resource contingent may include a corresponding per-flow token bucket for each of the multiple flows, such as the above-mentioned per-flow token buckets 410. The class may then be determined to be empty if all of the per-flow token buckets of the class are full.
As long as the per-class resource contingent remains full, the class may be considered as still being empty. However, if the per-class resource contingent is no longer full, the class may be determined as being no longer empty. In response to detecting a change of the empty classes, the method may return to step 810 to reassess which classes are empty and reselect the classes for which the data packets are to be treated as guaranteed data packets, as indicated by branch "Y". If no change of the empty classes is found, the check of step 840 may be repeated, e.g., according to a periodic pattern or in response to a triggering event. For example, the check of step 840 could be triggered at each transmission of a data packet.
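The runtime behavior of Fig. 8 can be pictured as a small set of helpers: determine the empty classes from the fill level of their per-flow buckets, select the predetermined configuration of the matching dimensioning group (for example via the applicable_group() helper sketched above), and reselect when the set of empty classes changes. The bucket interface (tokens, size) and the configuration dictionary are assumptions for this sketch.

```python
def empty_classes(class_buckets):
    # Step 810: a class counts as empty when all of its per-flow buckets are full.
    return {c for c, buckets in class_buckets.items()
            if all(b.tokens >= b.size for b in buckets)}

def select_configuration(class_buckets, configs):
    # Steps 820/830: map the empty classes to a dimensioning group and pick its
    # predetermined resource contingent configuration.
    return configs[applicable_group(empty_classes(class_buckets))]

def reassess_if_changed(class_buckets, configs, previous_empty):
    # Step 840: on a change of the empty classes, reselect the configuration;
    # may be called periodically or at each packet transmission.
    current_empty = empty_classes(class_buckets)
    if current_empty != previous_empty:
        return select_configuration(class_buckets, configs), current_empty
    return None, previous_empty
```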
Fig. 9 shows a flowchart for illustrating a method of handling data traffic in a data network. The method of Fig. 9 may be utilized for implementing the illustrated concepts in a node of the data network which transmits the data traffic, such as one of the above-mentioned nodes 110, 120. If a processor-based implementation of the node is used, the steps of the method may be performed by one or more processors of the node. In such a case the node may further comprise a memory in which program code for implementing the below described functionalities is stored. In the method of Fig. 9, it is assumed that data packets can be treated either as a guaranteed data packet which is subject to a guarantee that the data packet is not dropped and not delayed by more than a certain delay limit or as a non-guaranteed data packet which is not subject to the guarantee, e.g., by handling the data packet as explained in connection with Figs. 3 to 6.
At step 910, the node classifies the data packets into multiple different classes. This may be accomplished according to criticality of delay of the data packet and/or loss of the data packet. For example, the classes may include a first set of one or more classes with data packets of sporadic traffic which is critical to delay and/or loss of data packet, such as the classes 1 and 2 of the above-mentioned example. Further, the classes may include a second set of one or more classes with data packets of continuous data traffic which is less critical to delay and/or loss of data packet than the sporadic traffic of the at least one first class, such as the classes 3, 4, and 5 of the above-mentioned example.
At step 920, the node determines those of the classes which have zero traffic load. For example, for each of the classes a per-class resource contingent for transmission of the data packets of the class may be configured. Depending on the fill level of the respective per-class resource contingent, the node may then determine whether the class has zero traffic load. If the per-class resource contingent is based on at least one token bucket, the node may determine the class as having zero traffic load when the at least one token bucket is full. A class may be defined to include multiple flows. In this case, the per-class resource contingent may include a corresponding per-flow token bucket for each of the multiple flows. The class may then be determined to have zero traffic load if all of the per-flow token buckets of the class are full.
At step 930, the node decides, for at least one of the other classes, between transmitting the data packets of the class as guaranteed data packets and transmitting the data packets as non-guaranteed data packets which are not subject to the guarantee. This is accomplished depending on the classes which at step 920 were found to have zero traffic load.
At step 940, the node may configure resource contingents for transmission of the data packets. For the data classes of which the data packets are to be treated as guaranteed data packets, this may be accomplished based on a worst case calculation of a delay experienced by a data packet. The resource contingent may be configured with a maximum amount of resources which is at least equal to a minimum amount of resources required to meet the guarantee. In some scenarios, the resource contingent may be configured with a maximum amount of resources which is more than a minimum amount of resources required to meet the guarantee. The non-guaranteed data packets may then be transmitted based on excess resources in excess of the minimum amount required to meet the guarantee, e.g., as explained in connection with Figs. 3 to 6.
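As a small numerical illustration of step 940, a contingent may be given a maximum that exceeds the minimum needed for the guarantee, so that the difference is available as excess resources for non-guaranteed packets. The figures below are arbitrary assumptions, not values taken from the description.

```python
# Illustrative dimensioning of one resource contingent (all numbers assumed).
min_for_guarantee = 10 * 1500 * 8          # e.g. ten maximum-size frames, in bits
contingent_max = 2 * min_for_guarantee     # maximum amount configured for the contingent
excess_budget = contingent_max - min_for_guarantee
# Guaranteed packets are served from the minimum part of the contingent, while
# non-guaranteed packets may only consume the excess_budget portion.
```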
The resource contingent for transmission of the guaranteed data packets may be configured on the basis of a predetermined resource contingent configuration. For example, such predetermined configuration may have been obtained before by the method of Fig. 7.
Fig. 10 shows a block diagram for illustrating functionalities of a network node 1000 which operates according to the method of Fig. 9. As illustrated, the network node 1000 may optionally be provided with a module 1010 configured to classify data packets into multiple different classes, such as explained in connection with step 910. Further, the network node 1000 may be provided with a module 1020 configured to determine one or more classes having zero traffic load, such as explained in connection with step 920. Further, the network node 1000 may be provided with a module 1030 configured to decide the treatment of the data packets of one or more other classes, such as explained in connection with step 930. Further, the network node 1000 may be provided with a module 1040 configured to configure resource contingents, such as explained in connection with step 940.
It is noted that the network node 1000 may include further modules for implementing other functionalities, such as known functionalities of a switch, router, or gateway for a data network. Further, it is noted that the modules of the network node 1000 do not necessarily represent a hardware structure of the network node 1000, but may also correspond to functional elements, e.g., implemented by hardware, software, or a combination thereof.
It is noted, that in some scenarios the configuration of the resource contingents as described in connection with step 940 and module 1040 could also be performed by some other node, e.g., another node involved in the transmission of the data packets on the redundant links, or by a separate management node, such as the management node 150. Such a node might then not need to perform the other illustrated steps, e.g., steps 910, 920, or 930, and would not need to be provided with the other illustrated modules, e.g., modules 1010, 1020, or 1030.
Fig. 11 shows a flowchart for illustrating a method of handling data traffic in a data network. The method of Fig. 11 may be utilized for implementing the illustrated concepts in a node of the data network which is responsible for determining resource contingent configurations, such as the above-mentioned management node 150. However, it is noted that the method could also be implemented by a node which actually transmits the data traffic, such as one of the nodes 110, 120. If a processor-based implementation of the node is used, the steps of the method may be performed by one or more processors of the node. In such a case the node may further comprise a memory in which program code for implementing the below described functionalities is stored.
In the method of Fig. 11, it is assumed that data packets can be treated either as a guaranteed data packet which is subject to a guarantee that the data packet is not dropped and not delayed by more than a certain delay limit or as a non-guaranteed data packet which is not subject to the guarantee, e.g., by handling the data packet as explained in connection with Figs. 3 to 6.
At step 1110, the node determines multiple different classes of data packets. The classes may be defined according to criticality of delay of the data packet and/or loss of the data packet. For example, the classes may include a first set of one or more classes with data packets of sporadic traffic which is critical to delay and/or loss of data packet, such as the classes 1 and 2 of the above-mentioned example. Further, the classes may include a second set of one or more classes with data packets of continuous data traffic which is less critical to delay and/or loss of data packet than the sporadic traffic of the at least one first class, such as the classes 3, 4, and 5 of the above-mentioned example.
At step 1120, the node determines multiple different groups of the classes for which the data packets are to be treated as guaranteed data packets. If the groups include a first set of one or more classes with data packets of sporadic data traffic which is critical to delay and/or loss of data packet, and a second set of one or more classes with data packets of continuous data traffic which is less critical to delay and/or loss of data packet than the sporadic traffic of the at least one first class, the groups may include a first group including at least all the classes of the first set and at least one second group including less than all the classes of the first set and at least one class of the second set. Examples of such groups are the above-mentioned dimensioning groups, where dimensioning group 1 would correspond to the first group and dimensioning groups 2, 3, and 4 would correspond to the at least one second group.

At step 1130, the node determines configurations of resource contingents for transmission of the data packets. This is accomplished for each of the groups determined at step 1120. For the classes of which the data packets are to be treated as guaranteed data packets, the resource contingent may be determined based on a worst case calculation of a delay experienced by a data packet. The resource contingent may define a maximum amount of resources which is at least equal to a minimum amount of resources required to meet the guarantee. In some scenarios, the resource contingent may define a maximum amount of resources which is more than a minimum amount of resources required to meet the guarantee. The non-guaranteed data packets may then be transmitted based on excess resources in excess of the minimum amount required to meet the guarantee, e.g., as explained in connection with Figs. 3 to 6.
The resource contingent configurations may then be applied for transmission of the data traffic, e.g., on the basis of the method of Fig. 9.
Fig. 12 shows a block diagram for illustrating functionalities of a network node 1200 which operates according to the method of Fig. 11. As illustrated, the network node 1200 may optionally be provided with a module 1210 configured to determine multiple different classes of data packets, such as explained in connection with step 1110. Further, the network node 1200 may be provided with a module 1220 configured to determine groups of the classes, such as explained in connection with step 1120. Further, the network node 1200 may be provided with a module 1230 configured to determine resource contingent configurations, such as explained in connection with step 1130.
It is noted that the network node 1200 may include further modules for implementing other functionalities, such as known functionalities of a management node, switch, router, or gateway for a data network. Further, it is noted that the modules of the network node 1200 do not necessarily represent a hardware structure of the network node 1200, but may also correspond to functional elements, e.g., implemented by hardware, software, or a combination thereof.
Further, it is noted that the illustrated concepts could also be implemented in a system including a first node operating according to the method of Fig. 9 and a second node operating according to the method of Fig. 11. In this case, the second node could determine the configurations of resource contingents to be applied by the first node depending on the classes which are found to have zero traffic load.
Fig. 13 illustrates a processor-based implementation of a network node 1300 which may be used for implementing the above described concepts. The network node 1300 may for example correspond to a switch, router or gateway of the data network, or to a management node. As illustrated, the network node 1300 includes an input interface 1310 and an output interface 1320. The input interface 1310 may be used for receiving data packets, e.g., from other nodes of the data network. The output interface 1320 may be used for transmitting the data packets, e.g., to other nodes of the data network. If the network node 1300 corresponds to a management node, the input interface 1310 and output interface 1320 could also be replaced by one or more management or control interfaces with respect to other nodes of the data network.
Further, the network node 1300 may include one or more processors 1350 coupled to the interfaces 1310, 1320 and a memory 1360 coupled to the processor(s) 1350. By way of example, the interfaces 1310, 1320, the processor(s) 1350, and the memory 1360 could be coupled by one or more internal bus systems of the network node 1300. The memory 1360 may include a Read Only Memory (ROM), e.g., a flash ROM, a Random Access Memory (RAM), e.g., a Dynamic RAM (DRAM) or Static RAM (SRAM), a mass storage, e.g., a hard disk or solid state disk, or the like. As illustrated, the memory 1360 may include software 1370, firmware 1380, and/or control parameters 1390. The memory 1360 may include suitably configured program code to be executed by the processor(s) 1350 so as to implement the above-described functionalities of a network node, such as explained in connection with Fig. 9 or 11.
It is to be understood that the structures as illustrated in Fig. 13 are merely schematic and that the network node 1300 may actually include further components which, for the sake of clarity, have not been illustrated, e.g., further interfaces or processors. Also, it is to be understood that the memory 1360 may include further program code for implementing known functionalities of a network node, e.g., known functionalities of a switch, router, gateway, or management node. According to some embodiments, also a computer program may be provided for implementing functionalities of the network node 1300, e.g., in the form of a physical medium storing the program code and/or other data to be stored in the memory 1360 or by making the program code available for download or by streaming.
As can be seen, the concepts as described above may be used for efficiently transmitting data traffic in scenarios where the data traffic includes sporadic critical data traffic and other critical data traffic. In particular, by deciding depending on the traffic load of a class of sporadic critical data traffic whether to treat the data packets of one or more other classes as guaranteed data packets, resources can be utilized in an efficient manner without adversely affecting the sporadic critical data traffic. It is to be understood that the examples and embodiments as explained above are merely illustrative and susceptible to various modifications. For example, the illustrated concepts may be applied in connection with various kinds of data networks, without limitation to the above-mentioned example of a transport network part of a wireless communication network. Further, various kinds of alternative mechanisms could be used for providing the guarantee for transmission of the data packets. Further, the illustrated concepts may be applied in various kinds of nodes, including without limitation to the above-mentioned examples of a switch, router, or gateway. Moreover, it is to be understood that the above concepts may be implemented by using correspondingly designed software to be executed by one or more processors of an existing device, or by using dedicated device hardware. Further, it should be noted that the illustrated nodes or devices may each be implemented as a single device or as a system of multiple interacting devices.

Claims

1. A method of handling data traffic in a data network, the method comprising:
a node of the data network (110, 120; 1000; 1300) classifying data packets (200) into multiple different classes;
the node (110, 120; 1000; 1300) determining those of the classes which have zero traffic load; and
depending on the classes which have zero traffic load, the node (110, 120; 1000; 1300) deciding for at least one of the other classes between transmitting the data packets (200) of the class as guaranteed data packets (200) which are subject to a guarantee of not being dropped and not delayed by more than a certain delay limit and transmitting the data packets (200) as non-guaranteed data packets (200) which are not subject to the guarantee.
2. The method according to claim 1,
wherein for each of the classes a per-class resource contingent for transmission of the data packets (200) of the class is configured, and
wherein said determining of the classes having zero traffic load is based on a fill level of the per-class resource contingents.
3. The method according to claim 2,
wherein the per-class resource contingent is based on at least one token bucket; and wherein the class is determined to have zero traffic load when the at least one token bucket is full.
4. The method according to any one of the preceding claims,
wherein based on a worst case calculation of a delay experienced by a data packet (200) a resource contingent for transmission of the guaranteed data packets (200) is configured with a maximum amount of resources which corresponds to at least a minimum amount of resources required to meet the guarantee.
5. The method according to claim 4,
wherein the resource contingent for transmission of the guaranteed data packets (200) is configured with a maximum amount of resources which is more than a minimum amount of resources required to meet the guarantee.
6. The method according to claim 5,
wherein the non-guaranteed data packets (200) are transmitted based on excess resources which are in excess of the minimum amount required to meet the guarantee.
7. The method according to any one of claims 4 to 6,
wherein the resource contingent for transmission of the guaranteed data packets (200) is configured on the basis of a predetermined configuration.
8. The method according to any one of the preceding claims,
wherein the data packets (200) are classified according to criticality of delay of the data packet (200) and/or loss of the data packet.
9. The method according to any one of the preceding claims,
wherein the classes comprise:
a first set of one or more classes with data packets (200) of sporadic traffic which is critical to delay and/or loss of data packet, and
a second set of one or more classes with data packets (200) of continuous traffic which is less critical to delay and/or loss of data packet (200) than the sporadic traffic of the at least one first class.
10. A method of handling data traffic in a data network, the method comprising:
a node (110, 120; 150, 1000; 1300) of the data network determining multiple different classes of data packets (200);
the node (110, 120; 150, 1000; 1300) determining multiple different groups of the classes for which the data packets (200) are to be treated as guaranteed data packets (200) which are subject to a guarantee of not being dropped and not delayed by more than a certain delay limit; and
for each of the groups, the node (110, 120; 150, 1000; 1300) determining based on a worst case calculation of a delay experienced by a data packet (200) a configuration of a resource contingent for transmission of the guaranteed data packets (200), the configuration defining a maximum amount of resources which corresponds to at least a minimum amount of resources required to meet the guarantee.
11. The method according to claim 10,
wherein for each of the groups the configuration of the resource contingent for transmission of the guaranteed data packets (200) defines a maximum amount of resources which is more than a minimum amount of resources required to meet the guarantee.
12. The method according to claim 11,
wherein non-guaranteed data packets (200), which are not subject to the guarantee, are transmitted based on excess resources which are in excess of the minimum amount required to meet the guarantee.
13. The method according to any one of claims 10 to 12,
wherein the data packets (200) are classified according to criticality of delay of the data packet (200) and/or loss of the data packet.
14. The method according to any one of claims 10 to 13,
wherein the classes comprise:
a first set of one or more classes with data packets (200) of sporadic traffic which is critical to delay and/or loss of data packet, and
a second set of one or more classes with data packets (200) of continuous traffic which is less critical to delay and/or loss of data packet (200) than the sporadic traffic of the at least one first class.
15. The method according to claim 14,
wherein the groups comprise:
a first group including at least all the classes of the first set, and
at least one second group including less than all the classes of the first set and at least one class of the second set.
16. A node (110; 120; 1000; 1300) for a data network, the node (110; 120; 1000; 1300) being configured to:
classify data packets (200) into multiple different classes;
determine those of the classes which have zero traffic load; and
depending on the classes which have zero traffic load, decide for at least one of the other classes between transmitting the data packets (200) of the class as guaranteed data packets (200) which are subject to a guarantee of not being dropped and not delayed by more than a certain delay limit and transmitting the data packets (200) as non-guaranteed data packets (200) which are not subject to the guarantee.
17. The node (110; 120; 1000; 1300) according to claim 16,
wherein for each of the classes a per-class resource contingent for transmission of the data packets (200) of the class is configured, and
wherein said determining of the classes having zero traffic load is based on a fill level of the per-class resource contingents.
18. The node (110; 120; 1000; 1300) according to claim 17,
wherein the per-class resource contingent is based on at least one token bucket; and wherein the class is determined to have zero traffic load when the at least one token bucket is full.
19. The node (110; 120; 1000; 1300) according to any one of claims 16 to 18,
wherein based on a worst case calculation of a delay experienced by a data packet (200) a resource contingent for transmission of the guaranteed data packets (200) is configured with a maximum amount of resources which corresponds to at least a minimum amount of resources required to meet the guarantee.
20. The node (110; 120; 1000; 1300) according to claim 19,
wherein the resource contingent for transmission of the guaranteed data packets (200) is configured with a maximum amount of resources which is more than a minimum amount of resources required to meet the guarantee.
21. The node (110; 120; 1000; 1300) according to claim 20,
wherein the non-guaranteed data packets (200) are transmitted based on excess resources which are in excess of the minimum amount required to meet the guarantee.
22. The node (110; 120; 1000; 1300) according to any one of claims 19 to 21,
wherein the resource contingent for transmission of the guaranteed data packets (200) is configured on the basis of a predetermined configuration.
23. The node (110; 120; 1000; 1300) according to any one of claims 16 to 22,
wherein the data packets (200) are classified according to criticality of delay of the data packet (200) and/or loss of the data packet.
24. The node (110; 120; 1000; 1300) according to any one of claims 16 to 23,
wherein the classes comprise:
a first set of one or more classes with data packets (200) of sporadic traffic which is critical to delay and/or loss of data packet, and
a second set of one or more classes with data packets (200) of continuous traffic which is less critical to delay and/or loss of data packet (200) than the sporadic traffic of the at least one first class.
25. The node (110; 120; 1000; 1300) according to claim 16,
wherein the node (110; 120; 1000; 1300) is configured to perform a method according to any one of claims 1 to 9.
26. A node (110, 120; 150, 1200; 1300) for a data network, the node (110, 120; 150, 1000; 1300) being configured to:
determine multiple different classes of data packets (200);
determine multiple different groups of the classes for which the data packets (200) are to be treated as guaranteed data packets (200) which are subject to a guarantee of not being dropped and not delayed by more than a certain delay limit; and
for each of the groups, determine based on a worst case calculation of a delay experienced by a data packet (200) a configuration of a resource contingent for transmission of the guaranteed data packets (200), the configuration defining a maximum amount of resources which corresponds to at least a minimum amount of resources required to meet the guarantee.
27. The node (110, 120; 150, 1200; 1300) according to claim 26,
wherein for each of the groups the configuration of the resource contingent for transmission of the guaranteed data packets (200) defines a maximum amount of resources which is more than a minimum amount of resources required to meet the guarantee.
28. The node (110, 120; 150, 1200; 1300) according to claim 27,
wherein non-guaranteed data packets (200), which are not subject to the guarantee, are transmitted based on excess resources which are in excess of the minimum amount required to meet the guarantee.
29. The node (110, 120; 150, 1200; 1300) according to any one of claims 26 to 28, wherein the data packets (200) are classified according to criticality of delay of the data packet (200) and/or loss of the data packet.
30. The node (110, 120; 150, 1200; 1300) according to any one of claims 26 to 29, wherein the classes comprise:
a first set of one or more classes with data packets (200) of sporadic traffic which is critical to delay and/or loss of data packet, and
a second set of one or more classes with data packets (200) of continuous traffic which is less critical to delay and/or loss of data packet (200) than the sporadic traffic of the at least one first class.
31. The node (110, 120; 150, 1200; 1300) according to claim 30,
wherein the groups comprise:
a first group including at least all the classes of the first set, and
at least one second group including less than all the classes of the first set and at least one class of the second set.
32. The node (110, 120; 150, 1200; 1300) according to claim 26,
wherein the node (110, 120; 150, 1200; 1300) is configured to perform a method according to any one of claims 10 to 15.
33. A computer program comprising program code to be executed by at least one processor (1350) of a node (110; 120; 150; 1000; 1200; 1300) for a data network, wherein execution of the program code causes the node (110; 120; 150; 1000; 1300) to perform the steps of a method according to any one of claims 1 to 15.
34. A computer program comprising program code to be executed by at least one processor (1350) of a node (110; 120; 150; 1000; 1200; 1300) for a data network, wherein execution of the program code causes the node (110; 120; 150; 1000; 1200; 1300) to perform the steps of a method according to any one of claims 1 to 15.
PCT/EP2016/076823 2016-11-07 2016-11-07 Efficient handling of loss and/or delay sensitive sporadic data traffic WO2018082788A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/EP2016/076823 WO2018082788A1 (en) 2016-11-07 2016-11-07 Efficient handling of loss and/or delay sensitive sporadic data traffic


Publications (1)

Publication Number Publication Date
WO2018082788A1 WO2018082788A1 (en)

Family

ID=57256294


Country Status (1)

Country Link
WO (1) WO2018082788A1 (en)



Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20010053149A1 (en) * 2000-05-05 2001-12-20 Li Mo Method and system for quality of service (QoS) support in a packet-switched network
US20040213265A1 (en) * 2003-04-24 2004-10-28 France Telecom Method and a device for implicit differentiation of quality of service in a network
US20070104102A1 (en) * 2005-11-10 2007-05-10 Broadcom Corporation Buffer management and flow control mechanism including packet-based dynamic thresholding

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108848532A (en) * 2018-06-06 2018-11-20 Oppo广东移动通信有限公司 A kind of data transfer optimization method, apparatus and computer storage medium
CN108848532B (en) * 2018-06-06 2022-01-28 Oppo广东移动通信有限公司 Data transmission optimization method and device and computer storage medium
WO2022060282A1 (en) * 2020-09-17 2022-03-24 Telefonaktiebolaget Lm Ericsson (Publ) Systems and methods for ppv information in ethernet, ipv4, ipv6, and mpls packet/frame headers
EP4214915A4 (en) * 2020-09-17 2024-03-27 Ericsson Telefon Ab L M Systems and methods for ppv information in ethernet, ipv4, ipv6, and mpls packet/frame headers


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 16793838

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 16793838

Country of ref document: EP

Kind code of ref document: A1