WO2020191014A1 - Latency based forwarding of packets with dynamic priority destination policies - Google Patents

Latency based forwarding of packets with dynamic priority destination policies Download PDF

Info

Publication number
WO2020191014A1
Authority
WO
WIPO (PCT)
Prior art keywords
packet
packets
node
delay
network
Prior art date
Application number
PCT/US2020/023289
Other languages
French (fr)
Inventor
Toerless Eckert
Alexander Clemm
Original Assignee
Futurewei Technologies, Inc.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Futurewei Technologies, Inc. filed Critical Futurewei Technologies, Inc.
Publication of WO2020191014A1 publication Critical patent/WO2020191014A1/en

Links

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L47/00 Traffic control in data switching networks
    • H04L47/50 Queue scheduling
    • H04L47/56 Queue scheduling implementing delay-aware scheduling
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L45/00 Routing or path finding of packets in data switching networks
    • H04L45/12 Shortest path evaluation
    • H04L45/121 Shortest path evaluation by minimising delays
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L47/00 Traffic control in data switching networks
    • H04L47/10 Flow control; Congestion control
    • H04L47/18 End to end
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L47/00 Traffic control in data switching networks
    • H04L47/10 Flow control; Congestion control
    • H04L47/24 Traffic characterised by specific attributes, e.g. priority or QoS
    • H04L47/2416 Real-time traffic
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L47/00 Traffic control in data switching networks
    • H04L47/10 Flow control; Congestion control
    • H04L47/28 Flow control; Congestion control in relation to timing considerations
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L47/00 Traffic control in data switching networks
    • H04L47/10 Flow control; Congestion control
    • H04L47/28 Flow control; Congestion control in relation to timing considerations
    • H04L47/283 Flow control; Congestion control in relation to timing considerations in response to processing delays, e.g. caused by jitter or round trip time [RTT]
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L47/00 Traffic control in data switching networks
    • H04L47/50 Queue scheduling
    • H04L47/56 Queue scheduling implementing delay-aware scheduling
    • H04L47/564 Attaching a deadline to packets, e.g. earliest due date first

Definitions

  • the method also includes: determining, by the node, for each of the packets a static priority for the packet based at least on the packet's maximum delay and the packet's updated accumulated delay; determining, by the node, for each of the packets a queueing rank for the packet based at least on the maximum delay; maintaining, by the node, a plurality of ranked queues of packets for transmission from the node, the plurality of ranked queues being ranked based upon the static priority of packets enqueued therein; and entering, by the node, each of the packets into one of the ranked queues.
  • a system for transmitting packets from a sending network device to a receiving network device includes one or more nodes connectable in series to transfer a plurality of packets from the sending network device to the receiving network device.
  • Each of the nodes comprises: a network interface configured to receive and forward the packets over the network, each of the packets including a network header, indicating the receiving network device, and a forwarding header, indicating an accumulated delay experienced by the packet since being transmitted by the sending network device and a maximum latency for the transfer of the packet from the sending network device to the receiving network device; one or more queues configured to store packets to forward over the network; and one or more processors coupled to the one or more queues and the network interface.
  • the one or more processors are further configured to: for each packet, determine a static priority from the packet's maximum delay and the packet's updated accumulated delay; rank the plurality of queues based upon the static priority of packets enqueued therein; for each packet, determine a queueing rank from the maximum delay; and enter each of the packets into one of the ranked queues by determining, based on the packet's static priority, into which of the ranked queues to enter the packet and entering the packet into the determined one of the ranked queues based on the determined queueing rank for the packet.
  • the one or more processors for each of the nodes are further configured to: receive a number of hops from the node to the receiving network device, and wherein the maximum delay for each of the packets is further determined based on the number of hops.
  • FIG. 2 illustrates an exemplary network of a series of nodes, such as routers, which may be included in one of the networks shown in FIG. 1.
  • FIG. 3 is a flowchart for one embodiment of the latency based forwarding of packets as illustrated in the example of FIG. 2.
  • FIG. 4 is a schematic diagram illustrating exemplary details of a network device, or node, such as shown in the network of FIG. 2.
  • FIG. 6 is a high level overview for an embodiment of end-to-end latency based forwarding.
  • FIG. 7 considers the node behavior for a pair of nodes from FIG. 6 in more detail.
  • FIG. 10 is a schematic diagram of multiple nodes connected to serve as inputs to a single node, which can lead to burst accumulation of packets at the receiving node.
  • FIGs. 12 and 13 are a schematic representation of an embodiment for latency based forwarding of packets using strict priority destination policy.
  • FIG. 14 is a flowchart of one embodiment of the operation of latency based forwarding that can include destination policies and also strict priority destination policies.
  • FIG. 15 is a schematic representation of how a sequence of packets with higher static priorities can cause a low static priority packet to be displaced from being forwarded.
  • FIG. 16 illustrates the use of a latency based forwarding embodiment using dynamic priority destination to more “fairly” forward packets with different static latencies.
  • High-precision networks demand high-precision service-level guarantees that can be characterized through a set of Service Level Objectives (SLOs), which are performance goals for a service under certain well-defined constraints.
  • SLOs Service Level Objectives
  • Examples of applications where in-time guarantees can be of use include Virtual Reality/Augmented Reality (VR/AR), which can have stringent limits on the maximum motion-to-photon time, since the dizziness and reduced quality of experience that result from longer delays may severely reduce user acceptance.
  • VR/AR Virtual Reality/Augmented Reality
  • Another example is for Tactile Internet having stringent limits to delay for haptic feedback, as a lack of sensation of being in control or sluggish control would make many applications infeasible.
  • Further examples can include industrial controllers, which can have stringent limits on feedback control loops, and applications such as vehicle to everything (V2X), remote-controlled robots and drones, and similar cases.
  • V2X vehicle to everything
  • the techniques presented in the following discussion provide a system that delivers packets that traverse a network in accordance with a quantified delay SLO.
  • the SLO indicates a delay range with quantifiable lower and upper bounds that can be varied for each individual packet.
  • Previous networking technologies do not provide this capability, but are instead typically engineered to "minimize" delay by techniques ranging from dimensioning links to reserving resources and performing admission control functions. These previous approaches are not engineered to hit a specific quantified delay target, and there is no networking algorithm that would hit that delay as part of a function of the network itself.
  • FIG. 1 illustrates an exemplary communication system 100 with which embodiments of the present technology can be used.
  • the communication system 100 includes, for example, user equipment 110A, 110B, and 110C, radio access networks (RANs) 120A and 120B, a core network 130, a public switched telephone network (PSTN) 140, the Internet 150, and other networks 160.
  • RANs radio access networks
  • PSTN public switched telephone network
  • Additional or alternative networks include private and public data-packet networks, including corporate intranets. While certain numbers of these components or elements are shown in the figure, any number of these components or elements may be included in the system 100.
  • the communication system 100 can include a wireless network, which may be a fifth generation (5G) network including at least one 5G base station which employs orthogonal frequency-division multiplexing (OFDM) and/or non-OFDM and a transmission time interval (TTI) shorter than 1 millisecond (e.g. 100 or 200 microseconds), to communicate with the communication devices.
  • 5G fifth generation
  • the term "base station" may also be used to refer to any of the eNB and the 5G BS (gNB).
  • the network may further include a network server for processing information received from the communication devices via the at least one eNB or gNB.
  • System 100 enables multiple users to transmit and receive data and other content.
  • the system 100 may implement one or more channel access methods, such as but not limited to code division multiple access (CDMA), time division multiple access (TDMA), frequency division multiple access (FDMA), orthogonal FDMA (OFDMA), or single-carrier FDMA (SC-FDMA).
  • CDMA code division multiple access
  • TDMA time division multiple access
  • FDMA frequency division multiple access
  • OFDMA orthogonal FDMA
  • SC-FDMA single-carrier FDMA
  • Each UE 110 represents any suitable end user device and may include such devices (or may be referred to) as a user equipment/device, wireless transmit/receive unit (WTRU), mobile station, fixed or mobile subscriber unit, pager, cellular telephone, personal digital assistant (PDA), smartphone, laptop, computer, touchpad, wireless sensor, wearable device, consumer electronics device, device-to-device (D2D) user equipment, machine type user equipment or user equipment capable of machine-to-machine (M2M) communication, iPads, Tablets, mobile terminals, laptop embedded equipment (LEE), laptop mounted equipment (LME), USB dongles, or other non-limiting examples of user equipment or target device.
  • PDA personal digital assistant
  • D2D device-to-device
  • M2M machine type user equipment or user equipment capable of machine-to-machine
  • M2M machine-to-machine
  • iPads, Tablets mobile terminals
  • laptop embedded equipment (LEE) laptop mounted equipment (LME)
  • USB dongles or other non-limiting examples of user equipment or target device
  • the RANs 120A, 120B include one or more base stations (BSs) 170A, 170B, respectively.
  • the RANs 120A and 120B can be referred to individually as a RAN 120, or collectively as the RANs 120.
  • the base stations (BSs) 170A and 170B can be referred to individually as a base station (BS) 170, or collectively as the base stations (BSs) 170.
  • Each of the BSs 170 is configured to wirelessly interface with one or more of the UEs 1 10 to enable access to the core network 130, the PSTN 140, the Internet 150, and/or the other networks 160.
  • the base stations (BSs) 170 may include one or more of several well-known devices, such as a base transceiver station (BTS), a Node-B (NodeB), an evolved NodeB (eNB), a next (fifth) generation (5G) NodeB (gNB), a Home NodeB, a Home eNodeB, a site controller, an access point (AP), or a wireless router, or a server, router, switch, or other processing entity with a wired or wireless network.
  • BTS base transceiver station
  • NodeB Node-B
  • eNB evolved NodeB
  • 5G NodeB gNB
  • gNB next (fifth) generation
  • the BS 170B forms part of the RAN 120B, which may include one or more other BSs 170, elements, and/or devices.
  • Each of the BSs 170 operates to transmit and/or receive wireless signals within a particular geographic region or area, sometimes referred to as a "cell."
  • multiple-input multiple-output (MIMO) technology may be employed having multiple transceivers for each cell.
  • the BSs 170 communicate with one or more of the UEs 1 10 over one or more air interfaces (not shown) using wireless communication links.
  • the air interfaces may utilize any suitable radio access technology.
  • the delay for a packet to travel between node 210a and node 210b is 1 ms and the delays between node 210b and node 210c, between node 210c and node 210d, and between node 210d and node 210e are all 500 μs.
  • the control/management plane 212 can notify each node of the number of remaining nodes, or hops, towards each possible destination and the predicted non-queuing propagation latency towards each destination, calculated by adding up the non-queuing propagation latency of all hops from this node to the destination. As described in more detail below, based upon this information and the amount of delay that the packet has experienced so far, the node can determine a local latency budget for the packet in the node.
  • the local latency budget of 210d becomes: (SLO - latency-in-packet - path-delay-to-destination) / (number of remaining hops), where:
  • latency-in-packet corresponds to the cumulative amount of delay or latency already experienced by the packet since leaving its source
  • path- delay-to-destination is the expected amount of fixed transmission delay before the packet reaches its destination node.
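  • As an illustration only, this equal-share budget computation can be sketched as follows (a minimal sketch; the function and parameter names are assumptions, not taken from the disclosure):

    def local_latency_budget(slo_max, latency_in_packet, path_delay_to_destination, remaining_hops):
        # Headroom left in the SLO after subtracting the delay accumulated so
        # far and the fixed (non-queuing) delay still ahead of the packet.
        headroom = slo_max - latency_in_packet - path_delay_to_destination
        # Share the headroom equally among the remaining hops.
        return headroom / remaining_hops

    # Example: a 3.0 ms SLO, 1.5 ms accumulated so far, 0.5 ms of fixed path
    # delay remaining, and 2 remaining hops gives a 0.5 ms local budget.
    print(local_latency_budget(3.0e-3, 1.5e-3, 0.5e-3, 2))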
  • the packet can also carry information on the accumulated delay metadata, such as the amount of accumulated delay or latency experienced by the packet so far since it left the sending node. (In much of the following, latency and delay are used interchangeably in this context.)
  • the node assesses the delay and can also update the delay before passing the packet on to the next node.
  • the accumulated delay metadata can be a timestamp: the packet carries its sending time (as a timestamp), and the accumulated delay is assessed as the difference between the current time and the packet's sending time, obtained by subtracting the sent time from the received time. This embodiment uses network time synchronization, but can keep the packet contents unaltered.
  • the packet can be changed to update the cumulative latency, where this approach does not require the synchronization of time across the different nodes.
  • the node can instead update the remaining SLO.
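  • A minimal sketch of these three bookkeeping variants, assuming seconds as the time unit (all names are illustrative, not from the disclosure):

    import time

    def delay_from_timestamp(sent_timestamp):
        # Variant 1: the packet carries its sending time; each node derives
        # the accumulated delay by subtraction. Requires synchronized clocks,
        # but leaves the packet contents unaltered.
        return time.time() - sent_timestamp

    def update_cumulative_delay(edelay, local_delay, link_delay):
        # Variant 2: each node rewrites the cumulative-latency field; no time
        # synchronization across nodes is needed.
        return edelay + local_delay + link_delay

    def update_remaining_slo(remaining_slo, local_delay, link_delay):
        # Variant 3: instead of accumulating delay, decrement the remaining
        # SLO carried by the packet.
        return remaining_slo - (local_delay + link_delay)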
  • the node can determine the delay budget for the packet. As illustrated in FIG. 2, where the information on the number of remaining hops and predicted propagation delay is provided by the control/management plane 212, a path predictor can also supply path propagation and delay predictions, such as the number of hops to the destination and the fixed delays between these hops. With respect to receiving the number of hops and fixed delays, a node can access this information in various ways in order to make better latency based forwarding decisions. Depending on the embodiment, this information can be stored/maintained on the node itself.
  • this information can be configured/provisioned using a control or management application. In other embodiments, it can be communicated using a control plane protocol such as an IGP (interior gateway protocol). In general, this information can be communicated/received separately from the packet itself, and can involve a different mechanism. In one set of embodiments, a node can receive, or be aware of, this information by way of a forwarding information base (FIB), into which it is disseminated using a separate control plane mechanism (IGP, provisioning at the node via a controller, etc.).
  • FIB forwarding information base
  • the assessment of 307 can be based on the inputs of the remaining path information: the number of nodes remaining; the fixed delay for the remainder of the path, which can be computed from the number of remaining links with propagation delay and the possible number of nodes with fixed minimum processing delay; and information precomputed by the control/management plane 212 and disseminated along with the path information.
  • the output of the assessment of 307 is the delay budget.
  • the fixed latencies and the current delay can be subtracted from the SLO, which can then be divided by the number of remaining nodes, as described above with respect to the local delay budgets of the nodes 210b, 210c, and 210d in FIG. 2.
  • the target latency or delay at the node can be based on the midpoint between lower bound and upper bound as determined from the packet’s SLO at 303.
  • the assessment of delay budgets is described in more detail below.
  • the node can take a quality of service (QoS) action. For example, the node can maintain one or more queues in which it places packets ready for forwarding and then select a queue and a placement within the queue whose expected delay is the closest match for the packet’s target delay budget (e.g., the first queue whose delay is less than or equal to the target delay).
  • the node can assess a queue’s latency as a function of queue occupancy, as well as other options, such as through the use of defined delay queues, for example. If the target delay budget is negative, a packet will miss its SLO.
  • the node could: discard or drop the packet; mark the packet as late, so that nodes downstream no longer need to prioritize the packet; or record an SLO violation in a statelet (e.g., update a counter) of the packet.
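  • One plausible reading of this queue-selection step and of the SLO-miss case, sketched under the assumption that candidate queues are ordered from longest to shortest expected delay so that the first fit is the closest match (names are assumptions):

    def select_queue_or_action(queues, target_budget):
        """queues: list of (queue_id, expected_delay) pairs sorted by
        expected_delay in descending order. Returns a queue_id, or None to
        signal an SLO miss (drop, mark late, or record a violation)."""
        if target_budget < 0:
            return None  # negative budget: the packet will miss its SLO
        for queue_id, expected_delay in queues:
            if expected_delay <= target_budget:
                return queue_id  # closest expected delay within the budget
        return None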
  • the QoS action could include speeding up or slowing down a packet, or forwarding along a slower vs a faster path.
  • the packet is forwarded on to the next node of its path. For example, after being entered into a queue based on its delay budget at 308, the packet would work its way up the queue until it is transmitted over the network.
  • FIG. 4 is a schematic diagram illustrating exemplary details of a node 400, such as a router, switch, server or other network device, according to an embodiment.
  • the node 400 can correspond to one of the nodes 210a, 210b, 210c, 210d, or 210e of FIG. 2.
  • the router or other network node 400 can be configured to implement or support embodiments of the present technology disclosed herein.
  • the node 400 may comprise a number of receiving input/output (I/O) ports 410, a receiver 412 for receiving packets, a number of transmitting I/O ports 430 and a transmitter 432 for forwarding packets. Although shown separated into an input section and an output section in FIG. 4, these will often be I/O ports 410 and 430 that are used for both downstream and upstream transfers, and the receiver 412 and transmitter 432 will be transceivers.
  • I/O ports 410, receiver 412, I/O ports 430, and transmitter 432 can be collectively referred to as a network interface that is configured to receive and transmit packets over a network.
  • the node 400 can also include a processor 420 that can be formed of one or more processing circuits and a memory or storage section 422.
  • the storage 422 can be variously embodied based on available memory technologies and in this embodiment is shown to have a cache 424, which could be formed from a volatile RAM memory such as SRAM or DRAM, and long-term storage 426, which can be formed of non-volatile memory such as flash NAND memory or other memory technologies.
  • Storage 422 can be used for storing both data and instructions for implementing the packet forwarding techniques described here.
  • the processor(s) 420 can be configured to implement embodiments of the present technology described below.
  • the memory 422 stores computer readable instructions that are executed by the processor(s) 420 to implement embodiments of the present technology. It would also be possible for embodiments of the present technology described below to be implemented, at least partially, using hardware logic components, such as, but not limited to, Field-programmable Gate Arrays (FPGAs), Application-specific Integrated Circuits (ASICs), Application-specific Standard Products (ASSPs), System-on-a-chip systems (SOCs), Complex Programmable Logic Devices (CPLDs), special purpose computers, etc.
  • FPGAs Field- programmable Gate Arrays
  • ASICs Application-specific Integrated Circuits
  • ASSPs Application-specific Standard Products
  • SOCs System-on-a-chip systems
  • CPLDs Complex Programmable Logic Devices
  • FIG. 5 provides an example of a network in which latency based forwarding of packets can be implemented. More specifically, FIG. 5 illustrates an aggregation ring, which is a common metropolitan broadband/mobile-access topology.
  • In the example of FIG. 5, each of six ring routers (RA 501, RB 503, RC 505, RD 507, RE 509, RF 511) is connected to 100 access nodes, spoke routers (Ra0 501-0 to Ra99 501-99, Rb0 503-0 to Rb99 503-99, Rc0 505-0 to Rc99 505-99, Rd0 507-0 to Rd99 507-99, Re0 509-0 to Re99 509-99, Rf0 511-0 to Rf99 511-99).
  • a packet is sent from the sending node Ra0 501-0 to the receiving node Re0 509-0, traversing the ring nodes of routers RA 501, RB 503, RC 505, RD 507, and RE 509.
  • the latency based forwarding introduced here allows for a packet with a lower delay SLO to be de-queued earlier than packets with a higher delay SLO.
  • SLO delay based approach
  • each hop and queuing of prior hops reduces the acceptable per-hop delay
  • packets which have to cross more ring nodes would experience less per-hop delay in the nodes than those packets with the same SLO but travelling fewer hops.
  • the latency based SLO therefore can provide fairer/more-equal delay across rings independently of how far apart in the ring a sender and receiver are located. For example, the minimum-delay can be set to be larger than the worst-case "across-ring" delay, which results in the same delivery latency independent of path in the absence of congestion.
  • dequeuing prioritization only considers lb (the lower bound of the SLO).
  • dequeuing will also prioritize packets under observation of ub (the upper bound of the SLO), prioritizing packets with different path-lengths and SLOs when under congestion.
  • FIG. 6 is a high level overview for an embodiment of end-to-end latency based forwarding (LBF) 600.
  • latency based forwarding provides a machinery for an end-to-end network consisting of a sending network device, or sender, RS 601, a receiving network device, or receiver, RR 609, and one or more intermediate or forwarding nodes.
  • three intermediate nodes RA 603, RB 605 and RC 607 are shown.
  • the fixed latency for transmission between a pair of nodes is 1 ms, and each of the intermediate nodes adds the delay of its LBF queue latency.
  • the locally incurred latency at the node is added to the total delay incurred so far, so that it can be used by the subsequent node as one of the inputs to make its decision.
  • the total end-to-end latency between the sending router or node RS 601 and the receiving router or node RR 609 is managed.
  • a packet 621 includes a destination header that indicates its destination, RR 609 in this example. This destination header is used by each forwarding node RA 603, RB 605, RC 607 to steer packet 621 to the next forwarding node or final receiver RR 609.
  • a forwarding header indicates the parameters edelay, lmin and lmax.
  • the edelay parameter allows for each forwarding node (RA 603, RB 605, RC 607) to determine the difference in time (latency) between when the node receives the packet and when the sender RS 601 has sent the packet.
  • the edelay parameter is the latency, or delay, encountered so far. It is updated at each node, which adds the latency locally incurred so far plus the known outgoing link latency to the next hop.
  • a sender timestamp is added once by the sender RS 601 , where subsequent nodes compute the latency (edelay) incurred so far by subtracting the sending timestamp from the current time.
  • the forwarding nodes RA 603, RB 605, RC 607 do not need to update the field, but this method does require a time-synchronized network. In other alternate embodiments, a desired time of arrival could also be indicated.
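  • These header fields and the non-timestamp update rule can be sketched against an assumed layout (the disclosure does not fix an encoding; field and function names are illustrative):

    from dataclasses import dataclass

    @dataclass
    class LBFHeader:
        destination: str  # network header: used to steer the packet
        edelay: float     # delay accumulated since the sender transmitted (s)
        lmin: float       # end-to-end minimum latency of the SLO (s)
        lmax: float       # end-to-end maximum latency of the SLO (s)

    def update_edelay(hdr, local_latency, link_latency):
        # Non-timestamp variant: each hop adds the latency it locally
        # incurred plus the known outgoing-link latency to the next hop.
        hdr.edelay += local_latency + link_latency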
  • the parameters lmin and lmax are respectively an end-to-end minimum and maximum latency for the Service Level Objectives (SLO).
  • SLO Service Level Objectives
  • the latency with which the final receiving node RR 609 receives the packet is meant to be between the minimum and maximum latency values lmin and lmax.
  • FIG. 7 considers the node behavior for a pair of nodes from FIG. 6 in more detail.
  • FIG. 7 illustrates two of the latency based forwarding nodes of FIG. 6, such as RA 603 and RB 605, and the resource manager 611.
  • Each of the nodes RA 603 and RB 605 includes a control plane 711, 712 (such as based on an interior gateway protocol, IGP), a forwarding plane 731, 732 and a latency based forwarding protocol queue or queues 741, 742 in which the packets are placed for the next hop.
  • IGP interior gateway protocol
  • embodiments of latency based forwarding can be very generic in describing how forwarding nodes RA 603, RB 605, RC 607 can achieve this forwarding goal.
  • a centralized resource manager 611 can provide control/policy/data to the forwarding nodes RA 603, RB 605, RC 607, and/or a distributed mechanism can be used.
  • the number of hops from the current node to the destination and/or the minimal latency to the destination can be accessed, such as by being communicated by a "control plane" 711, 712 (e.g., a protocol such as IGP or provisioned through a controller, for example).
  • this information can be added to a forwarding information database, or FIB, along with other information such as the next hop.
  • a forwarding plane 731, 732 can be used to help steer the packet 621 on every forwarding node to the next-hop according to the packet's destination parameter.
  • with the LBF queue for the next hop 741, 742, the packets will have updated edelay values 743, 744 that are provided to the forwarding plane of the next LBF node.
  • a packet can be entered into the queue based on its delay budget. If the edelay value of a packet is over the maximum latency, the packet can be discarded.
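  • A minimal sketch of this admission step, assuming a single list kept ordered by delay budget (names and structure are assumptions):

    def admit(edelay, lmax, delay_budget, lbf_queue):
        # A packet whose accumulated delay already exceeds the end-to-end
        # maximum latency can no longer meet its SLO and is discarded here.
        if edelay > lmax:
            return False
        # Otherwise enter it into the queue, kept ordered by delay budget.
        lbf_queue.append((delay_budget, edelay))
        lbf_queue.sort(key=lambda entry: entry[0])
        return True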
  • the LBF queue for the next hop 741, 742 can be one or multiple queues and, for embodiments with multiple queues, the queues can be ranked or un-ranked.
  • the latency based forwarding machinery described with respect to FIG. 7 can be broadly implemented and is sufficient to build vendor proprietary end-to-end networks with latency based forwarding. However, it may be insufficient to build interoperable implementations of the nodes, such as RA 603 and RB 605 of FIG. 7, if the two forwarding nodes are from different vendors and do not agree on a common policy. It can also be insufficient to allow a third-party resource manager 611 to calculate the amount of resources required by latency based forwarding traffic and the delay bounds practically experienced by traffic flows based on the load of the traffic.
  • the following embodiments introduce latency based forwarding destination policies. These policies can enable parallel use of multiple end-to-end LBF policies in multi-vendor or standardized environments.
  • the destination policies can also enable accurate calculation and prediction of latencies and loads by external controller/admission systems.
  • the embodiments presented below introduce a “policy” parameter or metadata field into a packet’s LBF packet header.
  • the process can use per-destination egress queuing policy parameters ("LBF_dest_parameters") that can be attached to a destination forwarding database (forwarding information base, or FIB).
  • a published or external API can be used to populate the LBF_dest_parameters.
  • the embodiments can introduce a function (“LBF_queuing_policy”) to map from LBF_dest_parameters to enqueuing LBF queue parameters, which can be designed to exploit a programmable Forwarding Plane Engine (FPE).
  • FPE programmable Forwarding Plane Engine
  • FIG. 8 illustrates the use of destination policies for latency based forwarding of packets and an embodiment for node behavior and node components.
  • FIG. 8 again shows a pair of LBF nodes RA 803 and RB 805 and a resource manager 811.
  • Each of LBF nodes RA 803, RB 805 again includes a control plane 811, 812, a forwarding plane 831, 832, the LBF queue for next hop 841, 842, and also represents the updating of the packet's delays at 843, 844.
  • each of LBF nodes RA 803 and RB 805 now also includes a destination forwarding database, or FIB (forwarding information base), 833, 834 and also schematically represents the enqueueing of packets at 851, 852 and the dequeuing of packets at 853, 854.
  • FIB forwarding information base
  • a packet 821 again includes a network header, indicating a destination for the packet, and an LBF header indicating the parameters edelay, lmin, and lmax. Additionally, the LBF header of packet 821 now also includes a parameter indicating an LBF destination policy, lbf_policy.
  • an LBF destination policy can include one or more of LBF destination parameters, as illustrated at 823, an LBF mapping policy, as illustrated at 825, LBF queueing parameters, and an LBF queueing policy.
  • the elements of the embodiment of FIG. 8 added relative to the embodiment of FIG. 7 can be used to implement the destination policies for the latency based forwarding of packets.
  • entities independent of the forwarding nodes, such as a centralized Resource Manager 811, can calculate for each node and each required destination the policy- and destination-specific LBF_dest_params 823 and send them to each node, where they are remembered for later use by the forwarding plane 831, 832, commonly in a component usually called the FIB 833, 834.
  • a distributed control plane protocol implemented in the control plane 811, 812 on every LBF forwarding node can perform the calculation of the LBF_dest_params for each required destination and send the result to the FIB 833, 834.
  • the distributed control plane protocol can be a so-called SPF (Shortest Path First) protocol like OSPF (Open Shortest Path First) or ISIS (Intermediate System to Intermediate System).
  • SPF Shortest Path First
  • OSPF Open Shortest Path First
  • ISIS Intermediate System to Intermediate System
  • each LBF forwarding node is extended, in support of LBF, with the physical latency of each outgoing interface/next-hop for which LBF is to be supported.
  • this can be refined with fixed processing delays of this node, set automatically or through configuration.
  • the control plane When performing the SPF calculation, in addition to calculating the shortest path/metric to each destination, the control plane also adds up the physical latency of each hop for the path to the destination. This sum becomes the “todelay” LBF_dest_parameter. The total number of hops on the shortest path to the destination becomes the“tohop” LBF_dest_parameter.
  • Equal Share Lmin Destination (ESLD) LBF policy as described below can be supported.
  • This embodiment is a dependency of the overall system and can enable parallel use of multiple end-to-end LBF policies in multi-vendor or standardized environments.
  • when a packet 821 is received by an LBF forwarding node such as RA 803, it is processed by a component called here the forwarding plane 831.
  • the forwarding plane 831 can use the destination field from the packet 821 to perform, from the FIB 833, the next_hop lookup and the newly introduced LBF_dest_params lookup for the destination 823.
  • the forwarding plane 831 then performs the calculation illustrated at 825 to calculate the LBF_queuing_params, where the formula for this function depends on the policy.
  • the forwarding plane 831 then enqueues the packet 821, together with the LBF_queuing_params and the lbf_policy represented at 851, into the LBF queue for the next hop 841.
  • the mechanisms for the control plane 811, 812 and forwarding plane 831, 832 described above enable support of multiple different LBF policies simultaneously.
  • the queuing policy for a specific packet is determined by the lbf_policy of packet 821, which is an identifier for the policy.
  • Any destination LBF policy can be constituted of: the control plane mechanisms necessary to derive the LBF_destination_params; the algorithm 825 to calculate the LBF_queuing_params from the LBF_destination_params and the packet LBF parameters; and the behavior of the LBF queue, defined by the behavior for dequeuing 853, 854.
  • the LBF parameters 823 used during destination lookup by the forwarding plane 831 into the FIB 833 are as follows:
  • the LBF queuing parameters 825 that are attached to the packet when sending it to the LBF queue 841 can be as follows:
  • tqmin and tqmax are respectively a minimum and a maximum queueing time.
  • tqmin = max(tnow + (lmin - edelay - todelay) / tohops, tnow)
  • tqmax = tnow + (lmax - edelay - todelay) / tohops
  • q_policy queueing policy
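  • Rendered directly from the two formulas above (a sketch; tnow is supplied by the caller and tohops is assumed to be at least 1):

    def esld_queuing_params(tnow, edelay, lmin, lmax, todelay, tohops):
        # Equal Share Lmin Destination: divide the SLO headroom remaining
        # after fixed path delay equally across the remaining hops.
        tqmin = max(tnow + (lmin - edelay - todelay) / tohops, tnow)
        tqmax = tnow + (lmax - edelay - todelay) / tohops
        return tqmin, tqmax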
  • FIG. 10 is a schematic diagram of multiple nodes connected to serve as inputs to a single node, which can lead to burst accumulation of packets at the receiving node.
  • a node such as router RA 1007
  • a traffic flow 1011 from 1001-1 and the latency that the queue 1009 of the outgoing interface may introduce to it.
  • LBF destination parameters are the same as in ESLD of LBF: todelay and tohops;
  • the node maintains, at 1411, one or more queues as represented at 841, 1341, and 450, for example.
  • a queuing rank is determined for a packet (as illustrated in 841 and, for rank2, in 1341) at 1413.
  • a queue is determined for the packet at 1415, where in the example illustrated in FIG. 13 this can be based on rank1.
  • FIG. 16 illustrates the use of a latency based forwarding embodiment using dynamic priority destination to more “fairly” forward packets with different static latencies.
  • in DPD, the priority of a packet to be dequeued is dynamically calculated based on the percentage of time into its dequeuing window [tqmin ... tqmax]. As shown in FIG. 16, each packet's dequeuing priority starts at 0 at tqmin and ends at 1 at tqmax.
  • the dynamic priority destination policy embodiment described here has no lqprio parameter, but rather uses an internally handled value in its dequeuing policy.
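  • A minimal sketch of the dynamic priority calculation described above (treatment of times outside the window is an assumption):

    def dpd_priority(tnow, tqmin, tqmax):
        # Fraction of the dequeuing window [tqmin ... tqmax] already elapsed:
        # 0.0 at tqmin, 1.0 at tqmax; values above 1.0 would indicate an
        # overdue packet, values below 0.0 one not yet eligible.
        return (tnow - tqmin) / (tqmax - tqmin)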
  • An embodiment for a DPD dequeuing policy can include:
  • Point 3 allows latency to be more fairly managed, particularly when traffic on the network is less bursty.
  • the SPD and DPD policies are complementary. As noted, the arrangements of FIG. 13 and FIG. 17 share a number of components. This can allow a network device to be able to implement both SPD and DPD for the different traffic classes with minimal overhead to support both.
  • FIG. 18 is a flowchart of one embodiment of the operation of latency based forwarding that can include dynamic priority destination policies.
  • a router or other network node receives LBF packets, such as 821 or 1221 , that have both a network header and an LBF header.
  • LBF packets such as 821 or 1221
  • a FIB 833, 1223 can determine the number of hops and estimate the fixed transfer times from the database 939, 1239.
  • the node receives the number of hops for each of the packets in 1803 and the estimates of fixed transfer times for each of the packets 1805. (Although typically both of these pieces of information for a packet are received from the FIB 833, 1223 together, these are separated into 1803 and 1805 for purposes of this discussion.)
  • the node can update the accumulated delay that each of the packets has experienced so far since it has left the sender and, at 1809, a minimum delay is determined for each of the packets.
  • the minimum delay, tqmin, can be determined as described in the embodiments presented above, where, depending on the embodiment, a maximum delay tqmax can also be established. Although shown in a particular order in FIG. 18 for purposes of this discussion, 1803, 1805, 1807, and 1809 can be performed in differing orders and even concurrently depending on the embodiment.
  • the node maintains multiple ranked queues as represented at 1741, where the queues themselves are ranked based on the static enqueueing priority eprio (rank1) and the packets within each of the queues are ranked based on tqmin (rank2).
  • to determine the queue for each of the packets, at 1813 the packets' static enqueueing priorities (eprio) are determined.
  • a queuing rank is determined for each of the packets (as illustrated by rank2 in 1741) at 1815.
  • a queue is determined for each of the packets based on the parameters (eprio, or rank1) at 1817. Once the queue and place within the queue are determined for the packets, each of the packets is entered into the determined queue and location at 1819.
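  • A sketch of the enqueueing of 1813 through 1819, assuming the ranked-queue structure is a mapping from eprio (rank1) to a list kept sorted by tqmin (rank2); all names are illustrative:

    import bisect

    def enqueue(ranked_queues, eprio, tqmin, packet):
        # Queue choice by static priority eprio (rank1); position within the
        # chosen queue by tqmin (rank2), FIFO among equal tqmin values.
        queue = ranked_queues.setdefault(eprio, [])
        keys = [entry[0] for entry in queue]
        pos = bisect.bisect_right(keys, tqmin)
        queue.insert(pos, (tqmin, packet))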
  • processor readable storage devices can include computer readable media such as volatile and non-volatile media, removable and non removable media.
  • computer readable media may comprise computer readable storage media and communication media.
  • Computer readable storage media may be implemented in any method or technology for storage of information such as computer readable instructions, data structures, program modules or other data.
  • Communication media typically embodies computer readable instructions, data structures, program modules or other data in a propagated, modulated or transitory data signal such as a carrier wave or other transport mechanism and includes any information delivery media.
  • modulated data signal means a signal that has one or more of its characteristics set or changed in such a manner as to encode information in the signal.
  • communication media includes wired media such as a wired network or direct-wired connection, and wireless media such as RF and other wireless media. Combinations of any of the above are also included within the scope of computer readable media.
  • some or all of the software can be replaced by dedicated hardware logic components.
  • illustrative types of hardware logic components include Field-programmable Gate Arrays (FPGAs), Application-specific Integrated Circuits (ASICs), Application-specific Standard Products (ASSPs), System-on-a-chip systems (SOCs), Complex Programmable Logic Devices (CPLDs), special purpose computers, etc.
  • FPGAs Field-programmable Gate Arrays
  • ASICs Application-specific Integrated Circuits
  • ASSPs Application-specific Standard Products
  • SOCs System-on-a-chip systems
  • CPLDs Complex Programmable Logic Devices
  • special purpose computers etc.
  • software stored on a storage device
  • the one or more processors can be in communication with one or more computer readable media/ storage devices, peripherals and/or communication interfaces.
  • connection may be a direct connection or an indirect connection (e.g., via one or more other parts).
  • an element when an element is referred to as being connected or coupled to another element, the element may be directly connected to the other element or indirectly connected to the other element via intervening elements.
  • an element When an element is referred to as being directly connected to another element, then there are no intervening elements between the element and the other element.
  • Two devices are "in communication" if they are directly or indirectly connected so that they can communicate electronic signals between them.
  • the term "based on" may be read as "based at least in part on."

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

Latency Based Forwarding (LBF) techniques are presented for the management of the latencies, or delays, of packets forwarded over nodes, such as routers, of a network. In addition to a network header indicating a destination node for receiving the packet, a packet also includes an LBF header indicating the packet's accumulated delay since leaving the sender and maximum and minimum latencies for the entire journey from the sender to the receiver. When a packet is received at a node, based on the accumulated delay, the maximum latency, and the minimum latency, the node places the packet in a forwarding queue to manage the delays between the sender and the receiver. The LBF header can also indicate a policy for the forwarding node to use when determining the enqueueing of the packet. A dynamic queueing policy can increase the fairness with which packets are forwarded.

Description

LATENCY BASED FORWARDING OF PACKETS
WITH DYNAMIC PRIORITY DESTINATION POLICIES
Inventors:
Toerless Eckert
Alexander Clemm
RELATED APPLICATION
[0001] This application is related to commonly invented and commonly assigned U.S. Provisional Patent Application No. 62/820,350, filed March 19, 2019, which is hereby incorporated by reference into the present application.
TECHNICAL FIELD
[0002] The disclosure generally relates to communication networks, and more particularly, to the transmission of packets over networks.
BACKGROUND
[0003] Network packets are formatted units of data carried by packet-mode computer networks. High-precision networks demand high-precision service-level guarantees when delivering packets from a sending node on the network, such as a server, router or switch, to a receiving node on the network. Traditionally, networks are structured to deliver packets from the sending node to the receiving node as quickly as possible; however, this does not resolve priorities when packets have to compete to be forwarded, and there are circumstances where it is not the most effective technique for the transfer of packets.
SUMMARY
[0004] According to one aspect of the present disclosure, a node for transferring packets over a network includes a network interface configured to receive and forward a plurality of packets over the network, one or more queues configured to store packets to forward over the network, and one or more processors coupled to the one or more queues and the network interface. The one or more processors are configured to: receive, from the network interface, a plurality of packets each including a network header, indicating a network destination for the packet, and a forwarding header, the forwarding header indicating an accumulated delay experienced by the packet since being transmitted by a network sender and a maximum latency for the transfer of the packet from the network sender to the network destination; for each packet, update the accumulated delay experienced by the packet that is indicated by the forwarding header; enter each of the packets with the updated indicated accumulated delay experienced by the packet into one of the queues; for each packet, determine a maximum delay at the node for the packet based on the maximum latency and the updated indicated accumulated delay experienced by the packet; for each packet, determine a dynamic priority for the packet based at least on the packet’s maximum delay and an amount of time since the packet was received at the node; and sequentially transmit the packets over the interface from the one or more queues in an order based on the dynamic priorities of the packets.
[0005] Optionally, in the preceding aspect, the one or more processors are further configured to: for each packet, determine a static priority from the packet's maximum delay and the packet's updated accumulated delay; rank the plurality of queues based upon the static priority of packets enqueued therein; for each packet, determine a queueing rank from the maximum delay; and enter each of the packets into one of the ranked queues. Each of the packets is entered into one of the ranked queues by: determining, based on the packet's static priority, into which of the ranked queues to enter the packet; and entering the packet into the determined one of the ranked queues based on the determined queueing rank for the packet. Sequentially transmitting the packets by the node in the order based on the dynamic priorities of the packets includes: performing an initial selection of packets from the ranked queues; and ordering packets from the initial selection based upon the dynamic priorities of the packets from the initial selection.
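As an illustrative sketch only (the aspect above does not prescribe an implementation, and all names here are assumptions), the two-stage dequeue can be read as follows: take the head of each statically ranked queue as the initial selection, then order those candidates by dynamic priority.

    def next_packet(ranked_queues, dynamic_priority, tnow):
        """ranked_queues: iterable of queues in static-priority order;
        dynamic_priority: callable (packet, tnow) -> float. Returns the
        winning head-of-queue packet (the caller removes it from its queue
        before transmission), or None if every queue is empty."""
        candidates = [queue[0] for queue in ranked_queues if queue]
        if not candidates:
            return None
        return max(candidates, key=lambda pkt: dynamic_priority(pkt, tnow))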
[0006] Optionally, in the preceding aspect, for each of the packets, the forwarding header further indicates a minimum latency for the transfer of the packet from the network sender to the network destination. For each of the packets, the one or more processors are further configured to: determine a minimum delay for the packet based on the minimum latency and the updated indicated accumulated delay experienced by the packet; further determine the dynamic priority from the minimum delay; further determine the static priority from the minimum delay; and further determine the queueing rank from the minimum delay.
[0007] Optionally, in any of the preceding aspects, for each of the packets, the forwarding header further indicates a minimum latency for the transfer of the packet from the network sender to the network destination. For each of the packets, the one or more processors are further configured to: determine a minimum delay for the packet based on the minimum latency and the updated indicated accumulated delay experienced by the packet; and further determine the dynamic priority from the minimum delay.
[0008] Optionally, in the preceding aspect, for each of the packets, the one or more processors are further configured to: determine whether the minimum latency exceeds the updated indicated accumulated delay experienced by the packet; and discard the packet in response to the packet’s minimum latency exceeding the updated indicated accumulated delay experienced by the packet.
[0009] Optionally, in any of the preceding aspects, the one or more processors are further configured to: receive a number of hops from the node to the network destination, and wherein the maximum delay for each of the packets is further determined based on the number of hops.
[0010] Optionally, in any of the preceding aspects, the one or more processors are further configured to: receive an estimated amount of time for fixed transfer times between the node and the network destination, and wherein the maximum delay for each of the packets is further determined based on the estimated amount of time for fixed transfer times between the node and the network destination.
[0011] Optionally, in any of the preceding aspects, the node is a router.
[0012] Optionally, in any of the preceding aspects, the node is a networking switch.
[0013] Optionally, in any of the preceding aspects, the node is a server.
[0014] According to a second set of aspects of the present disclosure, a method of transferring packets over a network includes: receiving, at a node, a plurality of packets each including a network header, indicating a network destination for the packet, and a forwarding header, the forwarding header indicating an accumulated delay experienced by the packet since being transmitted by a network sender and a maximum latency for the transfer of the packet from the network sender to the network destination; updating, by the node, for each of the packets the accumulated delay experienced by the packet that is indicated by the forwarding header; determining, by the node, for each of the packets a maximum delay at the node for the packet based on the maximum latency and the updated indicated accumulated delay experienced by the packet; determining, by the node, for each of the packets a dynamic priority for the packet from the packet's maximum delay and an amount of time since the packet was received at the node; and sequentially transmitting the packets by the node in an order based on the dynamic priorities of the packets.
[0015] Optionally, in the preceding aspect, the method also includes: determining, by the node, for each of the packets a static priority for the packet based at least on the packet's maximum delay and the packet's updated accumulated delay; determining, by the node, for each of the packets a queueing rank for the packet based at least on the maximum delay; maintaining, by the node, a plurality of ranked queues of packets for transmission from the node, the plurality of ranked queues being ranked based upon the static priority of packets enqueued therein; and entering, by the node, each of the packets into one of the ranked queues. Each of the packets is entered into one of the ranked queues by: determining, based on the packet's static priority, into which of the ranked queues to enter the packet; and entering the packet into the determined one of the ranked queues based on the determined queueing rank for the packet. Sequentially transmitting the packets by the node in the order based on the dynamic priorities of the packets includes: performing an initial selection of packets from the ranked queues; and ordering packets from the initial selection based upon the dynamic priorities of the packets from the initial selection.
[0016] Optionally, in the preceding aspect, for each of the packets, the forwarding header further indicates a minimum latency for the transfer of the packet from the network sender to the network destination, the method further comprising for each of the packets: determining, by the node, a minimum delay at the node for the packet based on the minimum latency and the updated indicated accumulated delay experienced by the packet; further determining the dynamic priority from the minimum delay; further determining the static priority from the minimum delay; and further determining the queueing rank from the minimum delay.
[0017] Optionally, in any of the preceding aspects for the method of the second set of aspects, for each of the packets, the forwarding header further indicates a minimum latency for the transfer of the packet from the network sender to the network destination, the method further comprising for each of the packets: determining, by the node, a minimum delay at the node for the packet based on the minimum latency and the updated indicated accumulated delay experienced by the packet; and further determining the dynamic priority from the minimum delay.
[0018] Optionally, in the preceding aspect, the method further includes: determining, for each of the packets, whether the minimum latency exceeds the updated indicated accumulated delay experienced by the packet; and in response to the packet’s minimum latency exceeding the updated indicated accumulated delay experienced by the packet, discarding the packet.
[0019] Optionally, in any of the preceding aspects for the method of the second set of aspects, the method further includes: receiving, at the node, a number of hops from the node to the network destination; and wherein determining the maximum delay for each of the packets is further based on the number of hops.
[0020] Optionally, in any of the preceding aspects for the method of the second set of aspects, the method further includes: receiving, at the node, an estimated amount of time for fixed transfer times between the node and the network destination; and wherein determining the maximum delay for each of the packets is further based on the estimated amount of time for fixed transfer times between the node and the network destination.
[0021] According to a further set of aspects of the present disclosure, a system for transmitting packets from a sending network device to a receiving network device includes one or more nodes connectable in series to transfer a plurality of packets from the sending network device to the receiving network device. Each of the nodes comprises: a network interface configured to receive and forward the packets over the network, each of the packets including a network header, indicating the receiving network device, and a forwarding header, indicating an accumulated delay experienced by the packet since being transmitted by the sending network device and a maximum latency for the transfer of the packet from the sending network device to the receiving network device; one or more queues configured to store packets to forward over the network; and one or more processors coupled to the one or more queues and the network interface. The one or more processors are configured to: receive the plurality of packets from the network interface; for each packet, update the accumulated delay experienced by the packet that is indicated by the forwarding header; enter each of the packets with the updated indicated accumulated delay experienced by the packet into one of the queues; for each packet, determine a maximum delay at the node for the packet based on the maximum latency and the updated indicated accumulated delay experienced by the packet; for each packet, determine a dynamic priority for the packet based at least on the packet's maximum delay and an amount of time since the packet was received at the node; and sequentially transmit the packets over the interface from the one or more queues in an order based on the dynamic priorities of the packets.
[0022] Optionally, in the preceding aspect, for each of the nodes, the one or more processors are further configured to: for each packet, determine a static priority from the packet's maximum delay and the packet's updated accumulated delay; rank the plurality of queues based upon the static priority of packets enqueued therein; for each packet, determine a queueing rank from the maximum delay; and enter each of the packets into one of the ranked queues by determining, based on the packet's static priority, into which of the ranked queues to enter the packet and entering the packet into the determined one of the ranked queues based on the determined queueing rank for the packet. Sequentially transmitting the packets by the node in the order based on the dynamic priorities of the packets includes: performing an initial selection of packets from the ranked queues; and ordering packets from the initial selection based upon the dynamic priorities of the packets from the initial selection.
[0023] Optionally, in the preceding aspect, for each of the packets, the forwarding header further indicates a minimum latency for the transfer of the packet from the sending network device to the receiving network device, and wherein, for each of the packets, the one or more processors of each of the nodes are further configured to: determine a minimum delay for the packet based on the minimum latency and the updated indicated accumulated delay experienced by the packet; further determine the dynamic priority from the minimum delay; further determine the static priority from the minimum delay; and further determine the queueing rank from the minimum delay.
[0024] Optionally, in any of the preceding further aspects for a system, for each of the packets, the forwarding header further indicates a minimum latency for the transfer of the packet from the sending network device to the receiving network device, and wherein, for each of the packets, the one or more processors of each of the nodes are further configured to: determine a minimum delay for the packet based on the minimum latency and the updated indicated accumulated delay experienced by the packet; and further determine the dynamic priority from the minimum delay.
[0025] Optionally, in the preceding aspect, for each of the packets, the one or more processors of each of the nodes are further configured to: determine whether the minimum latency exceeds the updated indicated accumulated delay experienced by the packet; and discard the packet in response to the packet’s minimum latency exceeding the updated indicated accumulated delay experienced by the packet.
[0026] Optionally, in any of the preceding further aspects for a system, the one or more processors for each of the nodes are further configured to: receive a number of hops from the node to the receiving network device, and wherein the maximum delay for each of the packets is further determined based on the number of hops.
[0027] Optionally, in any of the preceding further aspects for a system, the one or more processors of each of the nodes are further configured to: receive an estimated amount of time for fixed transfer times between the node and the receiving network device, and wherein the maximum delay for each of the packets is further determined based on the estimated amount of time for fixed transfer times between the node and the receiving network device.
[0028] Optionally, in any of the preceding further aspects for a system, one or more of the nodes are routers.
[0029] Optionally, in any of the preceding further aspects for a system, one or more of the nodes are switches.
[0030] Optionally, in any of the preceding further aspects for a system, one or more of the nodes are servers.
[0031] This Summary is provided to introduce a selection of concepts in a simplified form that are further described below in the Detailed Description. This Summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used as an aid in determining the scope of the claimed subject matter. The claimed subject matter is not limited to implementations that solve any or all disadvantages noted in the Background.
BRIEF DESCRIPTION OF THE DRAWINGS
[0032] Aspects of the present disclosure are illustrated by way of example and are not limited by the accompanying figures for which like references indicate like elements.
[0033] FIG. 1 illustrates an exemplary communication system for communicating data.
[0034] FIG. 2 illustrates an exemplary network of a series of nodes, such as routers, which may be included in one of the networks shown in FIG. 1.
[0035] FIG. 3 is a flowchart for one embodiment of the latency based forwarding of packets as illustrated in the example of FIG. 2.
[0036] FIG. 4 is a schematic diagram illustrating exemplary details of a network device, or node, such as shown in the network of FIG. 2.
[0037] FIG. 5 provides an example of a network in which latency based forwarding (LBF) of packets can be implemented.
[0038] FIG. 6 is a high level overview for an embodiment of end-to-end latency based forwarding.
[0039] FIG. 7 considers the node behavior for a pair of nodes from FIG. 6 in more detail.
[0040] FIG. 8 illustrates the use of destination policies for latency based forwarding of packets and an embodiment for node behavior and node components.
[0041] FIG. 9 is a schematic representation of an embodiment for latency based forwarding of packets using an equal share minimum latency value destination policy.
[0042] FIG. 10 is a schematic diagram of multiple nodes connected to serve as inputs to a single node, which can lead to burst accumulation of packets at the receiving node.
[0043] FIG. 11 illustrates the forwarding of two traffic flows through an aggregation network of five nodes.
[0044] FIGs. 12 and 13 are a schematic representation of an embodiment for latency based forwarding of packets using strict priority destination policy.
[0045] FIG. 14 is a flowchart of one embodiment of the operation of latency based forwarding that can include destination policies and also strict priority destination policies.
[0046] FIG. 15 is a schematic representation of how a sequence of packets with higher static priorities can cause a low static priority packet to be displaced from being forwarded.
[0047] FIG. 16 illustrates the use of a latency based forwarding embodiment with dynamic priority destination policies to more “fairly” forward packets with different static latencies.
[0048] FIG. 17 is a schematic representation of the queueing/dequeuing machinery for a dynamic priority destination queuing policy for the next hop in a network.
[0049] FIG. 18 is a flowchart of one embodiment of the operation of latency based forwarding that can include dynamic priority destination policies.
DETAILED DESCRIPTION
[0050] The present disclosure will now be described with reference to the figures, which in general relate to methods and devices (e.g., routers) to manage latencies when transferring packets over networks. It is understood that the present embodiments of the disclosure may be implemented in many different forms and that claim scope should not be construed as being limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete and will fully convey the inventive embodiment concepts to those skilled in the art. Indeed, the disclosure is intended to cover alternatives, modifications and equivalents of these embodiments, which are included within the scope and spirit of the disclosure as defined by the appended claims. Furthermore, in the following detailed description of the present embodiments of the disclosure, numerous specific details are set forth in order to provide a thorough understanding. However, it will be clear to those of ordinary skill in the art that the present embodiments of the disclosure may be practiced without such specific details.
[0051] High-precision networks demand high-precision service-level guarantees that can be characterized through a set of Service Level Objectives (SLOs), which are performance goals for a service under certain well-defined constraints. A delay, or latency, based SLO can indicate a specific end-to-end delay, given a certain not-to-exceed packet rate or bandwidth. Examples can include an upper bound (“delay not to exceed”); a lower bound (less common, but useful in certain scenarios); and special cases such as an “in-time guarantee” (where an upper bound but not a lower bound is indicated) or an “on-time guarantee” (lower bound = upper bound). Previous approaches do not allow for the specification of quantifiable latency SLOs that are provided by the network, where any upper bound will typically indicate “low latency” without quantification, and any minimum latency or delay results from unplanned congestion at the buffers of the egress nodes, rather than being indicated. In the following, “delay” and “latency” are largely used interchangeably in terms of meaning, although in some cases these will be used to refer to differing quantities, such as when a “minimum latency” is used to indicate a lower bound of an end-to-end latency value while a “minimum delay” may be used to indicate an amount of time a packet is to spend at a particular node.
[0052] Examples of applications where in-time guarantees (where an upper bound but not a lower bound is indicated) can be of use are in Virtual Reality/Augmented Reality (VR/AR), which can have stringent limits on the maximum motion-to-photon time, such as to avoid the dizziness and reduced quality of experience that can result from longer delays and may severely reduce user acceptance. Another example is the Tactile Internet, which has stringent limits on the delay for haptic feedback, as a lack of sensation of being in control or sluggish control would make many applications infeasible. Further examples can include industrial controllers, which can have stringent limits on feedback control loops, and applications such as vehicle to everything (V2X), remote-controlled robots and drones, and similar cases.
[0053] On-time guarantees, which are stronger than in-time guarantees, can be used when application buffers cannot be assumed. On-time guarantees can provide fairness by not giving anyone an unfair advantage in multiparty applications and marketplaces, such as for trading or gaming (including those involving tactile internet). On-time guarantees can also be useful for synchronization in examples such as robot collaboration (e.g., lifting a packet by two remotely controlled robots) or high-precision measurements (e.g., remote polling at exact intervals).
[0054] The techniques presented in the following discussion provide a system that delivers packets that traverse a network in accordance with a quantified delay SLO. The SLO indicates a delay range with quantifiable lower and upper bounds that can be varied for each individual packet. Previous networking technologies do not provide this capability, but are instead typically engineered to “minimize” delay by using a range of techniques, from dimensioning links to reserving resources and performing admission control functions. These previous approaches are not engineered to hit a specific quantified delay target, and there is no networking algorithm that would hit that delay as part of a function of the network itself. Instead, the technology presented here provides the capability to do this without need for centralized coordination and control logic, but in a way that is performed “in-network”, thereby reducing controller dependence. The technology presented here further does so in a way that keeps the buffers of egress edge devices small (to reduce cost) and in a way that the SLO is adhered to for a “first packet” (and does not require connection setup/handshake).
[0055] The embodiments presented here include a network with network nodes which perform a distributed algorithm that can deliver packets in accordance with a delay SLO with quantifiable lower and upper delay bounds. The distributed algorithm processes a packet on each node as it traverses the network following a local algorithm that: measures the delay that has been incurred by the packet since it was sent by the source; determines the remaining delay budget, based on SLO, delay, and prediction of downstream delay; and speeds up or slows down the packet per an action that best fits the budget. Possible actions include matching queue delay to action, and selecting from a set of downstream paths based on expected delays or buffering. Optionally, when a packet is beyond salvaging, it may be dropped.
[0056] FIG. 1 illustrates an exemplary communication system 100 with which embodiments of the present technology can be used. The communication system 100 includes, for example, user equipment 110A, 110B, and 110C, radio access networks (RANs) 120A and 120B, a core network 130, a public switched telephone network (PSTN) 140, the Internet 150, and other networks 160. Additional or alternative networks include private and public data-packet networks, including corporate intranets. While certain numbers of these components or elements are shown in the figure, any number of these components or elements may be included in the system 100.
[0057] In one embodiment, the communication system 100 can include a wireless network, which may be a fifth generation (5G) network including at least one 5G base station which employs orthogonal frequency-division multiplexing (OFDM) and/or non-OFDM and a transmission time interval (TTI) shorter than 1 millisecond (e.g., 100 or 200 microseconds), to communicate with the communication devices. In general, a base station may also be used to refer to any of the eNB and the 5G BS (gNB). In addition, the network may further include a network server for processing information received from the communication devices via the at least one eNB or gNB.
[0058] System 100 enables multiple users to transmit and receive data and other content. The system 100 may implement one or more channel access methods, such as but not limited to code division multiple access (CDMA), time division multiple access (TDMA), frequency division multiple access (FDMA), orthogonal FDMA (OFDMA), or single-carrier FDMA (SC-FDMA).
[0059] The user equipment (UE) 110A, 110B, and 110C, which can be referred to individually as a UE 110, or collectively as the UEs 110, are configured to operate and/or communicate in the system 100. For example, a UE 110 can be configured to transmit and/or receive wireless signals or wired signals. Each UE 110 represents any suitable end user device and may include such devices (or may be referred to) as a user equipment/device, wireless transmit/receive unit (WTRU), mobile station, fixed or mobile subscriber unit, pager, cellular telephone, personal digital assistant (PDA), smartphone, laptop, computer, touchpad, wireless sensor, wearable devices, consumer electronics device, device-to-device (D2D) user equipment, machine type user equipment or user equipment capable of machine-to-machine (M2M) communication, iPads, tablets, mobile terminals, laptop embedded equipment (LEE), laptop mounted equipment (LME), USB dongles, or other non-limiting examples of user equipment or target device.
[0060] In the depicted embodiment, the RANs 120A, 120B include one or more base stations (BSs) 170A, 170B, respectively. The RANs 120A and 120B can be referred to individually as a RAN 120, or collectively as the RANs 120. Similarly, the base stations (BSs) 170A and 170B can be referred to individually as a base station (BS) 170, or collectively as the base stations (BSs) 170. Each of the BSs 170 is configured to wirelessly interface with one or more of the UEs 110 to enable access to the core network 130, the PSTN 140, the Internet 150, and/or the other networks 160. For example, the base stations (BSs) 170 may include one or more of several well-known devices, such as a base transceiver station (BTS), a Node-B (NodeB), an evolved NodeB (eNB), a next (fifth) generation (5G) NodeB (gNB), a Home NodeB, a Home eNodeB, a site controller, an access point (AP), or a wireless router, or a server, router, switch, or other processing entity with a wired or wireless network.
[0061] In one embodiment, the BS 170A forms part of the RAN 120A, which may include one or more other BSs 170, elements, and/or devices. Similarly, the BS 170B forms part of the RAN 120B, which may include one or more other BSs 170, elements, and/or devices. Each of the BSs 170 operates to transmit and/or receive wireless signals within a particular geographic region or area, sometimes referred to as a “cell.” In some embodiments, multiple-input multiple-output (MIMO) technology may be employed having multiple transceivers for each cell.
[0062] The BSs 170 communicate with one or more of the UEs 110 over one or more air interfaces (not shown) using wireless communication links. The air interfaces may utilize any suitable radio access technology.
[0063] It is contemplated that the system 100 may use multiple channel access functionality, including for example schemes in which the BSs 170 and UEs 110 are configured to implement the Long Term Evolution wireless communication standard (LTE), LTE Advanced (LTE-A), and/or LTE Multimedia Broadcast Multicast Service (MBMS). In other embodiments, the base stations 170 and user equipment 110A-110C are configured to implement UMTS, HSPA, or HSPA+ standards and protocols. Of course, other multiple access schemes and wireless protocols may be utilized.
[0064] The RANs 120 are in communication with the core network 130 to provide the UEs 110 with voice, data, application, Voice over Internet Protocol (VoIP), or other services. As appreciated, the RANs 120 and/or the core network 130 may be in direct or indirect communication with one or more other RANs (not shown). The core network 130 may also serve as a gateway access for other networks (such as the PSTN 140, the Internet 150, and the other networks 160). In addition, some or all of the UEs 110 may include functionality for communicating with different wireless networks over different wireless links using different wireless technologies and/or protocols.
[0065] The RANs 120 may also include millimeter and/or microwave access points (APs). The APs may be part of the BSs 170 or may be located remote from the BSs 170. The APs may include, but are not limited to, a connection point (an mmW CP) or a BS 170 capable of mmW communication (e.g., a mmW base station). The mmW APs may transmit and receive signals in a frequency range, for example, from 24 GHz to 100 GHz, but are not required to operate throughout this range. As used herein, the term base station is used to refer to a base station and/or a wireless access point.
[0066] Although FIG. 1 illustrates one example of a communication system, various changes may be made to FIG. 1. For example, the communication system 100 could include any number of user equipments, base stations, networks, or other components in any suitable configuration. It is also appreciated that the term user equipment may refer to any type of wireless device communicating with a radio network node in a cellular or mobile communication system.
[0067] The networks 130, 140, 150 and/or 160 will commonly transfer data as packets, in which network packets are formatted units of data carried by packet-mode computer networks. The embodiments presented below are primarily concerned with the transmission of such packets over networks and the management of the latencies of such transmissions.
[0068] FIG. 2 illustrates an example network 200 that includes networking devices 210a, 210b, 210c, 210d, and 210e, which can be routers, networking switches, servers, or other networking devices. For example, networking device 210a could be a server sending packets, 210b, 210c, and 210d routers, and 210e an edge device. To simplify the following discussion, these networking devices will often be referred to as nodes, but it will be understood that each of these can be various networking devices.
[0069] In the following, each of the nodes 210a, 210b, 210c, 210d, and 210e can be referred to as a node 210, or they can be collectively referred to as the nodes 210. While only five nodes 210 are shown in FIG. 2, a network would likely include significantly more than five nodes 210. In much of the following discussion, the nodes will sometimes be referred to as routers, but will be understood to more generally be nodes. FIG. 2 also illustrates a network control/management plane 212 that is communicatively coupled to each of the routers or nodes 210 of the network, as represented by dashed lines. The control/management plane 212 can be used to perform a variety of control path and/or control plane functions. The control/management plane of a network is the part of the node architecture that is responsible for collecting and propagating the information that will be used later to forward incoming packets. Routing protocols and label distribution protocols are parts of the control plane.
[0070] FIG. 2 can be used to illustrate the management of the latencies of packets as transmitted from a sending network device, or node, 210a over the intermediate nodes 210b-210d to a receiving network device, or node, 210e. FIG. 2 presents an example where the service level objective (SLO) indicates an end-to-end delay or latency with a lower bound of lb milliseconds and an upper bound of ub milliseconds, and where lb and ub are both taken to be the same (an “on-time guarantee”) at 8ms. The total non-queuing path propagation latency will be the sum of the fixed delays between each of the nodes and the local delays in each of the intermediate nodes. In FIG. 2, the delay for a packet to travel between node 210a and node 210b is 1ms, and the delays between node 210b and node 210c, between node 210c and node 210d, and between node 210d and node 210e are all 500µs. The control/management plane 212 can notify each node of the number of remaining nodes, or hops, towards each possible destination and the predicted non-queuing propagation latency towards each destination, calculated by adding up the non-queuing propagation latency of all hops from this node to the destination. As described in more detail below, based upon this information and the amount of delay that the packet has experienced so far, the node can determine a local latency budget for the packet in the node.
[0071] Continuing with the example of FIG. 2, at node 210a the predicted propagation delay is (1ms+500µs+500µs+500µs)=2.5ms and there are 4 hops to arrive at destination node 210e, giving node 210a a local latency budget of:
(8-2.5)ms/4=1.375ms.
The amount of propagation time from node 210a to 210b is 1ms, and the control/management plane 212 provides node 210b with a predicted propagation delay of (500µs+500µs+500µs)=1.5ms and 3 remaining nodes. Taking the allotted 8ms for the entire end-to-end delay, subtracting the delay so far (1ms propagation delay, 1.375ms latency budgeted to node 210a) and the predicted additional delay (1.5ms), and then dividing by the number of remaining nodes (3) gives a local budget for latency at node 210b of:
(8-2.375-1.5)ms/3=1.375ms.
The node 210b can determine when to transmit the packet based on this budget. The latency budget is similarly determined for node 210c based upon the 8ms total delay, a delay so far of (1.375+1+1.375+0.5)ms, and a predicted additional delay of 1ms, giving a latency budget for node 210c of:
(8-4.25-1)ms/2=1.375ms.
For node 210d, the latency budget is similarly calculated as:
(8-6.125-0.5)ms/1=1.375ms.
With this budgeting, the packet arrives at node 210e in (6.125+1.375+0.5)ms=8.00ms, as desired.
[0072] If the actual local delay or latency is not as predicted, the local latency budgets can be adjusted accordingly. For example, if there were 1ms of additional unexpected delay related to node 210b, either arising on node 210b itself or during propagation between node 210b and 210c, this loss can be taken out of the local latency budgets of nodes 210c and 210d. Revising the calculation of the previous paragraph to add in this extra 1ms delay, the local latency budget of node 210c becomes:
(8-5.25-1)ms/2=0.875ms.
The local latency budget of node 210d becomes:
(8-7.125-0.5)ms/1=0.875ms.
This again allows the packet to arrive at the designated lb = ub = 8ms. As discussed in more detail below, when the upper and lower bounds differ, both a minimum and a maximum local latency budget are used:
Min-Local-latency-budget = (lb - latency-in-packet - path-delay-to-destination) / number-hops-to-destination; and
Max-Local-latency-budget = (ub - latency-in-packet - path-delay-to-destination) / number-hops-to-destination.
In these expressions, “latency-in-packet” corresponds to the cumulative amount of delay or latency already experienced by the packet since leaving its source, and “path-delay-to-destination” is the expected amount of fixed transmission delay before the packet reaches its destination node.
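As an illustration, the following Python sketch transcribes these two budget formulas and checks them against the FIG. 2 example; the function and argument names are illustrative only and are not taken from the disclosure.

    def local_latency_budgets(lb, ub, latency_in_packet, path_delay_to_dest, hops_to_dest):
        # Per-node latency budgets, directly transcribing the two formulas
        # above (all values in milliseconds).
        min_budget = (lb - latency_in_packet - path_delay_to_dest) / hops_to_dest
        max_budget = (ub - latency_in_packet - path_delay_to_dest) / hops_to_dest
        return min_budget, max_budget

    # The on-time (lb = ub = 8ms) example of FIG. 2 at node 210a: no delay has
    # accumulated yet and 2.5ms of fixed propagation delay remains over 4 hops.
    assert local_latency_budgets(8.0, 8.0, 0.0, 2.5, 4) == (1.375, 1.375)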
[0073] FIG. 3 is a flowchart for one embodiment of the latency based forwarding of packets as illustrated in the example of FIG. 2. Beginning at 301, a packet is received at an intermediate node, such as one of nodes 210b, 210c, or 210d in FIG. 2. Once the packet is received at 301, the packet’s service level objective (SLO) can be determined from the packet’s header at 303 and the delay experienced by the packet so far is assessed at 305.
[0074] The SLO of a packet can differ between packets and can be maintained in a packet’s forwarding header or other forwarding metadata, and is determined by the node at 303. The SLO can indicate one or both of an upper bound and a lower bound for the total latency, or delay, for the packet as it is transmitted from the sending node (210a in FIG. 2) to the receiving node (210e in FIG. 2).
[0075] The packet can also carry information on the accumulated delay metadata, such as the amount of accumulated delay or latency experienced by the packet so far since it left the sending node. (In much of the following, latency and delay are used interchangeably in this context.) At 305, the node assesses the delay and can also update the delay before passing the packet on to the next node. In some embodiments, the accumulated delay metadata can be a timestamp: the packet can carry its sending time as a timestamp, and the accumulated delay can be assessed as the difference between the current time and the packet’s sending time, obtained by subtracting the sent time from the received time. This embodiment uses network time synchronization, but can keep the packet contents unaltered. In other embodiments, as discussed in more detail below, the packet can be changed to update the cumulative latency, where this approach does not require the synchronization of time across the different nodes. In other alternative embodiments, rather than assessing the current delay, the node can instead update the remaining SLO.
[0076] At 307, based upon the input of the packet’s SLO (from 303) and delay (from 305), the node can determine the delay budget for the packet. As illustrated in FIG. 2, where the information on the number of remaining hops and the predicted propagation delay is provided by the control/management plane 212, a path predictor can also supply path propagation and delay predictions, such as the number of hops to the destination and the fixed delays between these hops. With respect to receiving the number of hops and fixed delays, a node can access this information in various ways in order to make better latency based forwarding decisions, depending on the embodiment. Depending on the embodiment, this information can be stored/maintained on the node itself. In some embodiments, this information can be configured/provisioned using a control or management application. In other embodiments, it can be communicated using a control plane protocol such as an IGP (interior gateway protocol). In general, this information can be communicated/received separately from the packet itself, and can involve a different mechanism. In one set of embodiments, a node can receive, or be aware of, this information by way of a forwarding information database (FIB), from where it is disseminated using a separate control plane mechanism (IGP, provisioning at the node via a controller, etc.). Consequently, the assessment of 307 can be based on the inputs of the remaining path information: the number of nodes remaining; the fixed delay for the remainder of the path, which can be computed from the number of links remaining with propagation delay and the possible number of nodes with fixed minimum processing delay; and information precomputed by the control/management plane 212 and disseminated along with the path information. The output of the assessment of 307 is the delay budget.
[0077] In one set of embodiments for the compute logic used by the node at 309, the fixed latencies and the current delay can be subtracted from the SLO, which can then be divided by the number of remaining nodes, as described above with respect to the local delay budgets of the nodes 210b, 210c, and 210d in FIG. 2. In some embodiments, the target latency or delay at the node can be based on the midpoint between the lower bound and the upper bound as determined from the packet’s SLO at 303. In other embodiments, an adjustment step can be included, such as setting the budget closer to the lower bound based on the number of nodes remaining; for example, the target amount of delay for the node can be set as target = midpoint - (midpoint - lower bound)/remaining nodes. The assessment of delay budgets is described in more detail below.
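The following Python sketch shows this midpoint-based target with the adjustment just quoted; the function name and arguments are hypothetical, chosen only to mirror the terms of the formula.

    def node_target_delay(lower_bound, upper_bound, remaining_nodes):
        # target = midpoint - (midpoint - lower bound) / remaining nodes
        midpoint = (lower_bound + upper_bound) / 2.0
        return midpoint - (midpoint - lower_bound) / remaining_nodes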
[0078] Based on the delay budget from 307, at 309 the node can take a quality of service (QoS) action. For example, the node can maintain one or more queues in which it places packets ready for forwarding, and then select a queue, and a placement within the queue, whose expected delay is the closest match for the packet’s target delay budget (e.g., the first queue whose delay is less than or equal to the target delay). The node can assess a queue’s latency as a function of queue occupancy, as well as through other options, such as the use of defined delay queues, for example. If the target delay budget is negative, a packet will miss its SLO. In case of a negative budget, depending on the embodiment, the node could: discard or drop the packet; mark the packet as late, so that nodes downstream no longer need to prioritize the packet; or record an SLO violation in a statelet (e.g., update a counter) of the packet. In other embodiments, the QoS action could include speeding up or slowing down a packet, or forwarding along a slower versus a faster path.
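A minimal sketch of this queue-selection step is given below, assuming each queue advertises the delay a newly enqueued packet is expected to see; the DelayQueue type and all names are illustrative, not taken from the disclosure.

    from collections import namedtuple

    # Each queue advertises the delay a newly enqueued packet is expected to see.
    DelayQueue = namedtuple("DelayQueue", ["name", "expected_delay"])

    def qos_action(target_budget, delay_queues):
        # Pick the first queue whose expected delay is less than or equal to
        # the target delay budget; delay_queues is assumed sorted from slowest
        # to fastest. A negative budget means the packet will miss its SLO.
        if target_budget < 0:
            return "drop"  # or mark the packet late / record an SLO violation
        for q in delay_queues:
            if q.expected_delay <= target_budget:
                return q
        return delay_queues[-1]  # no match: fall back to the fastest queue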
[0079] At 311 the packet is forwarded to the next node of its path. For example, after being entered into a queue based on its delay budget at 309, the packet would work its way up the queue until it is transmitted over the network.
[0080] FIG. 4 is a schematic diagram illustrating exemplary details of a node 400, such as a router, switch, server or other network device, according to an embodiment. The node 400 can correspond to one of the nodes 210a, 210b, 210c, 210d, or 210e of FIG. 2. The router or other network node 400 can be configured to implement or support embodiments of the present technology disclosed herein. The node 400 may comprise a number of receiving input/output (I/O) ports 410, a receiver 412 for receiving packets, a number of transmitting I/O ports 430 and a transmitter 432 for forwarding packets. Although shown separated into an input section and an output section in FIG. 4, often these will be I/O ports 410 and 430 that are used for both downstream and upstream transfers, and the receiver 412 and transmitter 432 will be transceivers. Together the I/O ports 410, receiver 412, I/O ports 430, and transmitter 432 can be collectively referred to as a network interface that is configured to receive and transmit packets over a network.
[0081] The node 400 can also include a processor 420 that can be formed of one or more processing circuits and a memory or storage section 422. The storage 422 can be variously embodied based on available memory technologies and in this embodiment is shown to have a cache 424, which could be formed from a volatile RAM memory such as SRAM or DRAM, and long-term storage 426, which can be formed of non-volatile memory such as flash NAND memory or other memory technologies. Storage 422 can be used for storing both data and instructions for implementing the packet forwarding techniques described here. Other elements on node 400 can include the programmable content forwarding plane 428 and the queues 450, which are explicitly shown and described in more detail below as they enter into the latency based packet forwarding methods developed in the following discussion. Depending on the embodiment, the programmable content forwarding plane 428 can be part of the more general processing elements of the processor 420 or a dedicated portion of the processing circuitry.
[0082] More specifically, the processor(s) 420, including the programmable content forwarding plane 428, can be configured to implement embodiments of the present technology described below. In accordance with certain embodiments, the memory 422 stores computer readable instructions that are executed by the processor(s) 420 to implement embodiments of the present technology. It would also be possible for embodiments of the present technology described below to be implemented, at least partially, using hardware logic components, such as, but not limited to, Field-programmable Gate Arrays (FPGAs), Application-specific Integrated Circuits (ASICs), Application-specific Standard Products (ASSPs), System-on-a-chip systems (SOCs), Complex Programmable Logic Devices (CPLDs), special purpose computers, etc.
[0083] FIG. 5 provides an example of a network in which latency based forwarding of packets can be implemented. More specifically, FIG. 5 illustrates an aggregation ring, which is a common metropolitan broadband/mobile-access topology. In the example of FIG. 5, each of six ring routers (RA 501, RB 503, RC 505, RD 507, RE 509, RF 511) is connected to 100 access nodes, spoke routers (Ra0 501-0 to Ra99 501-99, Rb0 503-0 to Rb99 503-99, Rc0 505-0 to Rc99 505-99, Rd0 507-0 to Rd99 507-99, Re0 509-0 to Re99 509-99, Rf0 511-0 to Rf99 511-99). In FIG. 5, a packet is sent from the sending node Ra0 501-0 to the receiving node Re0 509-0, traversing the ring nodes of routers RA 501, RB 503, RC 505, RD 507, and RE 509.
[0084] Under prior queueing techniques for packets, at each of the ring routers (RA 501, RB 503, RC 505, RD 507, RE 509, RF 511), 99 packets, one for each other spoke router (e.g., Ra1 501-1, ..., Ra99 501-99), could arrive simultaneously (assuming all links have the same speed). Without indicated minimum latencies for the packets, there is no mechanism for a router to establish which packets could be relatively delayed and no way to order the transmission of these packets. Consequently, some packets that could be delayed and still stay on budget may end up being queued in front of more urgent packets. The latency based forwarding introduced here allows a packet with a lower delay SLO to be de-queued earlier than packets with a higher delay SLO. Under the latency based approach, because each hop and the queuing of prior hops reduce the acceptable per-hop delay, packets which have to cross more ring nodes would experience less per-hop delay in the nodes than those packets with the same SLO but travelling fewer hops. The latency based SLO therefore can provide fairer/more-equal delay across rings independently of how far away in the ring a sender and receiver are located. For example, the minimum delay can be set to be larger than the worst-case “across-ring” delay, which results in the same delivery latency independent of path in the absence of congestion. This only requires the simple LBF queuing option described with FIG. 9, in which dequeuing prioritization only considers lb (the lower bound SLO). When the more comprehensive LBF option described below with respect to FIG. 13 is used, then dequeuing will also prioritize packets under observation of ub (the upper bound LBF), prioritizing packets with different path-lengths and SLOs when under congestion.
[0085] FIG. 6 is a high level overview for an embodiment of end-to-end latency based forwarding (LBF) 600. As shown in FIG. 6, latency based forwarding provides a machinery for an end-to-end network consisting of a sending network device, or sender, RS 601, a receiving network device, or receiver, RR 609, and one or more intermediate or forwarding nodes. In FIG. 6, three intermediate nodes RA 603, RB 605 and RC 607 are shown. In FIG. 6, the fixed latency for transmission between a pair of nodes is 1ms, and each of the intermediate nodes adds a delay of an LBF queue latency. In this way the locally incurred latency at the node is added to the total delay incurred so far, so that it can be used by the subsequent node as one of the inputs to make its decision. As a result, the total end-to-end latency between the sending router or node RS 601 and the receiving router or node RR 609 is managed.
[0086] A packet 621 includes a destination header that indicates its destination, RR 609 in this example. This destination header is used by each forwarding node RA 603, RB 605, RC 607 to steer packet 621 to the next forwarding node or final receiver RR 609. In addition to the network header, for latency based forwarding packet 621 adds three parameters to a forwarding header: edelay, lmin and lmax. Although the present discussion is primarily based on an embodiment where this forwarding metadata is carried by a forwarding, or LBF, header, in alternate embodiments the forwarding, or LBF, metadata can be in a packet that can, for example, be coupled with a command in an internet protocol. The edelay parameter allows each forwarding node (RA 603, RB 605, RC 607) to determine the difference in time (latency) between when the node receives the packet and when the sender RS 601 sent the packet. In one set of embodiments, the edelay parameter is the latency, or delay, encountered so far. It is updated at each node, which adds the latency locally incurred so far plus the known outgoing link latency to the next hop. In another set of embodiments, a sender timestamp is added once by the sender RS 601, where subsequent nodes compute the latency (edelay) incurred so far by subtracting the sending timestamp from the current time. In the sender timestamp embodiments, the forwarding nodes RA 603, RB 605, RC 607 do not need to update the field, but this method does require a time-synchronized network. In other alternate embodiments, a desired time of arrival could also be indicated. The parameters lmin and lmax are respectively an end-to-end minimum and maximum latency for the Service Level Objectives (SLO). The latency with which the final receiving node RR 609 receives the packet is meant to be between the minimum and maximum latency values lmin and lmax.
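A minimal sketch of this per-packet forwarding metadata, under the variant in which each node updates the edelay field, might look as follows in Python; the class and function names are illustrative only.

    from dataclasses import dataclass

    @dataclass
    class LBFHeader:
        edelay: float  # latency accumulated since the sender transmitted the packet
        lmin: float    # end-to-end minimum latency of the SLO
        lmax: float    # end-to-end maximum latency of the SLO

    def update_edelay(hdr, locally_incurred, outgoing_link_latency):
        # Each node adds the latency it incurred locally plus the known
        # outgoing link latency to the next hop.
        hdr.edelay += locally_incurred + outgoing_link_latency
        return hdr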
[0087] FIG. 7 considers the node behavior for a pair of nodes from FIG. 6 in more detail. FIG. 7 illustrates two of the latency based forwarding nodes of FIG. 6, such as RA 603 and RB 605, and the resource manager 611. Each of the nodes RA 603 and RB 605 includes a control plane 711, 712 (such as one based on an interior gateway protocol, IGP), a forwarding plane 731, 732 and a latency based forwarding protocol queue or queues 741, 742 in which the packets are placed for the next hop.
[0088] As shown in FIG. 7, embodiments of latency based forwarding can be very generic in describing how forwarding nodes RA 603, RB 605, RC 607 can achieve this forwarding goal. A centralized resource manager 611 can provide control/policy/data to forwarding nodes RA 603, RB 605, RC 607 and/or a distributed mechanism. In some embodiments, the number of hops from the current node to the destination and/or the minimal latency to the destination can be accessed, such as by being communicated by a “control plane” 711, 712 (e.g., a protocol such as IGP or provisioned through a controller, for example). In some embodiments, this information can be added to a forwarding information database, or FIB, along with other information such as the next hop. A forwarding plane 731, 732 can be used to help steer the packet 621 on every forwarding node to the next-hop according to the packet’s destination parameter. With the LBF queues for the next hop 741, 742, the packets will have updated edelay values 743, 744 that are provided to the forwarding plane of the next LBF node. A packet can be entered into the queue based on its delay budget. If the edelay value of a packet is over the maximum latency, the packet can be discarded. Depending on the embodiment, the LBF queue for the next hop 741, 742 can be one or multiple queues and, for embodiments with multiple queues, the queues can be ranked or un-ranked.
[0089] The latency based forwarding machinery described with respect to FIG. 7 can be broadly implemented and is sufficient to build vendor-proprietary end-to-end networks with latency based forwarding. However, it may be insufficient to build interoperable implementations of the nodes, such as RA 603 and RB 605 of FIG. 7, if both forwarding nodes are from different vendors and do not agree on a common policy. It can also be insufficient to allow a third-party resource manager 611 to calculate the amount of resources required by latency based forwarding traffic and the delay bounds practically experienced by traffic flows based on the load of the traffic. Additionally, it may be insufficient to define extensions to distributed control planes 711, 712 to support latency based forwarding operations, and also insufficient to establish a generic extendable framework for programming different LBF policies, especially if the programming of LBF nodes is made available to third parties.
[0090] To overcome these situations, the following embodiments introduce latency based forwarding destination policies. These policies can enable parallel use of multiple end-to-end LBF policies in multi-vendor or standardized environments. The destination policies can also enable accurate calculation and prediction of latencies and loads by external controller/admission systems.
[0091] The embodiments presented below introduce a “policy” parameter or metadata field into a packet’s LBF packet header. The process can use per-destination egress queuing policy parameters (“LBF_dest_parameters”) that can be attached to a destination forwarding database (forwarding information base, or FIB). A published or external API can be used to populate the LBF_dest_parameters. The embodiments can introduce a function (“LBF_queuing_policy”) to map from LBF_dest_parameters to enqueuing LBF queue parameters, which can be designed to exploit a programmable Forwarding Plane Engine (FPE). A published or defined “LBF queuing policy” can be designed to expect/assume a “PHB” (Per Hop Behavior) as used for standardized “queuing” behavior specifications. Examples of LBF destination policies can include an Equal Share Lmin Destination (ESLD) LBF policy function and the flooding of configured/measured link propagation latency as a new IGP parameter.
[0092] FIG. 8 illustrates the use of destination policies for latency based forwarding of packets and an embodiment for node behavior and node components. FIG. 8 again shows a pair of LBF nodes RA 803 and RB 805 and a resource manager 811. Each of LBF nodes RA 803, RB 805 again includes a control plane 811, 812, a forwarding plane 831, 832, the LBF queue for the next hop 841, 842, and also represents the updating of the packet’s delays at 843, 844. Relative to FIG. 7, each of LBF nodes RA 803 and RB 805 now also includes a destination forwarding database, or FIB (forwarding information base), 833, 834 and also schematically represents the enqueueing of packets at 851, 852 and the dequeuing of packets at 853, 854.
[0093] A packet 821 again includes a network header, indicating a destination for the packet, and an LBF header indicating the parameters edelay, lmin, and lmax. Additionally, the LBF header of packet 821 now also includes a parameter indicating an LBF destination policy, lbf_policy. As is discussed in more detail below, an LBF destination policy can include one or more of LBF destination parameters, as illustrated at 823, an LBF mapping policy, as illustrated at 825, LBF queueing parameters, and an LBF queueing policy.
[0094] The elements of the embodiment of FIG. 8 added relative to the embodiment of FIG. 7 can be used to implement the destination policies for the latency based forwarding of packets. For example, entities independent of the forwarding nodes, such as a centralized resource manager 811, can calculate for each node and each required destination the policy and destination specific LBF_dest_params 823 and send them to each node, where they are remembered for later use by the forwarding plane 831, 832, commonly in a component usually called the FIB 833, 834. In another example, a distributed control plane protocol implemented in the control plane 811, 812 on every LBF forwarding node can perform the calculation of the LBF_dest_params for each required destination and send the result to the FIB 833, 834.
[0095] To consider one embodiment further, the distributed control plane protocol can be a so-called SPF (Shortest Path First) protocol like OSPF (Open Shortest Path First) or ISIS (Intermediate System to Intermediate System). The SPF calculation can be performed as follows:
a) The SPF protocol configuration on each LBF forwarding node is extended in support of LBF with the physical latency of each outgoing interface/next-hop on which LBF is to be supported. Optionally, this can be refined by fixed processing delays of this node, set automatically or through configuration.
b) Each SPF protocol instance floods its interfaces’ physical latencies through an appropriate extension message of the IGP.
c) When performing the SPF calculation, in addition to calculating the shortest path/metric to each destination, the control plane also adds up the physical latency of each hop on the path to the destination. This sum becomes the “todelay” LBF_dest_parameter. The total number of hops on the shortest path to the destination becomes the “tohops” LBF_dest_parameter.
With these two LBF_dest_parameters (todelay, tohops), an Equal Share Lmin Destination (ESLD) LBF policy as described below can be supported. This embodiment is a dependency of the overall system and can enable parallel use of multiple end-to-end LBF policies in multi-vendor or standardized environments.
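For illustration, a Dijkstra-style shortest-path-first computation extended per steps a)-c) above could accumulate these two parameters as in the following Python sketch; the data structures (graph, link_latency) are hypothetical stand-ins for the IGP link state database.

    import heapq

    def spf_with_lbf_params(graph, link_latency, source):
        # Alongside the routing metric, accumulate the flooded physical link
        # latencies (todelay) and the hop count (tohops) to every destination.
        # graph[u] -> iterable of (v, metric); link_latency[(u, v)] -> latency.
        best = {source: (0, 0.0, 0)}  # node -> (metric, todelay, tohops)
        heap = [(0, 0.0, 0, source)]
        while heap:
            metric, todelay, tohops, u = heapq.heappop(heap)
            if metric > best[u][0]:
                continue  # stale heap entry
            for v, link_metric in graph.get(u, ()):
                cand = (metric + link_metric,
                        todelay + link_latency[(u, v)],
                        tohops + 1)
                if v not in best or cand[0] < best[v][0]:
                    best[v] = cand
                    heapq.heappush(heap, cand + (v,))
        return best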
[0096] Considering the operation of the forwarding plane 831, 832, when a packet 821 is received by an LBF forwarding node such as RA 803, it is processed by a component called here the forwarding plane 831. The forwarding plane 831 can use the destination field from the packet 821 to perform from the FIB 833 the next_hop lookup and the newly introduced LBF_dest_params lookup for the destination 823. The forwarding plane 831 then performs the calculation illustrated at 825 to calculate the LBF_queuing_params, where the formula for this function depends on the policy. The forwarding plane 831 then enqueues the packet 821, together with the LBF_queuing_params and the lbf_policy represented at 851, into the LBF queue for the next hop 841.
[0097] The mechanisms for the control plane 811, 812 and forwarding plane 831, 832 described above enable support of multiple different LBF policies simultaneously. The queuing policy for a specific packet is determined by the lbf_policy of packet 821, which is an identifier for the policy. Any destination LBF policy can be constituted of: the control plane mechanisms necessary to derive the LBF_destination_params; the algorithm 825 to calculate the LBF_queuing_params from the LBF_destination_params and the packet LBF parameters; and the behavior of the LBF queue, defined by the behavior for dequeuing 853, 854. The following describes these aspects for an Equal Share Lmin Destination, or ESLD, LBF policy embodiment, where the name is chosen to indicate that this policy is primarily concerned with managing lmin, and provides only minimal support for lmax.
[0098] In an ESLD LBF policy embodiment, the LBF parameters 823 used during destination lookup by the forwarding plane 831 into the FIB 833 are as follows:
LBF_dest_params: todelay, tohops
As described above, these parameters can be derived from centralized or distributed control plane functions. The LBF queuing parameters 825 that are attached to the packet when sending it to the LBF queue 841 can be as follows:
LBF_queuing_parameters: tqmin, tqmax,
where tqmin and tqmax are respectively a minimum and a maximum queueing time.
[0099] The ESLD function 825 mapping from the LBF_dest_params and packet 821 parameters to the LBF_queuing_parameters can be as follows:
tqmin = max(tnow + (lmin - edelay - todelay) / tohops, tnow)
tqmax = tnow + (lmax - edelay - todelay) / tohops
if (tqmax < tnow) -> LBF early packet discard
In the above, tnow is the time at which the forwarding plane 831 performs the calculation, which is the time when packet 821 is enqueued into the LBF queue 841. The LBF early packet discard function is included for completeness, as the LBF queuing policy may be assumed to expect that tqmin >= tnow and tqmax >= tnow.
[00100] In the described embodiment for ESLD, the dequeuing policy 853 can operate in the abstract as follows: a packet received with tqmin is buffered by the LBF queue until tnow = tqmin. At tnow = tqmin, the packet is passed on to a FIFO queue from which packets are dequeued as soon as the outgoing interface to the next-hop can send another packet. Other embodiments can be used, but this is the simplest queuing behavior in routers/switches. When the packet could be sent to the outgoing interface at time tsend, and tsend > tqmax, the packet is discarded or, alternatively, the packet can be marked with a notification that it has already exceeded its maximum latency and then sent.
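The ESLD mapping and early-discard check can be summarized in a few lines of Python; this is a sketch of the formulas above, not an implementation from the disclosure.

    def esld_queuing_params(tnow, lmin, lmax, edelay, todelay, tohops):
        # Map the packet's LBF parameters and the FIB's LBF_dest_params to
        # the LBF_queuing_parameters; return None on early discard.
        tqmin = max(tnow + (lmin - edelay - todelay) / tohops, tnow)
        tqmax = tnow + (lmax - edelay - todelay) / tohops
        if tqmax < tnow:  # lmax can no longer be met
            return None   # LBF early packet discard
        return tqmin, tqmax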
[00101] FIG. 9 is a schematic representation of an embodiment for latency based forwarding of packets using an equal share lmin destination policy. An IGP link state database 939 is represented schematically for the current node of router RA 803, the next hop RB 805, and the destination RR 809, for a total of two hops. In this example, the fixed pair-wise latency between each of these is 1ms. This corresponds to the database information of: Destination=RR; Nexthop=RB; tohops=2; and todelay=2ms. This information is supplied to the forwarding database FIB 833.
[00102] The forwarding plane 831 receives the packet, extracts the packet’s destination, performs a FIB lookup, and receives back from the FIB 833 the nexthop, tohops, and todelay values. The forwarding plane 831 also extracts the LBF policy from the packet to obtain the function or functions (fn) to be used for the LBF mapping policy. In the ESLD embodiment, the computations include a minimum and a maximum queueing time:
tqmin = max(tnow + (lmin - edelay - todelay) / tohops, tnow)
tqmax = tnow + (lmax - edelay - todelay) / tohops
In the embodiment illustrated in FIG. 9, an early packet discard is included if the packet will not be able to meet its maximum latency, tqmax < tnow. The packet 821 can then be enqueued into the LBF queue 841 based on the packet’s queueing policy (q_policy), its tqmin value, and its tqmax value.
[00103] The LBF queue 841 can be part of the one or more queues 450 of FIG. 4. The packet 821 is ranked based upon rank=tqmin, and can then be inserted into the PIFO (Push In First Out) 947 based on its rank. The packets in the PIFO 947 are then passed into the FIFO (First In First Out) 949, being dequeued from the PIFO 947 once their rank is less than or equal to tnow. The packets, including packet 821, in the FIFO 949 can then be dequeued when the interface (i.e., transmitter 432 and I/Os 430 of FIG. 4) is free. If a packet in the FIFO 949 has tqmax < tnow, it will not meet its lmax, and it can be discarded or marked with a notification.
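The PIFO-plus-FIFO arrangement of FIG. 9 can be modeled with a heap and a deque, as in the following Python sketch; the class and method names are illustrative, and a hardware implementation would differ considerably.

    import heapq
    from collections import deque

    class ESLDQueue:
        def __init__(self):
            self._pifo = []      # heap of (rank=tqmin, seq, tqmax, packet)
            self._fifo = deque()
            self._seq = 0        # tie-breaker so packets never compare directly

        def enqueue(self, packet, tqmin, tqmax):
            heapq.heappush(self._pifo, (tqmin, self._seq, tqmax, packet))
            self._seq += 1

        def dequeue(self, tnow):
            # Move every packet whose rank has matured into the FIFO.
            while self._pifo and self._pifo[0][0] <= tnow:
                self._fifo.append(heapq.heappop(self._pifo))
            while self._fifo:
                _, _, tqmax, packet = self._fifo.popleft()
                if tnow > tqmax:
                    continue     # missed lmax: discard (or mark and send)
                return packet
            return None          # nothing eligible to send yet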
[00104] Other embodiments can use LBF policies other than ESLD, but ESLD illustrates an embodiment to provide a complete working machinery for implementing LBF destination policies. ESLD can have the benefits of a simple implementation, splitting up buffering equally across each hop and removing the need for high-buffering on the receiver or last-hop router, and can reduce burstiness across multiple hops incrementally on every hop.
[00105] As described above, the use of destination policies for latency based forwarding provides the ability to define the LBF policies well enough so that they can be used in conjunction with resource management systems. Embodiments for implementing LBF policies with low levels of complexity in high speed forwarding hardware can be implemented through the components of an extension to FIB lookups and simple calculations to derive queuing parameters.
[00106] The extension to FIB lookups allows latency based forwarding to function by only requiring parameters to be attached to the FIB, which can be a convenient way to implement LBF in high-speed router or switch hardware. The term FIB is used here because it is well recognized in the industry but, more generally, this is not meant to imply assumptions as to how it works or is implemented, other than that it is expected to act as an interface that allows the system to derive the LBF_parameters from a lookup of a packet’s destination. Setting up these destination-to-LBF_parameter mappings is something that, for example, can be supported externally through a third-party PCE (Path Computation Element) or controller, or that could be part of a standards specification of an extension to a distributed routing protocol like OSPF or ISIS.
[00107] With respect to the simple calculations (in, e.g., the P4 programming language) to derive the queuing parameters, building a queue directly from the parameters attached to destination lookups alone could be very difficult. Instead, one readily feasible design in high speed forwarding hardware is to leverage the programmability of the forwarding engine to calculate the queuing parameters from which the actual queueing behavior can be defined. The calculations described above for ESLD are easily programmed in the P4 forwarding engine programming language, for example.
[00108] Next, the latency based forwarding destination policies are extended to “strict priority destination” (SPD) policies. These SPD policies extend the destination policies described above to further take into account the maximum latencies of a packet in the forwarding process. The usefulness of such policies can be illustrated with respect to FIG. 10.
[00109] FIG. 10 is a schematic diagram of multiple nodes connected to serve as inputs to a single node, which can lead to burst accumulation of packets at the receiving node. In FIG. 10, a node, such as router RA 1007, has a large number (here, 100) of input nodes or routers 1001-1, ..., 1001-99 that all can send traffic that will end up on a queue 1009 of an outgoing interface. Of interest here is a traffic flow 1011 from 1001-1 and the latency that the queue 1009 of the outgoing interface may introduce to it.
[00110] In prior asynchronous queuing mechanisms, it cannot be avoided that a packet can arrive simultaneously on every incoming interface. This can then result in the worst case in which the queue 1009 is queueing up 100 packets and de-queuing them. In none of the prior queuing schemes is there a way to order this de-queuing such that packets with less of an effective latency budget for reaching the destination will be preferred over packets that have more of a remaining latency budget. The situation can be considered further by looking at a typical use-case as presented in FIG. 11, which may be a metropolitan aggregation network, such as a so-called "market network" of an MSO (Multi Service Operator), the backhaul network of a mobile operator, or the whole network of a metropolitan operator.
[00111] FIG. 11 illustrates the forwarding of two traffic flows through an aggregation network of the five nodes of routers RA, ..., RE (1107, 1117, 1127, 1137, 1147). Each of these aggregation routers has 100 connecting routers: 1101-0, ..., 1101-99 for router RA 1107; 1111-0, ..., 1111-99 for router RB 1117; 1121-0, ..., 1121-99 for router RC 1127; 1131-0, ..., 1131-99 for router RD 1137; and 1141-0, ..., 1141-99 for router RE 1147. FIG. 11 considers two traffic flows, [A] 1199-A from router 1101-0 to router 1141-0 and [B] 1199-B from router 1111-0 to router 1131-0.
[00112] Flow [A] 1199-A has to pass through 4 queues (1109, 1119, 1129, 1139) in which it can accumulate (as explained with respect to FIG. 10) up to 99x the latency introduced by serializing one packet. Flow [B] 1199-B only has to pass through 2 queues (1119, 1129) where it can accumulate this amount of latency, so [B] 1199-B has to suffer through less possible queuing latency than flow [A] 1199-A.
[00113] If a network provider wants to offer very low latency services through this network, then the absolute latency it can offer is highly dependent on the point of attachment of the senders and receivers. Without an arrangement such as that presented here, the service latency possible for flow [A] 1199-A would be much worse than that which could be offered for flow [B] 1199-B, and this highly varying degree of service guarantees for flows with different points of attachment to the network makes it hard to effectively offer and plan for these services. It also does not allow offering of the best possible (i.e., lowest latency) services for the worst possible paths.
[00114] One reason for the difference in low-latency quality of experience between flow [A] 1199-A and flow [B] 1199-B is the difference in propagation delay: flow [A] 1199-A has to pass through two more links (the link from RA 1107 to RB 1117 and the link from RD 1137 to RE 1147) than flow [B] 1199-B. The latency based forwarding techniques described so far can address this issue because the network operator could ensure that flow [A] 1199-A and flow [B] 1199-B have an end-to-end lmin SLO that would be larger than the worst path (the one for flow [A] 1199-A). As a result, the latency based forwarding processing for lmin would ensure that flow [B] 1199-B is delayed so as not to be faster than flow [A] 1199-A. However, the latency based forwarding destination policy described so far does not help to overcome the problem described for FIG. 10: for example, with the Equal Share Lmin Destination (ESLD) policy, both flow [A] 1199-A and flow [B] 1199-B would need to account for the 99x latency introduced by each of the considered queues.
[00115] The following discussion defines an LBF destination policy that allows the network to reduce the maximum queuing latency for the problem described for FIGs. 10 and 11 when the competing flows differ in their LBF SLO. For example, considering the queuing delay for flow [A] 1199-A on queues 1119, 1129 and 1139, the fact that the number of queuing hops and the physical propagation delay of a path from router RA to router RE are larger than those of any other path crossing through these queues allows this traffic to be automatically prioritized, guaranteeing a lower maximum latency for flow [A] 1199-A. More specifically, the following introduces a new LBF destination policy called "strict priority destination" (SPD) that manages the queuing latency created by burst loads as described above, improving on the latency based forwarding ESLD policy, which simply relies on a FIFO to manage these queuing burst latencies. The embodiments described can be targeted to improve maximum latency in explicitly admitted/managed traffic such as in Time-Sensitive Networking (TSN)/Deterministic Networking (DetNet) and similar environments.
[00116] In addition to the previously introduced LBF queueing parameters of tqmin and tqmax, the strict priority destination queueing policy introduces an additional local queueing priority, lqprio. Consequently, the SPD LBF queuing policy has three LBF_queuing_params: tqmin, tqmax and lqprio. The SPD LBF queuing policy achieves the following externally observable queuing behaviors:
1) Packets are never de-queued/sent before their tqmin time;
2) Packets are not sent, but discarded, if they cannot be sent at or before their tqmax time; and
3) When at any point in time the outgoing interface (or other further queue) can accept another packet, the packet sent will be the packet that has: a) the smallest lqprio; and, amongst those a) packets, b) the earliest tqmin that also meets points 1 and 2.
Points 1 and 2 are common with the ESLD queueing policy; point 3 is newly introduced for strict priority destination policies and is sketched in the example below.
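The following is a minimal Python sketch of this selection rule, assuming queued is a list of packet objects carrying tqmin, tqmax, and lqprio attributes (hypothetical names); it illustrates the externally observable behavior, not a hardware implementation.

def spd_select(queued, tnow):
    # Point 2: packets past their tqmax are discarded, not sent.
    for pkt in [p for p in queued if tnow > p.tqmax]:
        queued.remove(pkt)
    # Point 1: only packets at or past their tqmin are eligible to send.
    eligible = [p for p in queued if p.tqmin <= tnow]
    if not eligible:
        return None
    # Point 3: smallest lqprio first; earliest tqmin breaks ties.
    pkt = min(eligible, key=lambda p: (p.lqprio, p.tqmin))
    queued.remove(pkt)
    return pkt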
[00117] Actual implementations may not achieve 100% accuracy of these desired external observation points due to factors such as timing inaccuracies or approximating this behavior through price-optimized implementations, but an implementation that approximates this observable external behavior under a variety of traffic loads can be considered to be an implementation of this target external behavior. In one set of embodiments, the destination policies described above can be combined with the following defined components to create the "strict priority destination" (SPD) policy:
1) The LBF destination parameters are the same as in ESLD of LBF: todelay and tohops;
2) The LBF mapping policy to calculate tqmin and tqmax is the same as in ESLD of LBF; and
3) lqprio = tqmax - tqmin.
[00118] The SPD LBF policy can be used not only to process packets with LBF SLO parameters (lmin, lmax, edelay), but also best-effort packets without these parameters. Packets without these parameters can be enqueued into the SPD queue with tqmin = tnow (where tnow is the enqueuing time), tqmax = MAXQUEUE and lqprio = MAXBUDGET, where MAXQUEUE and MAXBUDGET are higher than the maximum supported values for tqmax and lqprio derived from packets with LBF SLO parameters. This assumes that there is a system-wide upper limit on the packets' lmin and lmax parameters: for example, lmin, lmax < 50 ms, hence MAXQUEUE = MAXBUDGET = 50 ms.
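A sketch of this enqueuing rule follows; the attribute names and the reading of MAXQUEUE as an offset from the enqueuing time tnow are assumptions of this sketch, and it reuses the esld_queuing_params() helper sketched earlier.

# Assumed system-wide bound from the text: lmin, lmax < 50 ms.
MAXQUEUE = 50_000   # usec
MAXBUDGET = 50_000  # usec

def spd_enqueue_params(pkt, tnow):
    # Best-effort packets (no LBF SLO header) get the lowest priority.
    if getattr(pkt, "lbf_slo", None) is None:
        return tnow, tnow + MAXQUEUE, MAXBUDGET  # tqmin, tqmax, lqprio
    slo = pkt.lbf_slo
    tqmin, tqmax = esld_queuing_params(tnow, slo.lmin, slo.lmax,
                                       slo.edelay, pkt.todelay, pkt.tohops)
    return tqmin, tqmax, tqmax - tqmin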
[00119] To provide an example of the SPD queuing policy and its benefits, consider its application to the situation of FIG. 11. For this example, all of the links in FIG. 11 are taken to be 100 Mbps and to have a physical propagation latency of 100 μsec. Packets of flow [A] 1199-A and flow [B] 1199-B are given an lmin of 0 μsec and an lmax of 1000 μsec. When flow [A] 1199-A and flow [B] 1199-B packets are enqueued into router RB 1117 and the outgoing interface SPD queue 1119 has not previously incurred any queuing latency, their parameters will be:
[A] packet: edelay = 200 [μsec], lmin = 0, lmax = 1000 [μsec]
    dest params: todelay = 400 [μsec], tohops = 4
    queue params: tqmin = tnow
                  tqmax = tnow + (lmax - edelay - todelay) / tohops = tnow + 100 [μsec]
                  lqprio = 100 [μsec]

[B] packet: edelay = 100 [μsec], lmin = 0, lmax = 1000 [μsec]
    dest params: todelay = 300 [μsec], tohops = 3
    queue params: tqmin = tnow
                  tqmax = tnow + (lmax - edelay - todelay) / tohops = tnow + 200 [μsec]
                  lqprio = 200 [μsec]
As a result, the SPD queuing policy will always dequeue flow [A] 1199-A packets before flow [B] 1199-B packets and ensure that any queuing delay is shifted to flow [B] 1199-B packets, because the lqprio of flow [A] 1199-A is a smaller value (higher priority) than the lqprio of flow [B] 1199-B.
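The arithmetic of this example can be checked with a few lines of Python (the helper name is hypothetical; all times are in μsec):

def tqmax_offset(lmax, edelay, todelay, tohops):
    # Per-hop share of the remaining lmax budget (offset from tnow).
    return (lmax - edelay - todelay) / tohops

assert tqmax_offset(1000, 200, 400, 4) == 100  # flow [A]: lqprio = 100
assert tqmax_offset(1000, 100, 300, 3) == 200  # flow [B]: lqprio = 200
# The smaller lqprio of flow [A] means its packets dequeue first at 1119.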
[00120] If all 100 nodes 1111-0, ..., 1111-99 had traffic flows with SLO parameters similar to flow [B] 1199-B, then even with the furthest destination being a node 1141-0, ..., 1141-99, they would not have an lqprio as low as flow [A] 1199-A and would therefore be dequeued after flow [A] 1199-A. As a result, flow [A] 1199-A would incur a queuing latency of only one packet (in transit) vs. 99 packets on queue 1119.
[00121] FIGs. 12 and 13 are a schematic representation of an embodiment for latency based forwarding of packets using a strict priority destination policy. FIGs. 12 and 13 repeat many of the elements of FIG. 9, and include similarly numbered elements (i.e., 1239 and 939 for the IGP link state database), but now for an embodiment with a strict priority destination policy.
[00122] FIG. 12 corresponds to the upper part of FIG. 9, where the packet 1221 corresponds to the packet 821, the FIB 1233 corresponds to the FIB 833, the IGP link state database 1239 corresponds to (and has the same example values as) IGP link state database 939, and forwarding plane 1231 corresponds to forwarding plane 831, except that now the parameters include the local queueing priority lqprio.

[00123] FIG. 13 presents a strict priority queueing policy for the next hop 1341 that replaces the LBF queue 841 of the lower part of FIG. 9. FIG. 13 now illustrates multiple queues 1361, 1363, ..., 1365, ..., 1367, where the packets in each of the queues are ranked on a rank2 value, corresponding to tqmin, and the different queues are ranked according to a rank1 value, corresponding to lqprio. The packets are dequeued by finding the highest-priority (rank1) packet that is ready to send, with expired packets being discarded.
[00124] As illustrated in the embodiment of FIG. 13, there is a PIFO (1361, 1363, ..., 1365, ..., 1367) for each lqprio value or bin of lqprio values, with the PIFO queues ranked (rank1) by lqprio. In FIG. 13, all packets in PIFO 1361, for example, have the lowest lqprio value, i.e., the highest priority. Within each PIFO, tqmin is the rank function (rank2), so packets get inserted in the order of their tqmin target dequeuing time. The de-queuing function need only examine the top packet in each PIFO. The de-queuing function attempts, at each point in time when it can send a packet, to dequeue the highest priority (lowest lqprio value) head-of-PIFO packet for which tqmin < tnow < tqmax. When any PIFO head has an expired packet (tnow > tqmax), that packet is discarded and the next packet (now the head of that PIFO) is examined.
[00125] FIG. 13 shows an example of this algorithm: 1371 are the head-of-PIFO packets. The de-queuing function first examines 1391, but determines it cannot yet be sent (tnow < tqmin). The dequeue function then examines 1392 and finds it to be expired; it discards 1392. It then examines 1393 and finds that it cannot yet be sent. It then examines 1394 and finds that it can be sent. Finally, 1394 is de-queued and sent.
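The following Python sketch captures this PIFO-head scan, assuming one heap-backed PIFO per lqprio bin held in a dict keyed by lqprio; the class and function names are illustrative, and a hardware PIFO would differ.

import heapq
import itertools

class Pifo:
    # Push-In-First-Out queue: packets are inserted in tqmin order (rank2)
    # and only the head is examined at dequeue time.
    def __init__(self):
        self._heap = []
        self._seq = itertools.count()  # tie-breaker for equal tqmin values
    def push(self, pkt):
        heapq.heappush(self._heap, (pkt.tqmin, next(self._seq), pkt))
    def head(self):
        return self._heap[0][2] if self._heap else None
    def pop(self):
        return heapq.heappop(self._heap)[2]

def spd_dequeue(pifos_by_lqprio, tnow):
    # Scan PIFOs from smallest lqprio (highest priority) upward.
    for lqprio in sorted(pifos_by_lqprio):
        pifo = pifos_by_lqprio[lqprio]
        while pifo.head() is not None:
            pkt = pifo.head()
            if tnow > pkt.tqmax:      # expired head: discard, examine next
                pifo.pop()
                continue
            if pkt.tqmin <= tnow:     # sendable now: dequeue and send
                return pifo.pop()
            break                     # head not yet sendable; next PIFO
    return None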
[00126] FIG. 14 is a flowchart of one embodiment of the operation of latency based forwarding that can include destination policies and also strict priority destination policies. Starting at 1401, a router or other network node receives an LBF packet, such as 821 or 1221, that has both a network header and an LBF header. Based on this information, a FIB 833, 1233 can determine the number of hops and estimates of the fixed transfer times from the database 939, 1239. The node receives the number of hops at 1403 and the estimates of fixed transfer time at 1405, where, although typically both of these pieces of information are received from the FIB 833, 1233 together, they are separated into 1403 and 1405 for purposes of this discussion.
[00127] At 1407, the node can update the accumulated delay that the packet has experienced so far since it left the sender and, at 1409, a minimum delay is determined for the packet. The minimum delay, tqmin, can be determined as described in the embodiments presented above, where, depending on the embodiment, a maximum delay tqmax and a local queue priority lqprio can also be established. Although shown in a particular order in FIG. 14 for purposes of this discussion, many of the steps can be performed in other orders. For example, any of steps 1403, 1405, 1407, and 1409 can be performed in differing orders and even concurrently, depending on the embodiment.
[00128] The node maintains, at 1411, one or more queues as represented at 841, 1341, and 450, for example. Based on the parameters from 1409 (one or more of tqmin, tqmax, and lqprio), a queuing rank is determined for a packet (as illustrated in 841 and, for rank2, in 1341) at 1413. For embodiments with multiple queues for the packets, as illustrated in 1341, a queue is determined for the packet at 1415, where in the example illustrated in FIG. 13 this can be based on rank1. Once the queue and place within the queue are determined, the packet is entered into the determined queue and location at 1417.
[00129] Once entered into a queue, the packet can be transmitted at 1423. For example, in the embodiment illustrated with respect to 841 in FIG. 9, the packet will be forwarded when it gets to the front of the FIFO 949; and in FIG. 13, this will occur when tqmin < tnow < tqmax, as discussed above with respect to 1341. Before forwarding, in some embodiments the packet is checked at 1419 to see whether it has exceeded its maximum latency and, if so, discarded at 1421. In other embodiments, a late packet can be marked as such and still transmitted at 1423.
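Taken together, the enqueuing side of FIG. 14 might look like the following sketch, where node.fib, node.delay_since_arrival, node.queue_for, and the packet attributes are hypothetical accessors standing in for the elements discussed above, and esld_queuing_params() is the helper from the earlier sketch.

def process_lbf_packet(node, pkt, tnow):
    # Steps 1403/1405: the destination lookup returns the remaining hop
    # count and the estimated fixed transfer time (stand-in for the FIB).
    tohops, todelay = node.fib.lookup(pkt.dest)
    # Step 1407: update the accumulated delay carried in the LBF header.
    pkt.edelay += node.delay_since_arrival(pkt, tnow)
    # Steps 1409/1413: derive the queuing parameters and queuing rank.
    pkt.tqmin, pkt.tqmax = esld_queuing_params(tnow, pkt.lmin, pkt.lmax,
                                               pkt.edelay, todelay, tohops)
    pkt.lqprio = pkt.tqmax - pkt.tqmin
    # Steps 1415/1417: pick the queue by lqprio (rank1), insert by tqmin
    # (rank2).
    node.queue_for(pkt.lqprio).push(pkt)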
[00130] The techniques described above allow for congestion (burst) management of lmax (maximum end-to-end latency) by shifting queuing delay proportionally to each flow based on its queuing budget. The queuing budget is then used as the priority parameter in the SPD queuing. The latency based forwarding techniques can also play an important role in addressing concerns about absolute maximum end-to-end latency, Time-Sensitive Networking and Deterministic Networking. As mentioned above, previous approaches have provided no solution that can manage the queuing latency arising from differences between paths (such as a different number of hops, or differences in already encountered queuing latency).
[00131] In the strict priority destination (SPD) policies described in the preceding paragraphs and illustrated by the embodiment of the queueing structure of FIG. 13, the ranking of the ranked queues ("rank1") is based on a local queueing priority lqprio = tqmax - tqmin. In some implementations of SPD, this can result in a packet with a relatively large difference between its tqmax and tqmin values, and hence a large lqprio, being constantly pushed behind higher priority (i.e., smaller lqprio) packets. The result can be that a packet with a large lqprio keeps getting pushed back in the line of packets being forwarded, misses its maximum latency, and is discarded. To mitigate this situation, a "dynamic priority destination" (DPD) policy can be introduced. This situation can be illustrated with respect to FIG. 15.
[00132] FIG. 15 is a schematic representation of how a sequence of packets with higher static priorities can cause a low static priority packet to be displaced from being forwarded. In FIG. 15, under SPD, the priorities of the packets are static. In this example, a sequence of packets Packet 1 1501, Packet 2 1502, Packet 3 1503, and Packet 4 1504 all have a high priority (small lqprio) as their tqmax and tqmin values are close. These packets will always have higher priority than Packet 5 1505, which has a much larger difference between its tqmax and tqmin values. If 1505 were continuously competing directly with such higher priority packets, it would never be sent, but would be discarded after its tqmax. As this illustrates, the general situation is that the lower the priority (the larger the lqprio value) of a packet, the less service it will get.
[00133] To further improve operation, the following paragraphs introduce a destination policy called dynamic priority destination (DPD) that manages queuing latency with the goal of giving strict latency guarantees to packets according to their end-to-end SLO derived tqmin/tqmax. In dynamic priority destination policies, the priority of a packet grows from 0 (lowest priority) at the beginning of its local sending window (tqmin) to 1 (highest priority) at its latest local sending time (tqmax). This results in dequeuing that is very dynamic and fair over time, as illustrated in FIG. 16.
[00134] FIG. 16 illustrates the use of a latency based forwarding embodiment using dynamic priority destination to more "fairly" forward packets with different static priorities. In DPD, the priority of a packet to be dequeued is dynamically calculated based on the percentage of time into its dequeuing window [tqmin ... tqmax]. As shown in FIG. 16, each packet's dequeuing priority starts at 0 at tqmin and ends at 1 at tqmax. At top, FIG. 16 again presents the sequence of packets Packet 1 1501, Packet 2 1502, Packet 3 1503, and Packet 4 1504, but now also shows their dynamic priorities dprio (respectively 1601, 1602, 1603, 1604) rising from 0 at each packet's tqmin to 1 at its tqmax. At the center of FIG. 16 is Packet 5 1505, whose dynamic priority 1605 rises from 0 at its tqmin to 1 at its tqmax. At bottom, FIG. 16 combines the upper figures to illustrate the dynamic priority 1605 of Packet 5 relative to the dynamic priorities of Packets 1-4. As illustrated along the lowest part of FIG. 16, at a number of times during the interval between the tqmin and tqmax of Packet 5, Packet 5 will have the highest dynamic priority. As a result, when comparing which packet at any point in time would be preferred to be sent, Packet 5 1505 now has a fair chance over time to win over Packet 1 1501, Packet 2 1502, Packet 3 1503, and Packet 4 1504, as shown at the bottom of FIG. 16.
[00135] Considering the LBF forwarding parameters for DPD, these are:
LBF_dest_parameters: tohops, todelay
which can be unchanged from the LBF ESLD and SPD policies, and
LBF_queuing_params: tqmin, tqmax
where the tqmin and tqmax calculations can be unchanged from the latency based ESLD policy described further above. As with ESLD, and unlike the strict priority destination policy, the dynamic priority destination policy embodiment described here has no lqprio parameter, but rather uses an internally handled value in its dequeuing policy.
[00136] An embodiment for a DPD dequeuing policy can include:
1) Packets are never de-queued/sent before their tqmin time;
2) Packets are never sent, but discarded if they cannot be sent at or before their tqmax time; and
3) When at any point in time the outgoing interface (or other further queue) can accept another packet, the packet that will be sent is the packet that has:
a) The largest priority dynamically calculated at this point in time (tnow):
dprio = (tnow - tqmin) / (tqmax - tqmin)
b) Amongst those a) packets, the one with the earliest tqmin that meets points 1 and 2.
Point 3 allows latency to be more fairly managed, particularly when traffic on the network is less bursty; a sketch of this dequeuing rule is given below.
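A minimal Python sketch of this dequeuing rule follows, reusing the Pifo class from the SPD sketch and assuming the PIFOs are held in a dict keyed by eprio bin and that tqmax > tqmin for every packet.

def dpd_dequeue(pifos_by_eprio, tnow):
    best_pkt, best_key, best_pifo = None, None, None
    for pifo in pifos_by_eprio.values():
        # Point 2: expired heads are discarded before being considered.
        while pifo.head() is not None and tnow > pifo.head().tqmax:
            pifo.pop()
        pkt = pifo.head()
        # Point 1: skip heads that have not yet reached their tqmin.
        if pkt is None or tnow < pkt.tqmin:
            continue
        # Point 3a: dynamic priority at this instant.
        dprio = (tnow - pkt.tqmin) / (pkt.tqmax - pkt.tqmin)
        # Point 3b: highest dprio wins; earliest tqmin breaks ties.
        key = (-dprio, pkt.tqmin)
        if best_key is None or key < best_key:
            best_pkt, best_key, best_pifo = pkt, key, pifo
    return best_pifo.pop() if best_pkt is not None else None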
[00137] In the embodiment of FIG. 16, and as illustrated in the equation of 3a) above, the dynamic priority dprio increases linearly from dprio = 0 to dprio = 1 as tnow goes from tqmin to tqmax, but other functional relationships dprio = dprio(tnow, tqmin, tqmax) can be used, along with the incorporation of additional parameters into the functional form of dprio. For example, the expression shown for dprio could be raised to a power, so that it initially rises more slowly for a power greater than 1, or initially rises more quickly for a power less than 1. In other embodiments, a step function could be used in which dprio is 0 up to some value intermediate between tqmin and tqmax, after which it could go to 1 or begin ramping up to 1.
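For illustration, such variants might be expressed as follows, where the exponent k and the step fraction frac are assumed, illustrative parameters:

def dprio_linear(tnow, tqmin, tqmax):
    # The linear ramp of the embodiment of FIG. 16.
    return (tnow - tqmin) / (tqmax - tqmin)

def dprio_power(tnow, tqmin, tqmax, k=2.0):
    # Raising the ramp to a power k: k > 1 rises more slowly at first,
    # k < 1 more quickly (k = 2.0 is an illustrative default).
    return dprio_linear(tnow, tqmin, tqmax) ** k

def dprio_step(tnow, tqmin, tqmax, frac=0.5):
    # Step variant: priority stays 0 until a fraction frac of the window
    # has elapsed, then jumps to 1 (frac is an assumed parameter).
    return 1.0 if dprio_linear(tnow, tqmin, tqmax) >= frac else 0.0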
[00138] FIG. 17 is a schematic representation of the queueing/dequeuing machinery of a dynamic priority destination queuing policy for the next hop in a network. FIG. 17 repeats many of the elements of FIG. 13, which are similarly numbered. More specifically, FIG. 17 illustrates multiple ranked queues 1761, 1763, ..., 1765, ..., 1767, where the packets in each of the queues are ranked based on a rank2 value, corresponding to tqmin, and the queues themselves are ranked according to a rank1 value. In the DPD queueing policy, rank1 now corresponds to the static enqueueing priority eprio = (lmax - lmin). Packets are dequeued by finding, among the queue heads, the packet with the highest dynamic priority that is ready to send, with expired packets being discarded.
[00139] As illustrated in the embodiment of FIG. 17, there is a PIFO (1761, 1763, ..., 1765, ..., 1767) for each eprio value (or bin of eprio values), with the PIFO queues ranked (rank1) by eprio. In FIG. 17, all packets in PIFO 1761, for example, have the lowest eprio value, i.e., the highest priority. Within each PIFO, tqmin is the rank function (rank2), so a packet 1721 gets inserted in the order of its tqmin target dequeuing time. The de-queuing function need only examine the top packet in each PIFO.
[00140] The DPD example of FIG. 17 is like the SPD example of FIG. 13, except that the DPD queuing machinery is based on the static enqueueing priority eprio, and lqprio is absent as a parameter in LBF_queuing_params. The DPD queuing machinery calculates lqprio internally, which can be the same value as lqprio in the SPD policy, except that in the case of DPD it is an internal artefact of the machinery implementing DPD, as there is no goal to expose it as a modifiable policy parameter as in SPD. The reason lqprio is used internally is that the actual dynamic dequeuing priority dprio changes over time, so packets can be placed into bins or buckets such that, during dequeuing, the machinery needs to look only at those packets that are candidates to be sent at the PIFO heads 1771. For all packets with the same dprio, the machinery need only look at the one that can be sent the earliest. By using PIFOs with fixed eprio (or a fixed range of eprio values), all packets in a single PIFO will have the same eprio and hence, at any point in time, the same dprio. Consequently, the PIFO will make the earliest sendable packet show up at the head of the PIFO.
[00141] Relative to the SPD dequeuing illustrated with respect to FIG. 13, the dequeuing part of the DPD machinery illustrated in FIG. 17 is changed insofar as it does not start from the highest priority PIFO (as in SPD); instead, dequeuing needs to look at all PIFO heads 1771 and find the one with the highest dprio. In the example of FIG. 17, 1791, 1792, and 1793 are acted upon the same as 1391, 1392, and 1393 in the SPD policy of FIG. 13, as they are either too early to send (1791, 1793) or too late to send (1792).
[00142] In FIG. 17, packets 1794 and 1795 will be looked at during dequeuing because these packets can be sent; but because the dynamic priority changes with the current dequeuing time, either 1794 or 1795 could be the one dequeued (as described with respect to FIG. 16), depending on their current dprio values.
[00143] The SPD and DPD policies are complementary. As noted, the arrangements of FIG. 13 and FIG. 17 share a number of components. This can allow a network device to implement both SPD and DPD for different traffic classes with minimal overhead to support both.
[00144] FIG. 18 is a flowchart of one embodiment of the operation of latency based forwarding that can include dynamic priority destination policies. Starting at 1801, a router or other network node receives LBF packets, such as 821 or 1221, that have both a network header and an LBF header. Based on this information, a FIB 833, 1233 can determine the number of hops and estimate the fixed transfer times from the database 939, 1239. The node receives the number of hops for each of the packets at 1803 and the estimates of fixed transfer times for each of the packets at 1805. (Although typically both of these pieces of information for a packet are received from the FIB 833, 1233 together, they are separated into 1803 and 1805 for purposes of this discussion.)
[00145] At 1807, the node can update the accumulated delay that each of the packets has experienced so far since it left the sender and, at 1809, a minimum delay is determined for each of the packets. The minimum delay, tqmin, can be determined as described in the embodiments presented above, where, depending on the embodiment, a maximum delay tqmax can also be established. Although shown in a particular order in FIG. 18 for purposes of this discussion, 1803, 1805, 1807, and 1809 can be performed in differing orders and even concurrently, depending on the embodiment.
[00146] At 1811 the node maintains multiple ranked queues as represented at 1741, where the queues themselves are ranked based on the static enqueueing priority eprio (rank1) and the packets within each of the queues are ranked based on tqmin (rank2). To determine the queue for each of the packets, at 1813 the packets' static enqueueing priorities (eprio) are determined. Based on the parameters (tqmin) from 1809, a queuing rank is determined for each of the packets (as illustrated by rank2 in 1741) at 1815. A queue is determined for each of the packets based on the parameters (eprio, or rank1) at 1817. Once the queue and place within the queue are determined for the packets, each of the packets is entered into the determined queue and location at 1819.
[00147] Once entered into the queues, an initial selection of the packets from the ranked queues is made at 1821 from the packets at the heads of the queues with tqmin < tnow < tqmax. For the packets of the initial selection, their dynamic priorities (dprio) are determined at 1823. These packets can then be sequentially transmitted, based on their dynamic priorities at the current time, at 1829. Before forwarding, in some embodiments each of the packets is checked at 1825 to see whether it has exceeded its maximum latency and, if so, discarded at 1827. In other embodiments, a late packet can be marked as such and still transmitted at 1829.
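The transmit side of FIG. 18 might be sketched as follows, where node.ranked_queues, node.discard, and node.transmit are hypothetical stand-ins for the elements discussed above, and dprio_linear() is the helper sketched earlier; a real implementation would interleave these checks with the passage of time rather than running them in a single pass.

def dpd_transmit_step(node, tnow):
    heads = [q.head() for q in node.ranked_queues if q.head() is not None]
    # Steps 1825/1827: late heads are discarded (or, in other
    # embodiments, marked late and still sent).
    for pkt in heads:
        if tnow > pkt.tqmax:
            node.discard(pkt)
    # Step 1821: initial selection of heads inside their sending window.
    selection = [p for p in heads if p.tqmin <= tnow <= p.tqmax]
    # Steps 1823/1829: compute dprio and transmit in decreasing order.
    selection.sort(key=lambda p: dprio_linear(tnow, p.tqmin, p.tqmax),
                   reverse=True)
    for pkt in selection:
        node.transmit(pkt)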
[00148] Certain embodiments of the present technology described herein can be implemented using hardware, software, or a combination of both hardware and software. The software used is stored on one or more of the processor readable storage devices described above to program one or more of the processors to perform the functions described herein. The processor readable storage devices can include computer readable media such as volatile and non-volatile media, and removable and non-removable media. By way of example, and not limitation, computer readable media may comprise computer readable storage media and communication media. Computer readable storage media may be implemented in any method or technology for storage of information such as computer readable instructions, data structures, program modules or other data. Examples of computer readable storage media include RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital versatile disks (DVD) or other optical disk storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to store the desired information and which can be accessed by a computer. A computer readable medium or media does not include propagated, modulated, or transitory signals.
[00149] Communication media typically embodies computer readable instructions, data structures, program modules or other data in a propagated, modulated or transitory data signal such as a carrier wave or other transport mechanism and includes any information delivery media. The term "modulated data signal" means a signal that has one or more of its characteristics set or changed in such a manner as to encode information in the signal. By way of example, and not limitation, communication media includes wired media such as a wired network or direct-wired connection, and wireless media such as RF and other wireless media. Combinations of any of the above are also included within the scope of computer readable media.
[00150] In alternative embodiments, some or all of the software can be replaced by dedicated hardware logic components. For example, and without limitation, illustrative types of hardware logic components that can be used include Field-programmable Gate Arrays (FPGAs), Application-specific Integrated Circuits (ASICs), Application-specific Standard Products (ASSPs), System-on-a-chip systems (SOCs), Complex Programmable Logic Devices (CPLDs), special purpose computers, etc. In one embodiment, software (stored on a storage device) implementing one or more embodiments is used to program one or more processors. The one or more processors can be in communication with one or more computer readable media/storage devices, peripherals and/or communication interfaces.
[00151] It is understood that the present subject matter may be embodied in many different forms and should not be construed as being limited to the embodiments set forth herein. Rather, these embodiments are provided so that this subject matter will be thorough and complete and will fully convey the disclosure to those skilled in the art. Indeed, the subject matter is intended to cover alternatives, modifications and equivalents of these embodiments, which are included within the scope and spirit of the subject matter as defined by the appended claims. Furthermore, in the following detailed description of the present subject matter, numerous specific details are set forth in order to provide a thorough understanding of the present subject matter. However, it will be clear to those of ordinary skill in the art that the present subject matter may be practiced without such specific details.
[00152] Aspects of the present disclosure are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatuses (systems) and computer program products according to embodiments of the disclosure. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable instruction execution apparatus, create a mechanism for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks.
[00153] The description of the present disclosure has been presented for purposes of illustration and description, but is not intended to be exhaustive or limited to the disclosure in the form disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the disclosure. The aspects of the disclosure herein were chosen and described in order to best explain the principles of the disclosure and the practical application, and to enable others of ordinary skill in the art to understand the disclosure with various modifications as are suited to the particular use contemplated.
[00154] The disclosure has been described in conjunction with various embodiments. However, other variations and modifications to the disclosed embodiments can be understood and effected from a study of the drawings, the disclosure, and the appended claims, and such variations and modifications are to be interpreted as being encompassed by the appended claims. In the claims, the word "comprising" does not exclude other elements or steps, and the indefinite article "a" or "an" does not exclude a plurality.
[00155] For purposes of this document, it should be noted that the dimensions of the various features depicted in the figures may not necessarily be drawn to scale.
[00156] For purposes of this document, reference in the specification to "an embodiment," "one embodiment," "some embodiments," or "another embodiment" may be used to describe different embodiments or the same embodiment.

[00157] For purposes of this document, a connection may be a direct connection or an indirect connection (e.g., via one or more other parts). In some cases, when an element is referred to as being connected or coupled to another element, the element may be directly connected to the other element or indirectly connected to the other element via intervening elements. When an element is referred to as being directly connected to another element, then there are no intervening elements between the element and the other element. Two devices are "in communication" if they are directly or indirectly connected so that they can communicate electronic signals between them.
[00158] For purposes of this document, the term "based on" may be read as "based at least in part on."
[00159] For purposes of this document, without additional context, use of numerical terms such as a "first" object, a "second" object, and a "third" object may not imply an ordering of objects, but may instead be used for identification purposes to identify different objects.
[00160] The foregoing detailed description has been presented for purposes of illustration and description. It is not intended to be exhaustive or to limit the subject matter claimed herein to the precise form(s) disclosed. Many modifications and variations are possible in light of the above teachings. The described embodiments were chosen in order to best explain the principles of the disclosed technology and its practical application to thereby enable others skilled in the art to best utilize the technology in various embodiments and with various modifications as are suited to the particular use contemplated. It is intended that the scope be defined by the claims appended hereto.
[00161] Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are disclosed as example forms of implementing the claims.

CLAIMS

What is claimed is:
1. A method of transferring packets over a network, comprising:
receiving, at a node, a plurality of packets each including a network header, indicating a network destination for the packet, and a forwarding header, the forwarding header indicating an accumulated delay experienced by the packet since being transmitted by a network sender and a maximum latency for the transfer of the packet from the network sender to the network destination;
updating, by the node, for each of the packets the accumulated delay experienced by the packet that is indicated by the forwarding header;
determining, by the node, for each of the packets a maximum delay at the node for the packet based on the maximum latency and the updated indicated accumulated delay experienced by the packet;
determining, by the node, for each of the packets a dynamic priority for the packet based at least on the packet’s maximum delay and an amount of time since the packet was received at the node; and
sequentially transmitting the packets by the node in an order based on the dynamic priorities of the packets.
2. The method of claim 1, further comprising:
determining, by the node, for each of the packets a static priority for the packet based at least on the packet’s maximum delay and the packet’s updated accumulated delay;
determining, by the node, for each of the packets a queueing rank for the packet based at least on the maximum delay;
maintaining, by the node, a plurality of ranked queues of packets for transmission from the node, the plurality of ranked queues being ranked based upon the static priority of packets enqueued therein; and
entering, by the node, each of the packets into one of the ranked queues by: determining, based at least on the packet’s static priority, into which of the ranked queues to enter the packet; and
entering the packet into the determined one of the ranked queues based on the determined queueing rank for the packet, and
wherein sequentially transmitting the packets by the node in the order based on the dynamic priorities of the packets includes:
performing an initial selection of packets from the ranked queues; and ordering packets from the initial selection based upon the dynamic priorities of the packets from the initial selection.
3. The method of claim 2, wherein, for each of the packets, the forwarding header further indicates a minimum latency for the transfer of the packet from the network sender to the network destination, the method further comprising, for each of the packets:
determining, by the node, a minimum delay at the node for the packet based on the minimum latency and the updated indicated accumulated delay experienced by the packet;
further determining the dynamic priority from the minimum delay;
further determining the static priority from the minimum delay; and
further determining the queueing rank from the minimum delay.
4. The method of any of claims 1-3, wherein, for each of the packets, the forwarding header further indicates a minimum latency for the transfer of the packet from the network sender to the network destination, the method further comprising, for each of the packets:
determining, by the node, a minimum delay at the node for the packet based on the minimum latency and the updated indicated accumulated delay experienced by the packet; and
further determining the dynamic priority from the minimum delay.
5. The method of claim 4, further comprising:
determining, for each of the packets, whether the minimum latency exceeds the updated indicated accumulated delay experienced by the packet; and
in response to the packet’s minimum latency exceeding the updated indicated accumulated delay experienced by the packet, discarding the packet.
6. The method of any of claims 1-5, further comprising:
receiving, at the node, a number of hops from the node to the network destination; and
wherein determining the maximum delay for each of the packets is further based on the number of hops.
7. The method of any of claims 1-6, further comprising:
receiving, at the node, an estimated amount of time for fixed transfer times between the node and the network destination; and
wherein determining the maximum delay for each of the packets is further based on the estimated amount of time for fixed transfer times between the node and the network destination.
8. A node for transferring packets over a network, comprising:
a network interface configured to receive and forward a plurality of packets over the network;
one or more queues configured to store packets to forward over the network; and
one or more processors coupled to the one or more queues and the network interface, the one or more processors configured to:
receive, from the network interface, a plurality of packets each including a network header, indicating a network destination for the packet, and a forwarding header, the forwarding header indicating an accumulated delay experienced by the packet since being transmitted by a network sender and a maximum latency for the transfer of the packet from the network sender to the network destination;
for each packet, update the accumulated delay experienced by the packet that is indicated by the forwarding header;
enter each of the packets with the updated indicated accumulated delay experienced by the packet into one of the queues;
for each packet, determine a maximum delay at the node for the packet based on the maximum latency and the updated indicated accumulated delay experienced by the packet;
for each packet, determine a dynamic priority for the packet based at least on the packet’s maximum delay and an amount of time since the packet was received at the node; and
sequentially transmit the packets over the interface from the one or more queues in an order based on the dynamic priorities of the packets.
9. The node of claim 8, wherein the one or more processors are further configured to:
for each packet, determine a static priority from the packet’s maximum delay and the packet’s updated accumulated delay;
rank the plurality of queues based upon the static priority of packets enqueued therein;
for each packet, determine a queueing rank from the maximum delay; and enter each of the packets into one of the ranked queues by:
determining, based at least on the packet’s static priority, into which of the ranked queues to enter the packet; and
entering the packet into the determined one of the ranked queues based on the determined queueing rank for the packet, and wherein sequentially transmitting the packets by the node in the order based on the dynamic priorities of the packets includes:
performing an initial selection of packets from the ranked queues; and
ordering packets from the initial selection based upon the dynamic priorities of the packets from the initial selection.
10. The node of claim 9, wherein, for each of the packets, the forwarding header further indicates a minimum latency for the transfer of the packet from the network sender to the network destination, and wherein, for each of the packets, the one or more processors are further configured to:
determine a minimum delay for the packet based on the minimum latency and the updated indicated accumulated delay experienced by the packet; further determine the dynamic priority from the minimum delay; further determine the static priority from the minimum delay; and further determine the queueing rank from the minimum delay.
11. The node of any of claims 8-10, wherein, for each of the packets, the forwarding header further indicates a minimum latency for the transfer of the packet from the network sender to the network destination, and wherein, for each of the packets, the one or more processors are further configured to:
determine a minimum delay for the packet based on the minimum latency and the updated indicated accumulated delay experienced by the packet; and
further determine the dynamic priority from the minimum delay.
12. The node of claim 11, wherein, for each of the packets, the one or more processors are further configured to:
determine whether the minimum latency exceeds the updated indicated accumulated delay experienced by the packet; and
discard the packet in response to the packet’s minimum latency exceeding the updated indicated accumulated delay experienced by the packet.
13. The node of any of claims 8-12, wherein the one or more processors are further configured to:
receive a number of hops from the node to the network destination, and wherein the maximum delay for each of the packets is further determined based on the number of hops.
14. The node of any of claims 8-13, wherein the one or more processors are further configured to:
receive an estimated amount of time for fixed transfer times between the node and the network destination, and
wherein the maximum delay for each of the packets is further determined based on the estimated amount of time for fixed transfer times between the node and the network destination.
15. The node of any of claims 8-14, wherein the node is a router.
16. The node of any of claims 8-15, wherein the node is a networking switch.
17. The node of any of claims 8-16, wherein the node is a server.
18. A system for transmitting packets from a sending network device to a receiving network device, comprising:
one or more nodes connectable in series to transfer a plurality of packets from the sending network device to the receiving network device, each of the nodes comprising:
a network interface configured to receive and forward the packets over the network, each of the packets including a network header, indicating the receiving network device, and a forwarding header, indicating an accumulated delay experienced by the packet since being transmitted by the sending network device and a maximum latency for the transfer of the packet from the sending network device to the receiving network device;
one or more queues configured to store packets to forward over the network; and
one or more processors coupled to the one or more queues and the network interface, the one or more processors configured to:
receive the plurality of packets from the network interface;
for each packet, update the accumulated delay experienced by the packet that is indicated by the forwarding header;
enter each of the packets with the updated indicated accumulated delay experienced by the packet into one of the queues;
for each packet, determine a maximum delay at the node for the packet based on the maximum latency and the updated indicated accumulated delay experienced by the packet;
for each packet, determine a dynamic priority for the packet based at least on the packet’s maximum delay and an amount of time since the packet was received at the node; and
sequentially transmit the packets over the interface from the one or more queues in an order based on the dynamic priorities of the packets.
19. The system of claim 18, wherein, for each of the nodes, the one or more processors are further configured to:
for each packet, determine a static priority from the packet’s maximum delay and the packet’s updated accumulated delay;
rank the plurality of queues based upon the static priority of packets enqueued therein;
for each packet, determine a queueing rank from the maximum delay; and enter each of the packets into one of the ranked queues by:
determining, based at least on the packet’s static priority, into which of the ranked queues to enter the packet; and
entering the packet into the determined one of the ranked queues based on the determined queueing rank for the packet, and
wherein sequentially transmitting the packets by the node in the order based on the dynamic priorities of the packets includes:
performing an initial selection of packets from the ranked queues; and
ordering packets from the initial selection based upon the dynamic priorities of the packets from the initial selection.
20. The system of claim 19, wherein, for each of the packets, the forwarding header further indicates a minimum latency for the transfer of the packet from the sending network device to the receiving network device, and wherein, for each of the packets, the one or more processors of each of the nodes are further configured to:
determine a minimum delay for the packet based on the minimum latency and the updated indicated accumulated delay experienced by the packet;
further determine the dynamic priority from the minimum delay; further determine the static priority from the minimum delay; and further determine the queueing rank from the minimum delay.
21. The system of any of claims 18-20, wherein, for each of the packets, the forwarding header further indicates a minimum latency for the transfer of the packet from the sending network device to the receiving network device, and wherein, for each of the packets, the one or more processors of each of the nodes are further configured to:
determine a minimum delay for the packet based on the minimum latency and the updated indicated accumulated delay experienced by the packet; and
further determine the dynamic priority from the minimum delay.
22. The system of claim 21, wherein, for each of the packets, the one or more processors of each of the nodes are further configured to:
determine whether the minimum latency exceeds the updated indicated accumulated delay experienced by the packet; and discard the packet in response to the packet’s minimum latency exceeding the updated indicated accumulated delay experienced by the packet.
23. The system of any of claims 18-22, wherein the one or more processors for each of the nodes are further configured to:
receive a number of hops from the node to the receiving network device, and
wherein the maximum delay for each of the packets is further determined based on the number of hops.
24. The system of any of claims 18-23, wherein the one or more processors of each of the nodes are further configured to:
receive an estimated amount of time for fixed transfer times between the node and the receiving network device, and wherein the maximum delay for each of the packets is further determined based on the estimated amount of time for fixed transfer times between the node and the receiving network device.
25. The system of any of claims 18-24, wherein one or more of the nodes are routers.
26. The system of any of claims 18-25, wherein one or more of the nodes are switches.
27. The system of any of claims 18-26, wherein one or more of the nodes are servers.