WO2020191013A1 - Latency based forwarding of packets for service-level objectives (SLO) with quantified delay ranges - Google Patents
Latency based forwarding of packets for service-level objectives (SLO) with quantified delay ranges
- Publication number
- WO2020191013A1 PCT/US2020/023288 US2020023288W
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- packet
- node
- network
- delay
- latency
- Prior art date
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L47/00—Traffic control in data switching networks
- H04L47/50—Queue scheduling
- H04L47/56—Queue scheduling implementing delay-aware scheduling
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L45/00—Routing or path finding of packets in data switching networks
- H04L45/12—Shortest path evaluation
- H04L45/121—Shortest path evaluation by minimising delays
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L47/00—Traffic control in data switching networks
- H04L47/10—Flow control; Congestion control
- H04L47/18—End to end
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L47/00—Traffic control in data switching networks
- H04L47/10—Flow control; Congestion control
- H04L47/24—Traffic characterised by specific attributes, e.g. priority or QoS
- H04L47/2416—Real-time traffic
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L47/00—Traffic control in data switching networks
- H04L47/10—Flow control; Congestion control
- H04L47/28—Flow control; Congestion control in relation to timing considerations
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L47/00—Traffic control in data switching networks
- H04L47/10—Flow control; Congestion control
- H04L47/28—Flow control; Congestion control in relation to timing considerations
- H04L47/283—Flow control; Congestion control in relation to timing considerations in response to processing delays, e.g. caused by jitter or round trip time [RTT]
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L47/00—Traffic control in data switching networks
- H04L47/50—Queue scheduling
- H04L47/56—Queue scheduling implementing delay-aware scheduling
- H04L47/564—Attaching a deadline to packets, e.g. earliest due date first
Definitions
- the disclosure generally relates to communication networks, and more particularly, to the transmission of packets over networks.
- Network packets are formatted units of data carried by packet-mode computer networks.
- High-precision networks demand high-precision service-level guarantees when delivering packets from a sending node on the network, such as a server, router or switch, to a receiving node on the network.
- networks are structured to deliver packets from the sending node to the receiving node as quickly as possible using Quality of Service (QoS) techniques such as prioritization and admission control; however, there are circumstances where this is not the most effective technique for the transfer of packets.
- QoS Quality of Service
- a node for transferring packets over a network includes a network interface configured to receive and forward a plurality of packets over the network and one or more processors coupled to the network interface.
- the one or more processors are configured to: receive, from the network interface, a packet, the packet including a network header, indicating a network destination, and forwarding metadata, the forwarding metadata indicating an accumulated delay metadata and a minimum latency for the transfer of the packet from the network sender to the network destination; determine a minimum delay at the node for the packet based on the minimum latency and the accumulated delay metadata; and forward the packet at a time based on the minimum delay.
- the node further includes one or more ranked queues configured to store packets to forward over the network.
- the one or more processors are coupled to the one or more ranked queues and are further configured to: determine a queueing rank for the packet from the minimum delay; and enter the packet into one of the ranked queues based on the determined queueing rank for the packet.
- the accumulated delay metadata includes an accumulated delay experienced by the packet since being transmitted by a network sender and the one or more processors are further configured to: update the indicated accumulated delay experienced by the packet; and determine a minimum delay at the node for the packet based on the minimum latency and the updated indicated accumulated delay experienced by the packet.
- the forwarding metadata is a forwarding header.
- the forwarding header further indicates a maximum latency for the transfer of the packet from the network sender to the network destination.
- the one or more processors are further configured to: determine a maximum delay at the node for the packet based on the maximum latency and the updated indicated accumulated delay experienced by the packet; and forward the packet at a time based on the minimum delay and the maximum delay.
- the node further includes one or more ranked queues configured to store packets to forward over the network.
- the one or more processors are coupled to the one or more ranked queues and are further configured to: determine a queueing rank for the packet from the minimum delay and the maximum delay; and enter the packet into one of the ranked queues based on the determined queueing rank for the packet.
- the one or more processors are further configured to: access a number of hops from the node to the network destination; and to further determine the minimum delay based on the number of hops.
- the one or more processors are further configured to: access an estimated amount of time for fixed transfer times between the node and the network destination; and to further determine the minimum delay based on the estimated amount of time for fixed transfer times between the node and the network destination.
- the forwarding metadata further indicates a maximum latency for the transfer of the packet from the network sender to the network destination.
- the one or more processors are further configured to discard the packet if the updated indicated accumulated delay experienced by the packet exceeds the maximum latency.
- the forwarding metadata further indicates a maximum latency for the transfer of the packet from the network sender to the network destination.
- the one or more processors are further configured to discard the packet in response to determining that the packet will exceed the maximum latency before reaching the network destination.
- the accumulated delay metadata includes a timestamp.
- the node is a router.
- the node is a server.
- the node is a network switch.
- a method of transferring packets over a network includes: receiving, at a node, a packet including a network header, indicating a network destination, and forwarding metadata including an accumulated delay metadata and a minimum latency for the transfer of the packet from the network sender to the network destination; determining, by the node, a minimum delay at the node for the packet based on the minimum latency and the accumulated delay metadata; and forwarding the packet at a time based on the minimum delay.
- the method also includes: maintaining by the node of one or more ranked queues of packets for forwarding from the node. Forwarding the packet at a time based on the minimum delay includes: determining, by the node, a queueing rank for the packet from the minimum delay; and entering the packet into one of the ranked queues based on the determined queueing rank for the packet.
- the method also includes: accessing, at the node, a number of hops from the node to the network destination. Determining the minimum delay at the node for the packet is further based on the number of hops.
- the method also includes: accessing, at the node, an estimated amount of time for fixed transfer times between the node and the network destination. Determining the minimum delay at the node for the packet is further based on the estimated amount of time for fixed transfer times between the node and the network destination.
- the forwarding metadata is a forwarding header.
- the accumulated delay metadata includes an accumulated delay experienced by the packet since being transmitted by a network sender, and the method further includes updating, by the node, the indicated accumulated delay experienced by the packet, wherein determining the minimum delay at the node for the packet is based on the minimum latency and the accumulated delay metadata.
- the forwarding header further indicates a maximum latency for the transfer of the packet from the network sender to the network destination, and the method further comprises: discarding the packet if the updated indicated accumulated delay experienced by the packet exceeds the maximum latency.
- the forwarding header further indicates a maximum latency for the transfer of the packet from the network sender to the network destination, and the method further comprises: discarding the packet in response to determining that the packet will exceed the maximum latency before reaching the network destination.
- the forwarding header further indicates a maximum latency for the transfer of the packet from the network sender to the network destination. The method also includes: determining, by the node, a maximum delay at the node for the packet based on the maximum latency and the updated indicated accumulated delay experienced by the packet, and forwarding the packet at a time based on the minimum delay and the maximum delay.
- the method also includes: maintaining by the node of one or more ranked queues of packets for forwarding from the node. Forwarding the packet at a time based on the minimum delay includes: determining, by the node, a queueing rank for the packet from the minimum delay and the maximum delay; and entering the packet into one of the ranked queues based on the determined queueing rank for the packet.
- a system for transmitting packets from a sending network device to a receiving network device includes one or more nodes connectable in series to transfer a packet from the sending network device to the receiving network device.
- Each of the nodes includes: a network interface configured to receive and forward the packet over the network, the packet including a network header, indicating the receiving network device, and a forwarding header, indicating an accumulated delay experienced by the packet since being transmitted by the sending network device and a minimum latency for the transfer of the packet from the sending network device to the receiving network device; and one or more processors coupled to the one or more ranked queues and the network interface.
- the one or more processors are configured to: receive the packet from the network interface; update the indicated accumulated delay experienced by the packet; determine a minimum delay at the node for the packet based on the minimum latency and the updated indicated accumulated delay experienced by the packet; and forward the packet at a time based on the minimum delay.
- each of the one or more nodes further comprises: one or more ranked queues configured to store packets to forward over the network.
- the one or more processors are coupled to the one or more ranked queues and are further configured to: determine a queueing rank for the packet from the minimum delay; and enter the packet into one of the ranked queues based on the determined queueing rank for the packet.
- the one or more processors are further configured to: access a number of hops between the node to the receiving network device; and to further determine the minimum delay based on the number of hops.
- the one or more processors are further configured to: access an estimated amount of time for fixed transfer times between the node to the receiving network device; and to further determine the minimum delay based on the estimated amount of time for fixed transfer times between the node and the receiving network device.
- the forwarding header further indicates a maximum latency for the transfer of the packet from the sending network device to the receiving network device.
- the one or more processors are further configured to: determine a maximum delay at the node for the packet based on the maximum latency and the updated indicated accumulated delay experienced by the packet; and forward the packet at a time based on the minimum delay and the maximum delay.
- each of the one or more nodes further comprises one or more ranked queues configured to store packets to forward over the network.
- the one or more processors are coupled to the one or more ranked queues and are further configured to: determine a queueing rank for the packet from the minimum delay and the maximum delay; and enter the packet into one of the ranked queues based on the determined queueing rank for the packet.
- the forwarding header further indicates a maximum latency for the transfer of the packet from the sending network device to the receiving network device.
- the one or more processors are further configured to discard the packet if the updated indicated accumulated delay experienced by the packet exceeds the maximum latency.
- one or more of the nodes is a router.
- one or more of the nodes is a networking switch.
- one or more of the nodes is a server.
- FIGs. 1A-1E illustrate the concept of latency-based forwarding of packets for service level objectives (SLO).
- FIG. 2 illustrates an exemplary communication system for communicating data.
- FIG. 3 illustrates an exemplary network of a series of nodes, such as routers, which may be included in one of the networks shown in FIG. 2.
- FIG. 4 is a flowchart for one embodiment of the latency based forwarding of packets as illustrated in the example of FIG. 3.
- FIG. 5 is a schematic diagram illustrating exemplary details of a network device, or node, such as shown in the network of FIG. 3.
- FIG. 6 provides an example of a network in which latency based forwarding (LBF) of packets can be implemented.
- LBF latency based forwarding
- FIG. 7 is a high level overview for an embodiment of end-to-end latency based forwarding.
- FIG. 8 considers the node behavior for a pair of nodes from FIG. 7 in more detail.
- High-precision networks demand high-precision service-level guarantees that can be characterized through a set of Service Level Objectives (SLOs), which are performance goals for a service under certain well-defined constraints.
- SLOs Service Level Objectives
- a delay, or latency, based SLO can indicate a specific end-to-end latency, given a certain not-to-exceed packet rate or bandwidth.
- FIG. 1A illustrates the concept of latency-based forwarding of packets over a network to meet a SLO, with several examples illustrated in FIGs. 1B-1E.
- FIG. 1A illustrates a range of delays for the delivery of a packet over a network from a sending network device to a receiving network device.
- the SLO indicates a lower bound, or SLO lb, that is the earliest a packet is to reach its destination; an upper bound, or SLO ub, that is the latest a packet is to reach its destination; or a target latency window defined by both an SLO lb and an SLO ub. Together, these define an SLO range for the delivery of a packet over a network.
- examples of applications where in-time guarantees can be of use are in Virtual Reality/Augmented Reality (VR/AR), which can have stringent limits on the maximum motion-to-photon time, such as to avoid dizziness and reduced quality of experience that can result from longer delays and may severely reduce user acceptance.
- VR/AR Virtual Reality/Augmented Reality
- Another example is for Tactile Internet having stringent limits to delay for haptic feedback, as a lack of sensation of being in control or sluggish control would make many applications infeasible.
- Further examples can include industrial controllers, that can have stringent limits to feedback control loops, and applications such as vehicle to everything (V2X), remote-controlled robots and drones, and similar cases.
- V2X vehicle to everything
- On-time guarantees, which are stronger than in-time guarantees, can be used when application buffers cannot be assumed.
- On-time guarantees can provide fairness by not giving anyone an unfair advantage in multiparty applications and marketplaces, such as for trading or gaming (including those involving tactile internet).
- On-time guarantees can also be useful for synchronization in examples such as robot collaboration (e.g., lifting a packet by two remotely controlled robots) or high-precision measurements (e.g., remote polling at exact intervals).
- FIG. 1E illustrates the "best effort" model, such as used in previous approaches.
- the network just delivers a packet as soon as it propagates through the network without the specification of a SLO.
- This can be contrasted to the specification of a SLO having an indicated lower bound, indicated upper bound, or both, as in the examples of FIG. 1B and FIG. 1D.
- the techniques presented in the following discussion provide a system that delivers packets that traverse a network in accordance with a quantified delay SLO.
- the SLO indicates a delay range with quantifiable lower and upper bounds that can be varied for each individual packet.
- Previous networking technologies do not provide this capability, but are instead typically engineered to "minimize" delay using a range of techniques, from dimensioning links to reserving resources and performing admission control functions. These previous approaches are not engineered to hit a specific quantified delay target, and there is no networking algorithm that would hit that delay as a function of the network itself.
- the technology presented here provides the capability to do this without the need for centralized coordination and control logic, but in a way that is performed "in-network", thereby reducing controller dependence.
- the technology presented here further does so in a way that keeps the buffers of egress edge devices small (to reduce cost) and in a way that the SLO is adhered to for a "first packet" (and does not require connection setup/handshake).
- the embodiments presented here include a network with network nodes which perform a distributed algorithm that can deliver packets in accordance with a delay SLO with quantifiable lower and upper delay bounds.
- the distributed algorithm processes a packet on each node as it traverses the network following a local algorithm that: measures the delay that has been incurred by the packet since it was sent by the source; determines the remaining delay budget, based on SLO, delay, and prediction of downstream delay; and speeds up or slows down the packet per an action that best fits the budget.
- Possible actions include matching queue delay to action, and selecting from a set of downstream paths based on expected delays or buffering.
- a packet may be dropped or discarded.
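- A rough sketch of this local algorithm is given below (illustrative Python only; the field names sent_at, slo_min and slo_max and the action labels are assumptions rather than the disclosure's notation):

```python
import time

def lbf_local_decision(packet, remaining_fixed_delay):
    """One node's decision: measure the delay incurred so far, compute the
    remaining budget against the SLO, and pick an action that fits the budget."""
    incurred = time.time() - packet["sent_at"]        # delay since the source sent it
    budget = packet["slo_max"] - incurred - remaining_fixed_delay
    slack = packet["slo_min"] - incurred - remaining_fixed_delay

    if budget < 0:
        return "drop"        # the upper bound can no longer be met
    if slack > 0:
        return "slow_down"   # ahead of schedule: the packet can be held back
    return "speed_up"        # behind schedule: forward with minimal local delay
```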
- the communication system 100 includes, for example, user equipment 110A, 110B, and 110C, radio access networks (RANs) 120A and 120B, a core network 130, a public switched telephone network (PSTN) 140, the Internet 150, and other networks 160. Additional or alternative networks include private and public data-packet networks, including corporate intranets. While certain numbers of these components or elements are shown in the figure, any number of these components or elements may be included in the system 100.
- RANs radio access networks
- PSTN public switched telephone network
- the communication system 100 can include a wireless network, which may be a fifth generation (5G) network including at least one 5G base station which employs orthogonal frequency-division multiplexing (OFDM) and/or non-OFDM and a transmission time interval (TTI) shorter than 1 millisecond (e.g. 100 or 200 microseconds), to communicate with the communication devices.
- 5G fifth generation
- a base station may also be used to refer to any of the eNB and the 5G BS (gNB).
- the network may further include a network server for processing information received from the communication devices via the at least one eNB or gNB.
- System 100 enables multiple users to transmit and receive data and other content.
- the system 100 may implement one or more channel access methods, such as but not limited to code division multiple access (CDMA), time division multiple access (TDMA), frequency division multiple access (FDMA), orthogonal FDMA (OFDMA), or single-carrier FDMA (SC-FDMA).
- CDMA code division multiple access
- TDMA time division multiple access
- FDMA frequency division multiple access
- OFDMA orthogonal FDMA
- SC-FDMA single-carrier FDMA
- the user equipment (UE) 110A, 110B, and 110C, which can be referred to individually as a UE 110, or collectively as the UEs 110, are configured to operate and/or communicate in the system 100.
- a UE 110 can be configured to transmit and/or receive wireless signals or wired signals.
- Each UE 110 represents any suitable end user device and may include such devices (or may be referred to) as a user equipment/device, wireless transmit/receive unit (UE), mobile station, fixed or mobile subscriber unit, pager, cellular telephone, personal digital assistant (PDA), smartphone, laptop, computer, touchpad, wireless sensor, wearable devices, consumer electronics device, device-to-device (D2D) user equipment, machine type user equipment or user equipment capable of machine-to-machine (M2M) communication, iPads, Tablets, mobile terminals, laptop embedded equipment (LEE), laptop mounted equipment (LME), USB dongles, or other non-limiting examples of user equipment or target device.
- PDA personal digital assistant
- D2D device-to-device
- M2M machine-to-machine
- LEE laptop embedded equipment
- LME laptop mounted equipment
- the RANs 120A, 120B include one or more base stations (BSs) 170A, 170B, respectively.
- the RANs 120A and 120B can be referred to individually as a RAN 120, or collectively as the RANs 120.
- the base stations (BSs) 170A and 170B can be referred to individually as a base station (BS) 170, or collectively as the base stations (BSs) 170.
- Each of the BSs 170 is configured to wirelessly interface with one or more of the UEs 110 to enable access to the core network 130, the PSTN 140, the Internet 150, and/or the other networks 160.
- the base stations (BSs) 170 may include one or more of several well-known devices, such as a base transceiver station (BTS), a Node-B (NodeB), an evolved NodeB (eNB), a next (fifth) generation (5G) NodeB (gNB), a Home NodeB, a Home eNodeB, a site controller, an access point (AP), or a wireless router, or a server, router, switch, or other processing entity with a wired or wireless network.
- BTS base transceiver station
- NodeB Node-B
- eNB evolved NodeB
- gNB next (fifth) generation (5G) NodeB
- the BS 170A forms part of the RAN 120A, which may include one or more other BSs 170, elements, and/or devices.
- the BS 170B forms part of the RAN 120B, which may include one or more other BSs 170, elements, and/or devices.
- Each of the BSs 170 operates to transmit and/or receive wireless signals within a particular geographic region or area, sometimes referred to as a "cell."
- MIMO multiple-input multiple-output
- the BSs 170 communicate with one or more of the UEs 110 over one or more air interfaces (not shown) using wireless communication links.
- the air interfaces may utilize any suitable radio access technology.
- the system 100 may use multiple channel access functionality, including for example schemes in which the BSs 170 and UEs 110 are configured to implement the Long Term Evolution wireless communication standard (LTE), LTE Advanced (LTE-A), and/or LTE Multimedia Broadcast Multicast Service (MBMS).
- LTE Long Term Evolution wireless communication standard
- LTE-A LTE Advanced
- MBMS LTE Multimedia Broadcast Multicast Service
- the base stations 170 and user equipment 110A-110C are configured to implement UMTS, HSPA, or HSPA+ standards and protocols.
- other multiple access schemes and wireless protocols may be utilized.
- the RANs 120 are in communication with the core network 130 to provide the UEs 110 with voice, data, application, Voice over Internet Protocol (VoIP), or other services.
- VoIP Voice over Internet Protocol
- the RANs 120 and/or the core network 130 may be in direct or indirect communication with one or more other RANs (not shown).
- the core network 130 may also serve as a gateway access for other networks (such as PSTN 140, Internet 150, and other networks 160).
- some or all of the UEs 110 may include functionality for communicating with different wireless networks over different wireless links using different wireless technologies and/or protocols.
- the RANs 120 may also include millimeter and/or microwave access points (APs).
- the APs may be part of the BSs 170 or may be located remote from the BSs 170.
- the APs may include, but are not limited to, a connection point (an mmW CP) or a BS 170 capable of mmW communication (e.g., a mmW base station).
- the mmW APs may transmit and receive signals in a frequency range, for example, from 24 GHz to 100 GHz, but are not required to operate throughout this range.
- the term base station is used to refer to a base station and/or a wireless access point.
- While FIG. 2 illustrates one example of a communication system, the communication system 100 could include any number of user equipments, base stations, networks, or other components in any suitable configuration.
- user equipment may refer to any type of wireless device communicating with a radio network node in a cellular or mobile communication system.
- the networks 130, 140, 150 and/or 160 will commonly transfer data as packets, in which network packets are formatted units of data carried by packet-mode computer networks.
- the embodiments presented below are primarily concerned with the transmission of such packets over networks and the management of latencies of such transmissions.
- FIG. 3 illustrates an example network that includes networking devices 210a, 210b, 210c, 210d, and 210e, which can be routers, networking switches, servers, or other networking devices.
- networking device 210a could be a server sending packets; 210b, 210c, and 210d routers; and 210e an edge device.
- these networking devices will often be referred to as nodes, but it will be understood that each of these can be various networking devices.
- each of the nodes 210a, 210b, 210c, 210d, and 210e can be referred to as a node 210, or they can be collectively referred to as the nodes 210. While only five nodes 210 are shown in FIG. 3, a network would likely include significantly more nodes 210. In much of the following discussion, the nodes will sometimes be referred to as routers, but will be understood to more generally be nodes.
- FIG. 3 also illustrates a network control/management plane 212 that is communicatively coupled to each of the routers or nodes 210 of the network, as represented in dashed lines.
- the control/management plane 212 can be used to perform a variety of control path and/or control plane functions.
- the control/management plane of a network is the part of the node architecture that is responsible for collecting and propagating the information that will be used later to forward incoming packets. Routing protocols and label distribution protocols are parts of the control plane.
- FIG. 3 can be used to illustrate the management of latencies of packets as transmitted from a sending network device, or node, 210a over the intermediate nodes 210b-210d to a receiving network device, or node, 210e.
- FIG. 3 presents an example where the service level objective (SLO) indicates an end-to-end delay or latency with a lower bound of lb milliseconds and an upper bound of ub milliseconds and where lb and ub are both taken to be the same (an "on-time guarantee") at 8ms.
- the total latency will be the sum of the fixed delays between each of the nodes and the local delays in each of the intermediate nodes.
- SLO service level objective
- the delay for a packet to travel between node 210a and node 210b is 1ms, and the delays between node 210b and node 210c, between node 210c and node 210d, and between node 210d and 210e are all 500µs.
- the control/management plane 212 can notify each node of the number of remaining nodes, or hops, and the predicted propagation delay within each of the nodes. As described in more detail below, based upon this information and the amount of delay that the packet has experienced so far, the node can determine a local delay budget for the packet in the node.
- the node 210b can determine when to transmit the packet based on this budget.
- the latency budget is similarly determined for node 210c based upon 8ms total delay, a delay so far of (1.375+1+1.375+0.500)ms, and a predicted additional fixed delay of 1ms, giving a latency budget for node 210c of: (8 - (1.375+1+1.375+0.500) - 1)/2 ms = 1.375ms, spread over the two remaining nodes 210c and 210d.
- for node 210d, the latency budget is similarly calculated as: (8 - (1.375+1+1.375+0.500+1.375+0.500) - 0.500)/1 ms = 1.375ms.
- the local latency budgets can be adjusted accordingly. For example, if there were 1ms of additional unexpected delay related to node 210b, either arising on node 210b itself or during propagation between node 210b and 210c, this loss can be taken out of the local latency budgets of nodes 210c and 210d. Revising the calculation of the previous paragraph to add in this extra 1ms delay, the local latency budget of 210c becomes: (8 - (1.375+1+1.375+0.500+1) - 1)/2 ms = 0.875ms.
- the local latency budget of 210d becomes: (8 - (1.375+1+1.375+0.500+1+0.875+0.500) - 0.500)/1 ms = 0.875ms.
- latency-in-packet corresponds to the cumulative amount of delay or latency already experienced by the packet since leaving its source
- path-delay-to-destination is the expected amount of fixed transmission delay before the packet reaches its destination node.
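- As a compact illustration of this budget computation (an illustrative Python sketch, not code from the disclosure; the function and argument names are assumptions), the per-node budget is the SLO minus the latency-in-packet minus the path-delay-to-destination, divided by the number of nodes remaining:

```python
def local_delay_budget(slo, latency_in_packet, path_delay_to_destination, nodes_remaining):
    # (SLO - delay already incurred - fixed delay still ahead) / remaining nodes
    return (slo - latency_in_packet - path_delay_to_destination) / nodes_remaining

# Node 210c in FIG. 3: SLO = 8ms, delay so far = (1.375 + 1 + 1.375 + 0.5)ms,
# 1ms of fixed link delay still ahead, two nodes (210c and 210d) remaining.
print(local_delay_budget(8.0, 1.375 + 1 + 1.375 + 0.5, 1.0, 2))        # 1.375
# The same node after 1ms of unexpected extra delay upstream of 210c.
print(local_delay_budget(8.0, 1.375 + 1 + 1.375 + 0.5 + 1.0, 1.0, 2))  # 0.875
```

The two calls reproduce the FIG. 3 values for node 210c with and without the extra 1ms of unexpected delay.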
- FIG. 4 is a flowchart for one embodiment of the latency based forwarding of packets as illustrated in the example of FIG. 3.
- a packet is received at an intermediate node, such as one of nodes 210b, 210c, or 210d in FIG. 3.
- the packet's service level objective (SLO) can be determined from the packet's header at 303, and the delay experienced by the packet so far is assessed at 305.
- SLO service level objective
- the SLO of a packet can differ between packets and can be maintained in a packet’s forwarding header or other forwarding metadata and is determined by the node at 303.
- the SLO can indicate one or both of an upper bound and a lower bound for the total latency, or delay, for the packet as it is transmitted from the sending node (210a in FIG. 3) to the receiving node (21 Oe in FIG. 3).
- the packet can also carry information on the accumulated delay metadata, such as the amount of accumulated delay or latency experienced by the packet so far since it left the sending node. (In much of the following, latency and delay are used interchangeably in this context.)
- the node assesses the delay and can also update the delay before passing the packet on to the next node.
- the accumulated delay metadata can be a timestamp: the packet can carry its sending time (as a timestamp), and the accumulated delay can be assessed as the difference between the current time and the packet's starting time, obtained by subtracting the sent time from the received time. This embodiment uses network time synchronization, but can keep the packet contents unaltered.
- the packet can be changed to update the cumulative latency, where this approach does not require the synchronization of time across the different nodes.
- the node can instead update the remaining SLO.
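- A minimal sketch of these two alternatives is shown below (illustrative Python; the field names sent_at and accumulated_delay are assumptions, not taken from the disclosure). The timestamp variant leaves the packet unaltered, while the cumulative variant rewrites a delay field at each node:

```python
# Variant 1: timestamp-based (requires network time synchronization);
# the packet carries only its sending time and stays unaltered.
def accumulated_delay_from_timestamp(packet, now):
    return now - packet["sent_at"]

# Variant 2: cumulative in-packet field (no time synchronization needed);
# each node adds the delay it and the outgoing link contribute.
def update_cumulative_delay(packet, local_delay, link_delay):
    packet["accumulated_delay"] += local_delay + link_delay
    return packet["accumulated_delay"]
```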
- the node can determine the delay budget for the packet. As illustrated in FIG. 3, where the number of hops and delay prediction information is provided by the control/management plane 212, at 313 a path predictor can also supply path propagation and delay predictions, such as the number of hops to the destination and the fixed delays between these hops. With respect to receiving the number of hops and fixed delays, a node can access this information in various ways in order to make better latency based forwarding decisions. Depending on the embodiment, this information can be stored/maintained on the node itself. In some embodiments, this information can be configured/provisioned using a control or management application.
- a node can receive, or be aware, of this information by way of a forwarding information database (FIB) from where it is disseminated using a separate control plane mechanism (IGP, provisioning at the node via a controller, etc.).
- FIB forwarding information database
- the assessment of 307 can be based on the following remaining-path information as inputs: the number of nodes remaining; the fixed delay for the remainder of the path, which can be computed from the number of links remaining with propagation delay and the possible number of nodes with fixed minimum processing delay; and information precomputed by the control/management plane 212 and disseminated along with the path information.
- the output of the assessment of 307 is the delay budget.
- the fixed latencies and the current delay can be subtracted from the SLO, which can then be divided by the number of remaining nodes, as described above with respect to the local delay budgets of the nodes 210b, 210c, and 210d in FIG. 3.
- the target latency or delay at the node can be based on the midpoint between lower bound and upper bound as determined from the packet’s SLO at 303.
- an adjustment step can be included, such as lowering the budget value to be closer to the lower bound if there are a large number of nodes remaining that could increase the likelihood of an unexpected delay.
- the node can take a quality of service (QoS) action. For example, the node can maintain one or more queues in which it places packets ready for forwarding and then select a queue and a placement within the queue whose expected delay is the closest match for the packet’s target delay budget (e.g., the first queue whose delay is less than or equal to the target delay).
- the node can assess a queue’s latency as a function of queue occupancy, as well as other options, such as through the use of defined delay queues, for example. If the target delay budget is negative, a packet will miss its SLO.
- the node could: discard or drop the packet; mark the packet as late, so that nodes downstream no longer need to prioritize the packet; or record an SLO violation in a statelet (e.g. update counter) of the packet.
- the QoS action could include speeding up or slowing down a packet, or forwarding along a slower vs a faster path.
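- A minimal sketch of this queue-selection step is shown below (illustrative Python under the assumption that the node knows an expected delay for each of its queues; the function name choose_queue and the fallback policy are hypothetical):

```python
def choose_queue(target_budget, queue_delays):
    """Pick the queue whose expected delay best fits the target delay budget.

    queue_delays: expected delay of each queue, e.g. [4.0, 2.0, 1.0, 0.25] (ms).
    Returns a queue index, or None when the budget is already negative and the
    packet should instead be dropped, marked late, or counted as an SLO miss.
    """
    if target_budget < 0:
        return None
    # Among queues whose delay does not exceed the budget, take the slowest one
    # (the closest match that will not overshoot); otherwise fall back to the
    # fastest available queue.
    fitting = [i for i, d in enumerate(queue_delays) if d <= target_budget]
    if fitting:
        return max(fitting, key=lambda i: queue_delays[i])
    return min(range(len(queue_delays)), key=lambda i: queue_delays[i])

# Example: with queue delays of 4, 2, 1 and 0.25 ms and a 1.375ms budget,
# the 1ms queue (index 2) is chosen.
print(choose_queue(1.375, [4.0, 2.0, 1.0, 0.25]))   # 2
```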
- the packet is forwarded to the next node of its path. For example, after being entered into a selected queue at a location based on its delay budget at 308, the packet would work its way up the queue until it is transmitted over the network.
- FIG. 5 is a schematic diagram illustrating exemplary details of a node 400, such as a router, switch, server or other network device, according to an embodiment.
- the node 400 can correspond to one of the nodes 210a, 210b, 210c, 210d, or 210e of FIG. 3.
- the router or other network node 400 can be configured to implement or support embodiments of the present technology disclosed herein.
- the node 400 may comprise a number of receiving input/output (I/O) ports 410, a receiver 412 for receiving packets, a number of transmitting I/O ports 430, and a transmitter 432 for forwarding packets. Although shown separated into an input section and an output section in FIG. 5, these can be I/O ports 410 and 430 that are used for both down-stream and up-stream transfers, and the receiver 412 and transmitter 432 will be transceivers.
- I/O ports 410, receiver 412, I/O ports 430, and transmitter 432 can be collectively referred to as a network interface that is configured to receive and transmit packets over a network.
- the node 400 can also include a processor 420 that can be formed of one or more processing circuits and a memory or storage section 422.
- the storage 422 can be variously embodied based on available memory technologies and in this embodiment is shown to have a cache 424, which could be formed from a volatile RAM memory such as SRAM or DRAM, and long-term storage 426, which can be formed of non-volatile memory such as flash NAND memory or other memory technologies.
- Storage 422 can be used for storing both data and instructions for implementing the packet forwarding techniques described here.
- programmable content forwarding plane 428 can be part of the more general processing elements of the processor 420 or a dedicated portion of the processing circuitry.
- the processor(s) 420 can be configured to implement embodiments of the present technology described below.
- the memory 422 stores computer readable instructions that are executed by the processor(s) 420 to implement embodiments of the present technology. It would also be possible for embodiments of the present technology described below to be implemented, at least partially, using hardware logic components, such as, but not limited to, Field-programmable Gate Arrays (FPGAs), Application-specific Integrated Circuits (ASICs), Application-Specific Standard Products (ASSPs), System-on-a-chip systems (SOCs), Complex Programmable Logic Devices (CPLDs), special purpose computers, etc.
- FPGAs Field-programmable Gate Arrays
- ASICs Application-specific Integrated Circuits
- ASSPs Application-Specific Standard Products
- SOCs System-on-a-chip systems
- CPLDs Complex Programmable Logic Devices
- FIG. 6 provides an example of a network in which latency based forwarding of packets can be implemented. More specifically, FIG. 6 illustrates an aggregation ring, which is a common metropolitan broadband/mobile-access topology. In the example of FIG. 6, each of six ring routers (RA 501, RB 503, RC 505, RD 507, RE 509, RF 511) is connected to 100 access nodes, or spoke routers (Ra0 501-0 to Ra99 501-99, Rb0 503-0 to Rb99 503-99, Rc0 505-0 to Rc99 505-99, Rd0 507-0 to Rd99 507-99, Re0 509-0 to Re99 509-99, Rf0 511-0 to Rf99 511-99).
- a packet is sent from the sending node Ra0 501-0 to the receiving node Re0 509-0, traversing the ring nodes of routers RA 501, RB 503, RC 505, RD 507, and RE 509.
- the latency based forwarding introduced here allows for a packet with a lower delay SLO to be queued in front of packets with a higher delay SLO.
- the latency based SLO can be tuned so that a network can provide fairer/more-equal delay across rings independently of how far away in the ring a sender and receivers are located. For example, minimum-delay can be set to be larger than the worst-case “across-ring” delay.
- FIG. 7 is a high level overview for an embodiment of end-to-end latency based forwarding (LBF) 600.
- latency based forwarding provides machinery for an end-to-end network consisting of a sending network device, or sender, RS 601, a receiving network device, or receiver, RR 609, and one or more intermediate or forwarding nodes.
- three intermediate nodes RA 603, RB 605 and RC 607 are shown.
- the fixed latency for transmission between a pair of nodes is 1 ms, and each of the intermediate nodes adds a delay of an LBF queue latency.
- the locally incurred latency at the node is added to the total delay incurred so far, so that it can be used by the subsequent node as one of the inputs to make its decision.
- the total end-to-end latency between the sending router or node RS 601 and the receiving router or node RR 609 is managed.
- a packet 621 includes a destination header that indicates its destination, RR 609 in this example. This destination header is used by each forwarding node RA 603, RB 605, RC 607 to steer packet 621 to the next forwarding node or final receiver RR 609.
- a forwarding header carries the parameters edelay, lmin, and lmax.
- the edelay parameter allows for each forwarding node (RA 603, RB 605, RC 607) to determine the difference in time (latency) between when the node receives the packet and when the sender RS 601 has sent the packet.
- the edelay parameter is the latency, or delay, encountered so far. It is updated at each node, which adds the latency locally incurred so far plus the known outgoing link latency to the next hop.
- a sender timestamp is added once by the sender RS 601 , where subsequent nodes compute the latency (edelay) incurred so far by subtracting the sending timestamp from the current time.
- the forwarding nodes RA 603, RB 605, RC 607 do not need to update the field, but this method does require a time-synchronized network. In further embodiments, a desired time of arrival could also be indicated.
- the parameters lmin and lmax are respectively an end-to-end minimum and maximum latency for the Service Level Objectives (SLOs).
- SLOs Service Level Objectives
- the latency with which the final receiving node RR 609 receives the packet is meant to be between the minimum and maximum latency values lmin and lmax.
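- The forwarding header can be pictured as a small structure carried alongside the destination header. The following is an illustrative sketch only (the actual on-wire encoding is not specified here, and the example lmin/lmax values are hypothetical), with the per-hop update following the edelay behavior described above:

```python
from dataclasses import dataclass

@dataclass
class LBFHeader:
    edelay: float   # latency encountered so far since sender RS, in ms
    lmin: float     # end-to-end minimum latency SLO, in ms
    lmax: float     # end-to-end maximum latency SLO, in ms

def update_on_forward(header: LBFHeader, local_delay: float, link_delay: float) -> None:
    """Add the locally incurred delay plus the outgoing link latency to edelay."""
    header.edelay += local_delay + link_delay

hdr = LBFHeader(edelay=0.0, lmin=5.0, lmax=8.0)
update_on_forward(hdr, local_delay=1.375, link_delay=1.0)   # leaving the first node
```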
- FIG. 8 considers the node behavior for a pair of nodes from FIG. 7 in more detail.
- FIG. 8 illustrates two of the latency based forwarding nodes of FIG. 7, such as RA 603 and RB 605, and the resource manager 611.
- Each of the nodes RA 603 and RB 605 includes a control plane 711, 712 (such as based on an interior gateway protocol, IGP), a forwarding plane 731, 732, and a latency based forwarding protocol queue or queues 741, 742 in which the packets are placed for the next hop.
- IGP interior gateway protocol
- embodiments of latency based forwarding can be very general in describing how forwarding nodes RA 603, RB 605, RC 607 can achieve this forwarding goal.
- a centralized resource manager 611 can provide control/policy/data to the forwarding nodes RA 603, RB 605, RC 607, and/or a distributed mechanism can be used.
- the number of hops from the current node to the destination and/or the minimal latency to the destination can be accessed, such as by being communicated by a "control plane" 711, 712 (e.g., a protocol such as IGP or provisioned through a controller, for example).
- this information can be added to a forwarding information database, or FIB, along with other information such as the next hop.
- a forwarding plane 731, 732 can be used to help steer the packet 621 on every forwarding node to the next hop according to the packet's destination parameter.
- With the LBF queues for the next hop 741, 742, the packets will have updated edelay values 743, 744 that are provided to the forwarding plane of the next LBF node.
- a packet can be entered into the queue based on its delay budget. If the edelay value of a packet is over the maximum latency, the packet can be discarded.
- the LBF queue for the next hop 741, 742 can be one or multiple queues and, for embodiments with multiple queues, the queues can be ranked or unranked.
- Embodiments may or may not use path prediction, and the forwarding delay per hop based on lmin, lmax, and edelay can be determined by differing algorithms.
- Some embodiments can include the early discard of packets if the delay experienced so far exceeds lmax for the packet.
- the physical propagation delay (i.e., the non-queueing delay due to the time it takes a packet to propagate between the nodes of the hops on a path to the destination) can also be taken into account: early discard at a node can be based on the sum of the experienced delay and the remaining physical propagation delay exceeding lmax.
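- The early-discard test can be sketched as follows (illustrative Python; the names and the example values are assumptions, not taken from the disclosure):

```python
def should_discard_early(edelay: float, remaining_propagation_delay: float, lmax: float) -> bool:
    """True if the packet can no longer reach its destination within lmax."""
    return edelay + remaining_propagation_delay > lmax

# A packet that has already spent 6.8ms with 1.5ms of fixed propagation delay
# still ahead cannot meet an 8ms lmax and can be dropped immediately.
print(should_discard_early(6.8, 1.5, 8.0))   # True
```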
- the latency based forwarding techniques can also be used to affect congestion delays in a system such as illustrated in FIG. 6. Under congestion, packets that have more time to reach their destination because of lmax and path prediction can be delayed longer than packets that need to be forwarded with less delay to reach their destination within their lmax.
- For example, a packet p1 with little remaining time can be propagated more rapidly at the intermediate nodes to reduce congestion and allow the packet p1 to move through the nodes of the network with less delay relative to a packet p2, which can better afford to be delayed as it may have fewer hops to traverse, for example.
- packets can be delayed hop by hop as the lmin value can affect the delay at a node even without any load on the network. Packets with higher lmin values are typically delayed in a node more than those with lower lmin values, so that packets should not arrive too early at further downstream nodes. For example, given two paths to a destination with the same predicted delay, but where one has more hops, packets with the same lmin will see less delay per node in paths with more hops: when more hops are needed to forward a packet, each node can spend less time processing the packet.
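- As a hypothetical numerical illustration of this point (an inferred sketch, not a formula quoted from the disclosure), a node can spread the remaining lmin budget over the hops still ahead, so a longer path yields a smaller per-node delay:

```python
def min_delay_per_node(lmin, edelay, remaining_fixed_delay, remaining_hops):
    """Minimum local delay so the packet does not arrive before lmin."""
    return max(0.0, (lmin - edelay - remaining_fixed_delay) / remaining_hops)

# Two paths with the same predicted fixed delay (2ms) and lmin = 5ms:
print(min_delay_per_node(5.0, 0.0, 2.0, 3))   # 1.0 ms per node on a 3-hop path
print(min_delay_per_node(5.0, 0.0, 2.0, 6))   # 0.5 ms per node on a 6-hop path
```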
- If congestion does result in packets being discarded, this can be based on the packets' respective lmax values and path predictions.
- the one or more processors of the controller circuitry can preferentially discard those packets for which it is determined that they would not make it to the corresponding network destination within the allotted lmax time because of the added congestion delay on this node (that is, packets that would not have been discarded early on this node if there were no congestion).
- processor readable storage devices can include computer readable media such as volatile and non-volatile media, removable and non-removable media.
- computer readable media may comprise computer readable storage media and communication media.
- Computer readable storage media may be implemented in any method or technology for storage of information such as computer readable instructions, data structures, program modules or other data.
- Examples of computer readable storage media include RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital versatile disks (DVD) or other optical disk storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to store the desired information and which can be accessed by a computer.
- a computer readable medium or media does not include propagated, modulated, or transitory signals.
- Communication media typically embodies computer readable instructions, data structures, program modules or other data in a propagated, modulated or transitory data signal such as a carrier wave or other transport mechanism and includes any information delivery media.
- modulated data signal means a signal that has one or more of its characteristics set or changed in such a manner as to encode information in the signal.
- communication media includes wired media such as a wired network or direct-wired connection, and wireless media such as RF and other wireless media. Combinations of any of the above are also included within the scope of computer readable media.
- some or all of the software can be replaced by dedicated hardware logic components.
- illustrative types of hardware logic components include Field-programmable Gate Arrays (FPGAs), Application-specific Integrated Circuits (ASICs), Application-specific Standard Products (ASSPs), System-on-a-chip systems (SOCs), Complex Programmable Logic Devices (CPLDs), special purpose computers, etc.
- FPGAs Field-programmable Gate Arrays
- ASICs Application-specific Integrated Circuits
- ASSPs Application-specific Standard Products
- SOCs System-on-a-chip systems
- CPLDs Complex Programmable Logic Devices
- special purpose computers etc.
- software stored on a storage device
- the one or more processors can be in communication with one or more computer readable media/ storage devices, peripherals and/or communication interfaces.
- a connection may be a direct connection or an indirect connection (e.g., via one or more other parts).
- when an element is referred to as being connected or coupled to another element, the element may be directly connected to the other element or indirectly connected to the other element via intervening elements.
- when an element is referred to as being directly connected to another element, then there are no intervening elements between the element and the other element.
- Two devices are "in communication" if they are directly or indirectly connected so that they can communicate electronic signals between them.
- the term "based on" may be read as "based at least in part on."
Landscapes
- Engineering & Computer Science (AREA)
- Computer Networks & Wireless Communication (AREA)
- Signal Processing (AREA)
- Data Exchanges In Wide-Area Networks (AREA)
Abstract
Latency Based Forwarding (LBF) techniques are presented for the management of the latencies, or delays, of packets forwarded over nodes, such as routers, of a network in order to meet an end-to-end latency objective. In addition to a network header indicating a destination node, a packet also includes LBF metadata, such as a header, indicating the packet's accumulated delay since leaving the sender and maximum and minimum latency objectives for the entire journey from the sender to the receiver. When a packet is received at a node, based on the accumulated delay, the maximum latency, the minimum latency, and the expected latency and number of nodes that will be encountered further along the path, the node places the packet in a forwarding queue to manage the delays between the sender and the receiver.
Description
LATENCY BASED FORWARDING OF PACKETS FOR SERVICE-LEVEL OBJECTIVES (SLO) WITH QUANTIFIED DELAY RANGES
Inventors:
Alexander Clemm
Toerless Tobias Eckert
Uma S. Chunduri
Padmadevi Pillay-Esnault
RELATED APPLICATION
[0001] This application claims priority to U.S. Provisional Patent Application No. 62/820,350, filed March 19, 2019, which is hereby incorporated by reference into the present application.
TECHNICAL FIELD
[0002] The disclosure generally relates to communication networks, and more particularly, to the transmission of packets over networks.
BACKGROUND
[0003] Network packets are formatted units of data carried by packet-mode computer networks. High-precision networks demand high-precision service-level guarantees when delivering packets from a sending node on the network, such as a server, router or switch, to a receiving node on the network. Traditionally, networks are structured to deliver packets from the sending node to the receiving node as quickly as possible using Quality of Service (QoS) techniques such as prioritization and admission control; however, there are circumstances where this is not the most effective technique for the transfer of packets.
SUMMARY
[0004] According to one aspect of the present disclosure, a node for transferring packets over a network includes a network interface configured to receive and forward a plurality of packets over the network and one or more processors coupled to the
network interface. The one or more processors are configured to: receive, from the network interface, a packet, the packet including a network header, indicating a network destination, and forwarding metadata, the forwarding metadata indicating an accumulated delay metadata and a minimum latency for the transfer of the packet from the network sender to the network destination; determine a minimum delay at the node for the packet based on the minimum latency and the accumulated delay metadata; and forward the packet at a time based on the minimum delay.
[0005] Optionally, in the preceding aspect, the node further includes one or more ranked queues configured to store packets to forward over the network. The one or more processors are coupled to the one or more ranked queues and are further configured to: determine a queueing rank for the packet from the minimum delay; and enter the packet into one of the ranked queues based on the determined queueing rank for the packet.
[0006] Optionally, in any of the preceding aspects, the accumulated delay metadata includes an accumulated delay experienced by the packet since being transmitted by a network sender and the one or more processors are further configured to: update the indicated accumulated delay experienced by the packet; and determine a minimum delay at the node for the packet based on the minimum latency and the updated indicated accumulated delay experienced by the packet.
[0007] Optionally, in the preceding aspect, the forwarding metadata is a forwarding header.
[0008] Optionally, in the preceding aspect, the forwarding header further indicates a maximum latency for the transfer of the packet from the network sender to the network destination. The one or more processors are further configured to: determine a maximum delay at the node for the packet based on the maximum latency and the updated indicated accumulated delay experienced by the packet; and forward the packet at a time based on the minimum delay and the maximum delay.
[0009] Optionally, in the preceding aspect, the node further includes one or more ranked queues configured to store packets to forward over the network. The one or more processors are coupled to the one or more ranked queues and are further configured to: determine a queueing rank for the packet from the minimum delay and the maximum delay; and enter the packet into one of the ranked queues based on the determined queueing rank for the packet.
[0010] Optionally, in any of the preceding aspects, the one or more processors are further configured to: access a number of hops from the node to the network destination; and to further determine the minimum delay based on the number of hops.
[0011] Optionally, in any of the preceding aspects, the one or more processors are further configured to: access an estimated amount of time for fixed transfer times between the node and the network destination; and to further determine the minimum delay based on the estimated amount of time for fixed transfer times between the node and the network destination.
[0012] Optionally, in any of the preceding aspects, the forwarding metadata further indicates a maximum latency for the transfer of the packet from the network sender to the network destination. The one or more processors are further configured to discard the packet if the updated indicated accumulated delay experienced by the packet exceeds the maximum latency.
[0013] Optionally, in any of the preceding aspects, the forwarding metadata further indicates a maximum latency for the transfer of the packet from the network sender to the network destination. The one or more processors are further configured to discard the packet in response to determining that the packet will exceed the maximum latency before reaching the network destination.
[0014] Optionally, in any of the preceding aspects, the accumulated delay metadata includes a timestamp.
[0015] Optionally, in any of the preceding aspects, the node is a router.
[0016] Optionally, in any of the preceding aspects, the node is a server.
[0017] Optionally, in any of the preceding aspects, the node is a network switch.
[0018] According to a second set of aspects of the present disclosure, a method of transferring packets over a network includes: receiving, at a node, a packet including a network header, indicating a network destination, and forwarding metadata including an accumulated delay metadata and a minimum latency for the transfer of the packet from the network sender to the network destination; determining, by the node, a minimum delay at the node for the packet based on the minimum latency and the accumulated delay metadata; and forwarding the packet at a time based on the minimum delay.
[0019] Optionally, in the preceding aspect, the method also includes: maintaining by the node of one or more ranked queues of packets for forwarding from the node.
Forwarding the packet at a time based on the minimum delay includes: determining, by the node, a queueing rank for the packet from the minimum delay; and entering the packet into one of the ranked queues based on the determined queueing rank for the packet.
[0020] Optionally, in any of the preceding aspects for the method of the second set of aspects, the method also includes: accessing, at the node, a number of hops from the node to the network destination. Determining the minimum delay at the node for the packet is further based on the number of hops.
[0021] Optionally, in any of the preceding aspects for the method of the second set of aspects, the method also includes: accessing, at the node, an estimated amount of time for fixed transfer times between the node and the network destination. Determining the minimum delay at the node for the packet is further based on the estimated amount of time for fixed transfer times between the node and the network destination.
[0022] Optionally, in any of the preceding aspects for the method of the second set of aspects, the forwarding metadata is a forwarding header.
[0023] Optionally, in the preceding aspect, the accumulated delay metadata includes an accumulated delay experienced by the packet since being transmitted by a network sender, and the method further includes updating, by the node, the indicated accumulated delay experienced by the packet, wherein determining the minimum delay at the node for the packet is based on the minimum latency and the updated indicated accumulated delay experienced by the packet.
[0024] Optionally, in the preceding aspect, the forwarding header further indicates a maximum latency for the transfer of the packet from the network sender to the network destination, and the method further comprises: discarding the packet if the updated indicated accumulated delay experienced by the packet exceeds the maximum latency.
[0025] Optionally, in the preceding aspect, the forwarding header further indicates a maximum latency for the transfer of the packet from the network sender to the network destination, and the method further comprises: discarding the packet in response to determining that the packet will exceed the maximum latency before reaching the network destination.
[0026] Optionally, in any of the preceding three aspects for the method of the second set of aspects, the forwarding header further indicates a maximum latency for the transfer of the packet from the network sender to the network destination. The method also includes: determining, by the node, a maximum delay at the node for the packet based on the maximum latency and the updated indicated accumulated delay experienced by the packet, and forwarding the packet at a time based on the minimum delay and the maximum delay.
[0027] Optionally, in the preceding aspect, the method also includes: maintaining by the node of one or more ranked queues of packets for forwarding from the node. Forwarding the packet at a time based on the minimum delay includes: determining, by the node, a queueing rank for the packet from the minimum delay and the maximum delay; and entering the packet into one of the ranked queues based on the determined queueing rank for the packet.
[0028] According to a further set of aspects of the present disclosure, a system for transmitting packets from a sending network device to a receiving network device includes one or more nodes connectable in series to transfer a packet from the sending network device to the receiving network device. Each of the nodes includes: a network interface configured to receive and forward the packet over the network, the packet including a network header, indicating the receiving network device, and a forwarding header, indicating an accumulated delay experienced by the packet since being transmitted by the sending network device and a minimum latency for the transfer of the packet from the sending network device to the receiving network device; and one or more processors coupled to the network interface. The one or more processors are configured to: receive the packet from the network interface; update the indicated accumulated delay experienced by the packet; determine a minimum delay at the node for the packet based on the minimum latency and the updated indicated accumulated delay experienced by the packet; and forward the packet at a time based on the minimum delay.
[0029] Optionally, in the preceding aspect, each of the one or more nodes further comprises: one or more ranked queues configured to store packets to forward over the network. The one or more processors are coupled to the one or more ranked queues and are further configured to: determine a queueing rank for the packet from the minimum
delay; and enter the packet into one of the ranked queues based on the determined queueing rank for the packet.
[0030] Optionally, in any of the preceding further aspects for a system, for each of the one or more nodes, the one or more processors are further configured to: access a number of hops between the node and the receiving network device; and to further determine the minimum delay based on the number of hops.
[0031] Optionally, in any of the preceding further aspects for a system, for each of the one or more nodes, the one or more processors are further configured to: access an estimated amount of time for fixed transfer times between the node and the receiving network device; and to further determine the minimum delay based on the estimated amount of time for fixed transfer times between the node and the receiving network device.
[0032] Optionally, in any of the preceding further aspects for a system, the forwarding header further indicates a maximum latency for the transfer of the packet from the sending network device to the receiving network device. For each of the one or more nodes, the one or more processors are further configured to: determine a maximum delay at the node for the packet based on the maximum latency and the updated indicated accumulated delay experienced by the packet; and forward the packet at a time based on the minimum delay and the maximum delay.
[0033] Optionally, in the preceding aspect, each of the one or more nodes further comprises one or more ranked queues configured to store packets to forward over the network. The one or more processors are coupled to the one or more ranked queues and are further configured to: determine a queueing rank for the packet from the minimum delay and the maximum delay; and enter the packet into one of the ranked queues based on the determined queueing rank for the packet.
[0034] Optionally, in any of the preceding further aspects for a system, the forwarding header further indicates a maximum latency for the transfer of the packet from the sending network device to the receiving network device. For each of the one or more nodes, the one or more processors are further configured to discard the packet if the updated indicated accumulated delay experienced by the packet exceeds the maximum latency.
[0035] Optionally, in any of the preceding further aspects for a system, one or more of the nodes is a router.
[0036] Optionally, in any of the preceding further aspects for a system, one or more of the nodes is a networking switch.
[0037] Optionally, in any of the preceding further aspects for a system, one or more of the nodes is a server.
[0038] This Summary is provided to introduce a selection of concepts in a simplified form that are further described below in the Detailed Description. This Summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used as an aid in determining the scope of the claimed subject matter. The claimed subject matter is not limited to implementations that solve any or all disadvantages noted in the Background.
BRIEF DESCRIPTION OF THE DRAWINGS
[0039] Aspects of the present disclosure are illustrated by way of example and are not limited by the accompanying figures for which like references indicate like elements.
[0040] FIGs. 1A-1E illustrate the concept of latency-based forwarding of packets for service level objectives (SLO).
[0041] FIG. 2 illustrates an exemplary communication system for communicating data.
[0042] FIG. 3 illustrates an exemplary network of a series of nodes, such as routers, which may be included in one of the networks shown in FIG. 2.
[0043] FIG. 4 is a flowchart of one embodiment for the latency based forwarding of packets as illustrated in the example of FIG. 3.
[0044] FIG. 5 is a schematic diagram illustrating exemplary details of a network device, or node, such as shown in the network of FIG. 3.
[0045] FIG. 6 provides an example of a network in which latency based forwarding (LBF) of packets can be implemented.
[0046] FIG. 7 is a high level overview for an embodiment of end-to-end latency based forwarding.
[0047] FIG. 8 considers the node behavior for a pair of nodes from FIG. 7 in more detail.
DETAILED DESCRIPTION
[0048] The present disclosure will now be described with reference to the figures, which in general relate to methods and devices (e.g., routers) to manage latencies when transferring packets over networks. It is understood that the present embodiments of the disclosure may be implemented in many different forms and that claim scope should not be construed as being limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete and will fully convey the inventive embodiment concepts to those skilled in the art. Indeed, the disclosure is intended to cover alternatives, modifications and equivalents of these embodiments, which are included within the scope and spirit of the disclosure as defined by the appended claims. Furthermore, in the following detailed description of the present embodiments of the disclosure, numerous specific details are set forth in order to provide a thorough understanding. However, it will be clear to those of ordinary skill in the art that the present embodiments of the disclosure may be practiced without such specific details.
[0049] High-precision networks demand high-precision service-level guarantees that can be characterized through a set of Service Level Objectives (SLOs), which are performance goals for a service under certain well-defined constraints. A delay, or latency, based SLO can indicate a specific end-to-end latency, given a certain not-to-exceed packet rate or bandwidth. (In the following, “delay” and “latency” are largely used interchangeably in terms of meaning, although in some cases these will be used to refer to differing quantities, such as when a “minimum latency” is used to indicate a lower bound of an end-to-end latency value while a “minimum delay” may be used to indicate an amount of time a packet is to spend at a particular node.) FIG. 1A illustrates the concept of latency-based forwarding of packets over a network to meet an SLO, with several examples illustrated in FIGs. 1B-1E.
[0050] FIG. 1A illustrates a range of delays for the delivery of a packet over a network from a sending network device to a receiving network device. In FIG. 1A, the SLO indicates a lower bound, or SLO lb, that is the earliest a packet is to reach its destination; an upper bound, or SLO ub, that is the latest a packet is to reach its destination; or a target latency window defined by both an SLO lb and an SLO ub. Together, these define an SLO range for the delivery of a packet over a network. Examples can include an upper bound (“end-to-end latency not to exceed”); a lower bound (“end-to-end latency must be at least”, which is less common, but useful in certain scenarios); and special cases such as an “in-time guarantee” (lower bound = 0) or an “on-time guarantee” (lower bound = upper bound). Previous approaches do not allow for the specification of quantifiable latency SLOs that are provided by the network, where any upper bound will typically indicate “low latency” without quantification, and any minimum amount of latency or delay results from overloading of buffers at the egress nodes, rather than from a specified value.
[0051] FIG. 1B illustrates an example of an in-time guarantee, which has no lower bound indicated (i.e., lb = 0) on latency, but has an indicated upper bound ub. Examples of applications where in-time guarantees can be of use are Virtual Reality/Augmented Reality (VR/AR), which can have stringent limits on the maximum motion-to-photon time, such as to avoid the dizziness and reduced quality of experience that can result from longer delays and may severely reduce user acceptance. Another example is the Tactile Internet, which has stringent limits on delay for haptic feedback, as a lack of the sensation of being in control, or sluggish control, would make many applications infeasible. Further examples can include industrial controllers, which can have stringent limits on feedback control loops, and applications such as vehicle-to-everything (V2X), remote-controlled robots and drones, and similar cases.
[0052] FIG. 1C illustrates an example of a minimum latency guarantee, which has no upper bound indicated (i.e., ub = ∞) on latency, but has an indicated lower bound lb. This corresponds to a best effort approach as far as an upper bound is concerned, but with the restriction that there is to be at least a minimum amount of indicated latency. This can be useful in applications where the arrival of too much data too soon could overwhelm a node's buffering capacities, introducing delays that affect other packets that need to arrive at their destinations within an upper bound or as soon as possible.
[0053] FIG. 1D illustrates an example of an on-time guarantee, where lb == ub (or, more generally, where lb and ub define a narrow range). On-time guarantees, which are stronger than in-time guarantees, can be used when application buffers cannot be assumed. On-time guarantees can provide fairness by not giving anyone an unfair advantage in multiparty applications and marketplaces, such as for trading or gaming (including those involving the tactile internet). On-time guarantees can also be useful for synchronization in examples such as robot collaboration (e.g., lifting a package by two remotely controlled robots) or high-precision measurements (e.g., remote polling at exact intervals).
[0054] FIG. 1E illustrates the “best effort” model, such as used in previous approaches. In the best effort case, there is no specification of either a lower bound or an upper bound, and the network just delivers a packet as soon as it propagates through the network, without the specification of an SLO. This can be contrasted with the specification of an SLO having an indicated lower bound, an indicated upper bound, or both, as in the examples of FIGs. 1B-1D.
[0055] The techniques presented in the following discussion provide a system that delivers packets that traverse a network in accordance with a quantified delay SLO. The SLO indicates a delay range with quantifiable lower and upper bounds that can be varied for each individual packet. Previous networking technologies do not provide this capability, but are instead typically engineered to “minimize” delay using techniques ranging from dimensioning links to reserving resources and performing admission control functions. These previous approaches are not engineered to hit a specific quantified delay or target, and there is no networking algorithm that would hit that delay as part of a function of the network itself. Instead, the technology presented here provides the capability to do this without the need for centralized coordination and control logic, but in a way that is performed “in-network”, thereby reducing controller dependence. The technology presented here further does so in a way that keeps the buffers of egress edge devices small (to reduce cost) and in a way that the SLO is adhered to for a “first packet” (and does not require connection setup/handshake).
[0056] The embodiments presented here include a network with network nodes which perform a distributed algorithm that can deliver packets in accordance with a delay SLO with quantifiable lower and upper delay bounds. The distributed algorithm processes a packet on each node as it traverses the network following a local algorithm that: measures the delay that has been incurred by the packet since it was sent by the source; determines the remaining delay budget, based on SLO, delay, and prediction of downstream delay; and speeds up or slows down the packet per an action that best fits the budget. Possible actions include matching queue delay to action, and selecting from a set of downstream paths based on expected delays or buffering. Optionally, when a packet is beyond salvaging, it may be dropped or discarded.
[0057] FIG. 2 illustrates an exemplary communication system 100 with which embodiments of the present technology can be used. The communication system 100 includes, for example, user equipment 110A, 110B, and 110C, radio access networks (RANs) 120A and 120B, a core network 130, a public switched telephone network (PSTN) 140, the Internet 150, and other networks 160. Additional or alternative networks include private and public data-packet networks, including corporate intranets. While certain numbers of these components or elements are shown in the figure, any number of these components or elements may be included in the system 100.
[0058] In one embodiment, the communication system 100 can include a wireless network, which may be a fifth generation (5G) network including at least one 5G base station which employs orthogonal frequency-division multiplexing (OFDM) and/or non-OFDM and a transmission time interval (TTI) shorter than 1 millisecond (e.g., 100 or 200 microseconds), to communicate with the communication devices. In general, a base station may also be used to refer to any of the eNB and the 5G BS (gNB). In addition, the network may further include a network server for processing information received from the communication devices via the at least one eNB or gNB.
[0059] System 100 enables multiple users to transmit and receive data and other content. The system 100 may implement one or more channel access methods, such as but not limited to code division multiple access (CDMA), time division multiple access (TDMA), frequency division multiple access (FDMA), orthogonal FDMA (OFDMA), or single-carrier FDMA (SC-FDMA).
[0060] The user equipment (UE) 110A, 110B, and 110C, which can be referred to individually as a UE 110, or collectively as the UEs 110, are configured to operate and/or communicate in the system 100. For example, a UE 110 can be configured to transmit and/or receive wireless signals or wired signals. Each UE 110 represents any suitable end user device and may include such devices (or may be referred to) as a user equipment/device, wireless transmit/receive unit (UE), mobile station, fixed or mobile subscriber unit, pager, cellular telephone, personal digital assistant (PDA), smartphone, laptop, computer, touchpad, wireless sensor, wearable devices, consumer electronics device, device-to-device (D2D) user equipment, machine type user equipment or user equipment capable of machine-to-machine (M2M) communication, iPads, Tablets, mobile terminals, laptop embedded equipment (LEE),
laptop mounted equipment (LME), USB dongles, or other non-limiting examples of user equipment or target device.
[0061] In the depicted embodiment, the RANs 120A, 120B include one or more base stations (BSs) 170A, 170B, respectively. The RANs 120A and 120B can be referred to individually as a RAN 120, or collectively as the RANs 120. Similarly, the base stations (BSs) 170A and 170B can be referred to individually as a base station (BS) 170, or collectively as the base stations (BSs) 170. Each of the BSs 170 is configured to wirelessly interface with one or more of the UEs 110 to enable access to the core network 130, the PSTN 140, the Internet 150, and/or the other networks 160. For example, the base stations (BSs) 170 may include one or more of several well-known devices, such as a base transceiver station (BTS), a Node-B (NodeB), an evolved NodeB (eNB), a next (fifth) generation (5G) NodeB (gNB), a Home NodeB, a Home eNodeB, a site controller, an access point (AP), or a wireless router, or a server, router, switch, or other processing entity with a wired or wireless network.
[0062] In one embodiment, the BS 170A forms part of the RAN 120A, which may include one or more other BSs 170, elements, and/or devices. Similarly, the BS 170B forms part of the RAN 120B, which may include one or more other BSs 170, elements, and/or devices. Each of the BSs 170 operates to transmit and/or receive wireless signals within a particular geographic region or area, sometimes referred to as a “cell.” In some embodiments, multiple-input multiple-output (MIMO) technology may be employed having multiple transceivers for each cell.
[0063] The BSs 170 communicate with one or more of the UEs 110 over one or more air interfaces (not shown) using wireless communication links. The air interfaces may utilize any suitable radio access technology.
[0064] It is contemplated that the system 100 may use multiple channel access functionality, including for example schemes in which the BSs 170 and UEs 110 are configured to implement the Long Term Evolution wireless communication standard (LTE), LTE Advanced (LTE-A), and/or LTE Multimedia Broadcast Multicast Service (MBMS). In other embodiments, the base stations 170 and user equipment 110A-110C are configured to implement UMTS, HSPA, or HSPA+ standards and protocols. Of course, other multiple access schemes and wireless protocols may be utilized.
[0065] The RANs 120 are in communication with the core network 130 to provide the UEs 110 with voice, data, application, Voice over Internet Protocol (VoIP), or other
services. As appreciated, the RANs 120 and/or the core network 130 may be in direct or indirect communication with one or more other RANs (not shown). The core network 130 may also serve as a gateway access for other networks (such as PSTN 140, Internet 150, and other networks 160). In addition, some or all of the UEs 110 may include functionality for communicating with different wireless networks over different wireless links using different wireless technologies and/or protocols.
[0066] The RANs 120 may also include millimeter and/or microwave access points (APs). The APs may be part of the BSs 170 or may be located remote from the BSs 170. The APs may include, but are not limited to, a connection point (an mmW CP) or a BS 170 capable of mmW communication (e.g., a mmW base station). The mmW APs may transmit and receive signals in a frequency range, for example, from 24 GHz to 100 GHz, but are not required to operate throughout this range. As used herein, the term base station is used to refer to a base station and/or a wireless access point.
[0067] Although FIG. 2 illustrates one example of a communication system, various changes may be made to FIG. 2. For example, the communication system 100 could include any number of user equipments, base stations, networks, or other components in any suitable configuration. It is also appreciated that the term user equipment may refer to any type of wireless device communicating with a radio network node in a cellular or mobile communication system.
[0068] The networks 130, 140, 150 and/or 160 will commonly transfer data as packets, in which network packets are formatted units of data carried by packet-mode computer networks. The embodiments presented below are primarily concerned with the transmission of such packets over networks and the management of latencies of such transmissions.
[0069] FIG. 3 illustrates an example network that includes networking devices 210a, 210b, 210c, 210d, and 210e, which can be routers, networking switches, servers, or other networking devices. For example, networking device 210a could be a server sending packets, 210b, 210c, and 210d routers, and 210e an edge device. To simplify the following discussion, these networking devices will often be referred to as nodes, but it will be understood that each of these can be various networking devices.
[0070] In the following, each of the nodes 210a, 210b, 210c, 210d, and 210e can be referred to as a node 210, or they can be collectively referred to as the nodes 210. While only five nodes 210 are shown in FIG. 3, a network would likely include significantly more than five nodes 210. In much of the following discussion, the nodes will sometimes be referred to as routers, but will be understood to more generally be nodes. FIG. 3 also illustrates a network control/management plane 212 that is communicatively coupled to each of the routers or nodes 210 of the network, as represented in dashed lines. The control/management plane 212 can be used to perform a variety of control path and/or control plane functions. The control/management plane of a network is the part of the node architecture that is responsible for collecting and propagating the information that will be used later to forward incoming packets. Routing protocols and label distribution protocols are parts of the control plane.
[0071] FIG. 3 can be used to illustrate the management of latencies of packets as transmitted from a sending network device, or node, 210a over the intermediate nodes 210b-210d to a receiving network device, or node, 210e. FIG. 3 presents an example where the service level objective (SLO) indicates an end-to-end delay or latency with a lower bound of lb milliseconds and an upper bound of ub milliseconds and where lb and ub are both taken to be the same (an “on-time guarantee”) at 8ms. The total latency will be the sum of the fixed delays between each of the nodes and the local delays in each of the intermediate nodes. In FIG. 3, the delay for a packet to travel between node 210a and node 210b is 1ms, and the delays between node 210b and node 210c, between node 210c and node 210d, and between node 210d and 210e are all 500µs. The control/management plane 212 can notify each node of the number of remaining nodes, or hops, and the predicted propagation delay within each of the nodes. As described in more detail below, based upon this information and the amount of delay that the packet has experienced so far, the node can determine a local delay budget for the packet in the node.
[0072] Continuing with the example of FIG. 3, at node 210a the predicted propagation delay is (1ms+500µs+500µs+500µs)=2.5ms and there are 4 hops to arrive at destination node 210e, giving node 210a a local latency budget of:
(8-2.5)ms/4=1.375ms.
The amount of propagation time from node 210a to 210b is 1ms, and the control/management plane 212 provides node 210b with a predicted propagation delay of (500µs+500µs+500µs)=1.5ms and 3 remaining nodes. Taking the allotted 8ms for the entire end-to-end delay, subtracting the delay so far (1ms propagation delay, 1.375ms latency budgeted to node 210a) and the predicted additional delay (1.5ms), and then dividing by the number of remaining nodes (3) gives a local budget for latency at node 210b of:
(8-2.375-1.5)ms/3=1.375ms.
The node 210b can determine when to transmit the packet based on this budget. The latency budget is similarly determined for node 210c based upon the 8ms total delay, a delay so far of (1.375+1+1.375+0.500)ms, and a predicted additional delay of 1ms, giving a latency budget for node 210c of:
(8-4.25-1)ms/2=1.375ms.
For node 210d, the latency budget is similarly calculated as:
(8-6.125-0.5)ms/1=1.375ms.
With this budgeting, the packet arrives at node 210e in (6.125+1.375+0.5)ms=8.00ms, as desired.
[0073] If the actual local delay or latency is not as predicted, the local latency budgets can be adjusted accordingly. For example, if there were 1ms of additional unexpected delay related to node 210b, either arising on node 210b itself or during propagation between node 210b and 210c, this loss can be taken out of the local latency budgets of nodes 210c and 210d. Revising the calculation of the previous paragraph to add in this extra 1ms delay, the local latency budget of 210c becomes:
(8-5.25-1)ms/2=0.875ms.
The local latency budget of 210d becomes:
(8-6.625-0.5)ms/1=0.875ms.
This again allows the packet to arrive at the designated lb==ub==8ms. As discussed in more detail below, when the upper and lower bounds differ, both a minimum and a maximum local latency budget are used:
Min-Local-latency-budget = (lb - latency-in-packet - path-delay-to-destination)/number-hops-to-destination; and
Max-Local-latency-budget = (ub - latency-in-packet - path-delay-to-destination)/number-hops-to-destination.
In these expressions, “latency-in-packet” corresponds to the cumulative amount of delay or latency already experienced by the packet since leaving its source, and “path-delay-to-destination” is the expected amount of fixed transmission delay before the packet reaches its destination node.
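By way of a purely illustrative sketch (not part of the original disclosure), the two budget formulas can be written directly in code; the function name, millisecond units, and Python form are assumptions made only for this example.

```python
def local_latency_budgets(lb_ms, ub_ms, latency_in_packet_ms,
                          path_delay_to_destination_ms, hops_to_destination):
    """Return (min, max) local latency budgets, in milliseconds, for one node.

    lb_ms, ub_ms: end-to-end lower and upper latency bounds from the SLO.
    latency_in_packet_ms: delay already experienced by the packet.
    path_delay_to_destination_ms: remaining fixed (non-queueing) delay.
    hops_to_destination: number of remaining hops, including the current node.
    """
    remaining_lb = lb_ms - latency_in_packet_ms - path_delay_to_destination_ms
    remaining_ub = ub_ms - latency_in_packet_ms - path_delay_to_destination_ms
    return (remaining_lb / hops_to_destination,
            remaining_ub / hops_to_destination)


# Sanity check against the on-time example of FIG. 3 (lb = ub = 8 ms):
print(local_latency_budgets(8.0, 8.0, 0.0, 2.5, 4))    # node 210a: (1.375, 1.375)
print(local_latency_budgets(8.0, 8.0, 2.375, 1.5, 3))  # node 210b: (1.375, 1.375)
print(local_latency_budgets(8.0, 8.0, 5.25, 1.0, 2))   # node 210c after the extra 1 ms delay: (0.875, 0.875)
```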
[0074] FIG. 4 is a flowchart of one embodiment for the latency based forwarding of packets as illustrated in the example of FIG. 3. Beginning at 301, a packet is received at an intermediate node, such as one of nodes 210b, 210c, or 210d in FIG. 3. Once the packet is received at 301, the packet's service level objective (SLO) can be determined from the packet's header at 303 and the delay experienced by the packet so far is assessed at 305.
[0075] The SLO of a packet can differ between packets and can be maintained in a packet's forwarding header or other forwarding metadata and is determined by the node at 303. The SLO can indicate one or both of an upper bound and a lower bound for the total latency, or delay, for the packet as it is transmitted from the sending node (210a in FIG. 3) to the receiving node (210e in FIG. 3).
[0076] The packet can also carry information on the accumulated delay metadata, such as the amount of accumulated delay or latency experienced by the packet so far since it left the sending node. (In much of the following, latency and delay are used interchangeably in this context.) In 305, the node assesses the delay and can also update the delay before passing the packet on to the next node. In some embodiments, the accumulated delay metadata can be a timestamp: the packet carries its sending time as a timestamp, and the accumulated delay can be assessed as the difference between the current time and the packet's sending time, obtained by subtracting the sending time from the time of receipt. This embodiment uses network time synchronization, but can keep the packet contents unaltered. In other embodiments, as discussed in more detail below, the packet can be changed to update the cumulative latency, where this approach does not require the synchronization of time across the different nodes. In other alternative embodiments, rather than assessing the current delay, the node can instead update the remaining SLO.
[0077] At 307, based upon the input of the packet’s SLO (from 303) and delay (from 305), the node can determine the delay budget for the packet. As illustrated in FIG. 3, where the number of hops and delay prediction information is provided by the control/management plane 212, at 313 a path predictor can also supply path propagation and delay predictions, such as the number of hops to the destination and
the fixed delays between these hops. With respect to the number of hops and fixed delays, a node can access this information in various ways, depending on the embodiment, in order to make better latency based forwarding decisions. This information can be stored/maintained on the node itself. In some embodiments, this information can be configured/provisioned using a control or management application. In other embodiments, it can be communicated using a control plane protocol such as an IGP (interior gateway protocol). In general, this information can be communicated/received separately from the packet itself, and can involve a different mechanism. In one set of embodiments, a node can receive, or be aware of, this information by way of a forwarding information database (FIB), where it is disseminated using a separate control plane mechanism (IGP, provisioning at the node via a controller, etc.). Consequently, the assessment of 307 can be based on the following remaining-path inputs: the number of nodes remaining; the fixed delay for the remainder of the path, which can be computed from the number of remaining links with their propagation delays and the number of nodes with fixed minimum processing delays; and information precomputed by the control/management plane 212 and disseminated along with the path information. The output of the assessment of 307 is the delay budget.
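As a concrete, hypothetical illustration of the kind of per-destination information such a FIB might keep for this purpose (the field names and values are assumptions for the example, not taken from the disclosure), an entry can carry the remaining hop count and remaining fixed path delay alongside the usual next-hop data:

```python
from dataclasses import dataclass

@dataclass
class LbfFibEntry:
    """Hypothetical FIB entry augmented with latency-based forwarding data."""
    destination: str                     # destination prefix or node identifier
    next_hop: str                        # next-hop node or outgoing interface
    hops_to_destination: int             # remaining hops, including this node
    path_delay_to_destination_ms: float  # remaining fixed (non-queueing) delay

# Example: the view node 210b of FIG. 3 might hold for destination 210e.
entry = LbfFibEntry(destination="210e", next_hop="210c",
                    hops_to_destination=3, path_delay_to_destination_ms=1.5)
```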
[0078] In one set of embodiments for the compute logic used by the node at 307, the fixed latencies and the current delay can be subtracted from the SLO, which can then be divided by the number of remaining nodes, as described above with respect to the local delay budgets of the nodes 210b, 210c, and 210d in FIG. 3. In some embodiments, the target latency or delay at the node can be based on the midpoint between the lower bound and the upper bound as determined from the packet's SLO at 303. In other embodiments, an adjustment step can be included, such as lowering the budget value to be closer to the lower bound if there is a large number of nodes remaining, which could increase the likelihood of an unexpected delay. For example, the target delay for the node could be set as: target = midpoint - (midpoint - lower bound)/remaining nodes.
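A minimal sketch of that adjustment, assuming millisecond units and that the minimum and maximum local budgets have already been computed for the node (the function name is an assumption for the example):

```python
def target_node_delay(min_budget_ms, max_budget_ms, remaining_nodes):
    """Choose a per-node target delay inside the local budget window.

    The midpoint of the window is the simple choice; the adjustment pulls the
    target toward the lower bound when many nodes remain, leaving headroom
    for unexpected downstream delays.
    """
    midpoint = (min_budget_ms + max_budget_ms) / 2.0
    if remaining_nodes < 1:
        return midpoint  # guard against a zero hop count
    return midpoint - (midpoint - min_budget_ms) / remaining_nodes
```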
[0079] Based on the delay budget from 307, at 309 the node can take a quality of service (QoS) action. For example, the node can maintain one or more queues in which it places packets ready for forwarding and then select a queue and a placement
within the queue whose expected delay is the closest match for the packet's target delay budget (e.g., the first queue whose delay is less than or equal to the target delay). The node can assess a queue's latency as a function of queue occupancy, or through other options, such as the use of defined delay queues, for example. If the target delay budget is negative, a packet will miss its SLO. In case of a negative budget, depending on the embodiment the node could: discard or drop the packet; mark the packet as late, so that nodes downstream no longer need to prioritize the packet; or record an SLO violation in a statelet of the packet (e.g., update a counter). In other embodiments, the QoS action could include speeding up or slowing down a packet, or forwarding along a slower versus a faster path.
[0080] At 311 the packet is forwarded to the next node of its path. For example, after being entered into a selected queue at a location based on its delay budget at 309, the packet would work its way up the queue until it is transmitted over the network.
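To tie the steps of FIG. 4 together, the following sketch shows one possible shape of the per-node handling, under several assumptions (millisecond units, a dictionary-style packet, queues ordered from longest to shortest expected delay, dropping of late packets as the chosen QoS action, and the hypothetical LbfFibEntry fields from the earlier sketch); none of these names or choices are mandated by the disclosure.

```python
def process_packet(packet, queues, path_info):
    """One illustrative pass through steps 301-311 of FIG. 4 for a single packet.

    packet: dict with 'lmax' (SLO upper bound) and 'edelay' (accumulated delay).
    queues: list of queues ordered from longest to shortest expected delay,
            each exposing expected_delay_ms() and enqueue(packet).
    path_info: per-destination control-plane data (remaining hops, fixed delay).
    """
    # Steps 303/305: read the SLO and the delay experienced so far.
    edelay = packet['edelay']
    hops = path_info.hops_to_destination
    fixed = path_info.path_delay_to_destination_ms

    # Step 307: compute the local delay budget (upper-bound form shown here).
    budget = (packet['lmax'] - edelay - fixed) / hops

    # Step 309: a negative budget means the SLO can no longer be met;
    # dropping the packet is one of the possible responses.
    if budget < 0:
        return None

    # Steps 309/311: enter the first queue whose expected delay fits the
    # budget, falling back to the shortest-delay queue if none fits.
    for queue in queues:
        if queue.expected_delay_ms() <= budget:
            queue.enqueue(packet)
            return queue
    queues[-1].enqueue(packet)
    return queues[-1]
```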
[0081] FIG. 5 is a schematic diagram illustrating exemplary details of a node 400, such as a router, switch, server or other network device, according to an embodiment. The node 400 can correspond to one of the nodes 210a, 210b, 210c, 210d, or 210e of FIG. 3. The router or other network node 400 can be configured to implement or support embodiments of the present technology disclosed herein. The node 400 may comprise a number of receiving input/output (I/O) ports 410, a receiver 412 for receiving packets, a number of transmitting I/O ports 430 and a transmitter 432 for forwarding packets. Although shown separated into an input section and an output section in FIG. 5, often these will be I/O ports 410 and 430 that are used for both downstream and upstream transfers, and the receiver 412 and transmitter 432 will be transceivers. Together, the I/O ports 410, receiver 412, I/O ports 430, and transmitter 432 can be collectively referred to as a network interface that is configured to receive and transmit packets over a network.
[0082] The node 400 can also include a processor 420 that can be formed of one or more processing circuits and a memory or storage section 422. The storage 422 can be variously embodied based on available memory technologies and in this embodiment is shown to have a cache 424, which could be formed from a volatile RAM memory such as SRAM or DRAM, and long-term storage 426, which can be formed of non-volatile memory such as flash NAND memory or other memory technologies. Storage 422 can be used for storing both data and instructions for
implementing the packet forwarding techniques described here. Other elements on node 400 can include the programmable content forwarding plane 428 and the queues 450, which are explicitly shown and described in more detail below as they enter into the latency based packet forwarding methods developed in the following discussion. Depending on the embodiment, the programmable content forwarding plane 428 can be part of the more general processing elements of the processor 420 or a dedicated portion of the processing circuitry.
[0083] More specifically, the processor(s) 420, including the programmable content forwarding plane 428, can be configured to implement embodiments of the present technology described below. In accordance with certain embodiments, the memory 422 stores computer readable instructions that are executed by the processor(s) 420 to implement embodiments of the present technology. It would also be possible for embodiments of the present technology described below to be implemented, at least partially, using hardware logic components, such as, but not limited to, Field-programmable Gate Arrays (FPGAs), Application-specific Integrated Circuits (ASICs), Application-Specific Standard Products (ASSPs), System-on-a-chip systems (SOCs), Complex Programmable Logic Devices (CPLDs), special purpose computers, etc.
[0084] FIG. 6 provides an example of a network in which latency based forwarding of packets can be implemented. More specifically, FIG. 6 illustrates an aggregation ring, which is a common metropolitan broadband/mobile-access topology. In the example of FIG. 6, each of six ring routers (RA 501, RB 503, RC 505, RD 507, RE 509, RF 511) is connected to 100 access nodes, or spoke routers (Ra0 501-0 to Ra99 501-99, Rb0 503-0 to Rb99 503-99, Rc0 505-0 to Rc99 505-99, Rd0 507-0 to Rd99 507-99, Re0 509-0 to Re99 509-99, Rf0 511-0 to Rf99 511-99). In FIG. 6, a packet is sent from the sending node Ra0 501-0 to the receiving node Re0 509-0, traversing the ring nodes of routers RA 501, RB 503, RC 505, RD 507, and RE 509.
[0085] Under prior queueing techniques for packets, at each of the ring routers (RA 501, RB 503, RC 505, RD 507, RE 509, RF 511), 99 packets, one for each other spoke router (e.g., Ra1 501-1, ..., Ra99 501-99), could arrive simultaneously (assuming all links have the same speed). Without indicated minimum latencies for the packets, there is no mechanism for a router to establish which packets could be relatively delayed and no way to order the transmission of these packets. Consequently, some packets that could be delayed and still stay on budget may end up being queued in front of more urgent packets. The latency based forwarding introduced here allows a packet with a lower delay SLO to be queued in front of packets with a higher delay SLO. Under the latency based approach, because each hop and the queuing of prior hops reduce the acceptable per-hop delay, packets which have to cross more ring nodes would experience less per-hop delay in the queues than those travelling fewer hops. The latency based SLO can be tuned so that a network can provide fairer/more-equal delay across rings independently of how far away in the ring a sender and receivers are located. For example, the minimum delay can be set to be larger than the worst-case “across-ring” delay.
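One possible realization of such ordering (a sketch only, with assumed values: a budget-keyed priority queue is just one way a "ranked queue" could be built, and is not mandated by the disclosure) is to rank waiting packets by their remaining per-hop budget, so that the packet with more ring nodes still to cross, and therefore the smaller budget, is transmitted first:

```python
import heapq

# Two packets arriving at the same ring router at the same time (values assumed):
# p1 still has several ring nodes to cross, p2 only one, so p1's remaining
# per-hop budget is smaller and it is ranked ahead in the queue.
packets = [
    {"name": "p1", "budget_ms": 0.7},
    {"name": "p2", "budget_ms": 2.5},
]
lbf_queue = [(p["budget_ms"], p["name"]) for p in packets]
heapq.heapify(lbf_queue)
print(heapq.heappop(lbf_queue))  # (0.7, 'p1') -- the more urgent packet leaves first
```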
[0086] FIG. 7 is a high-level overview of an embodiment of end-to-end latency based forwarding (LBF) 600. As shown in FIG. 7, latency based forwarding provides a machinery for an end-to-end network consisting of a sending network device, or sender, RS 601, a receiving network device, or receiver, RR 609, and one or more intermediate or forwarding nodes. In FIG. 7, three intermediate nodes RA 603, RB 605 and RC 607 are shown. In FIG. 7, the fixed latency for transmission between a pair of nodes is 1ms, and each of the intermediate nodes adds a delay of an LBF queue latency. In this way the locally incurred latency at the node is added to the total delay incurred so far, so that it can be used by the subsequent node as one of the inputs to make its decision. As a result, the total end-to-end latency between the sending router or node RS 601 and the receiving router or node RR 609 is managed.
[0087] A packet 621 includes a destination header that indicates its destination, RR 609 in this example. This destination header is used by each forwarding node RA 603, RB 605, RC 607 to steer packet 621 to the next forwarding node or final receiver RR 609. In addition to the network header, for latency based forwarding packet 621 adds three parameters to a forwarding header: edelay, lmin and lmax. Although the present discussion is primarily based on an embodiment where this forwarding metadata is carried by a forwarding, or LBF, header, in alternate embodiments the forwarding, or LBF, metadata can be in a packet that can, for example, be coupled with a command in an internet protocol. The edelay parameter allows each forwarding node (RA 603, RB 605, RC 607) to determine the difference in time (latency) between when the node receives the packet and when the sender RS 601 sent the packet. In one set of embodiments, the edelay parameter is the latency, or delay, encountered so far. It is updated at each node, which adds the latency locally incurred so far plus the known outgoing link latency to the next hop. In another set of embodiments, a sender timestamp is added once by the sender RS 601, and subsequent nodes compute the latency (edelay) incurred so far by subtracting the sending timestamp from the current time. In the sender timestamp embodiments, the forwarding nodes RA 603, RB 605, RC 607 do not need to update the field, but this method does require a time-synchronized network. In further embodiments, a desired time of arrival could also be indicated. The parameters lmin and lmax are respectively an end-to-end minimum and maximum latency for the Service Level Objectives (SLOs). The latency with which the final receiving node RR 609 receives the packet is meant to be between the minimum and maximum latency values lmin and lmax.
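A hedged sketch of the three LBF parameters and the two ways of maintaining edelay described above (the class and function names, and the millisecond units, are assumptions made only for illustration):

```python
from dataclasses import dataclass

@dataclass
class LbfHeader:
    """Illustrative LBF forwarding metadata carried alongside the network header."""
    edelay_ms: float  # delay accumulated since the sender (first embodiment)
    lmin_ms: float    # end-to-end minimum latency of the SLO
    lmax_ms: float    # end-to-end maximum latency of the SLO

def update_edelay(header, local_delay_ms, outgoing_link_delay_ms):
    """First embodiment: each node adds its locally incurred delay plus the
    known latency of the outgoing link to the next hop."""
    header.edelay_ms += local_delay_ms + outgoing_link_delay_ms
    return header

def edelay_from_timestamp(send_timestamp_ms, now_ms):
    """Second embodiment: with a time-synchronized network, the delay so far
    is the difference between the current time and the sender's timestamp,
    and the header field does not need to be rewritten at each node."""
    return now_ms - send_timestamp_ms
```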
[0088] FIG. 8 considers the node behavior for a pair of nodes from FIG. 7 in more detail. FIG. 8 illustrates two of the latency based forwarding nodes of FIG. 7, such as RA 603 and RB 605, and the resource manager 611. Each of the nodes RA 603 and RB 605 includes a control plane 711, 712 (such as based on an interior gateway protocol, IGP), a forwarding plane 731, 732, and a latency based forwarding protocol queue or queues 741, 742 in which the packets are placed for the next hop.
[0089] As shown in FIG. 8, embodiments of latency based forwarding can be very general in describing how the forwarding nodes RA 603, RB 605, RC 607 can achieve this forwarding goal. A centralized resource manager 611 can provide control/policy/data to the forwarding nodes RA 603, RB 605, RC 607, and/or a distributed mechanism can be used. In some embodiments, the number of hops from the current node to the destination and/or the minimal latency to the destination can be accessed, such as by being communicated by a “control plane” 711, 712 (e.g., a protocol such as an IGP, or provisioned through a controller, for example). In some embodiments, this information can be added to a forwarding information database, or FIB, along with other information such as the next hop. A forwarding plane 731, 732 can be used to help steer the packet 621 on every forwarding node to the next hop according to the packet's destination parameter. With the LBF queues for the next hop 741, 742, the packets will have updated edelay values 743, 744 that are provided to the forwarding plane of the next LBF node. A packet can be entered into the queue based on its delay budget. If the edelay value of a packet is over the maximum latency, the packet can be discarded. Depending on the embodiment, the LBF queue for the next hop 741, 742 can be one or multiple queues and, for embodiments with multiple queues, the queues can be ranked or un-ranked.
[0090] A number of embodiments are possible for the latency based forwarding techniques presented here. Embodiments may or may not use path prediction, and the forwarding delay per hop based on lmin, lmax, and edelay can be determined by differing algorithms.
[0091] Some embodiments can include the early discard of packets if the delay experienced so far exceeds lmax for the packet. In embodiments where the physical propagation delay (i.e., the non-queueing delay due to the time for a packet to propagate between the nodes of the hops on a path to the destination) of a packet is known, early discard at a node can be based on the sum of the experienced delay and the remaining physical propagation delay exceeding lmax.
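A one-line check capturing that early-discard condition (a sketch only, assuming the remaining physical propagation delay is known to the node and millisecond units):

```python
def discard_early(edelay_ms, remaining_propagation_ms, lmax_ms):
    """True if the packet cannot reach its destination within lmax even with
    zero further queueing delay, so forwarding it would waste resources."""
    return edelay_ms + remaining_propagation_ms > lmax_ms
```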
[0092] The latency based forwarding techniques can also be used to affect congestion delays in a system such as illustrated in FIG. 6. Under congestion, packets that have more time to reach their destination because of lmax and path prediction can be delayed longer than packets that need to be forwarded with less delay to make it to their destination in time for their lmax. For example, if a packet p1 from RA 501 with a relatively lower lmax has RE 509 as its destination and a packet p2 from RB 503 with a relatively higher lmax has RD 507 as its destination, then, based on their relative lmax values, path lengths, or both, the packet p1 can be propagated more rapidly at the intermediate nodes to reduce congestion and allow the packet p1 to move through the nodes of the network with less delay relative to p2, which can better afford to be delayed as it may have fewer hops to traverse, for example.
[0093] Under latency based forwarding, packets can be delayed hop by hop, as the lmin value can affect the delay at a node even without any load on the network. Packets with higher lmin values are typically delayed in a node more than those with lower lmin values, so that packets do not arrive too early at further downstream nodes. For example, given two paths to a destination with the same predicted delay, but where one has more hops, packets with the same lmin will see less delay per node on paths with more hops: when more hops are needed to forward a packet, each node can spend less time processing the packet.
[0094] In some embodiments, where congestion does result in packets being discarded, this can be based on the packets' respective lmax values and path predictions. When the offered traffic for an output interface of a node exceeds its rate long enough that the node needs to discard packets, the one or more processors of the controller circuitry can preferentially discard those packets for which it is determined that they would not make it to the corresponding network destination within the allotted lmax time because of the added congestion delay on this node (that is, the packets would not be discarded early on this node if there were no congestion).
[0095] Certain embodiments of the present technology described herein can be implemented using hardware, software, or a combination of both hardware and software. The software used is stored on one or more of the processor readable storage devices described above to program one or more of the processors to perform the functions described herein. The processor readable storage devices can include computer readable media such as volatile and non-volatile media, removable and non-removable media. By way of example, and not limitation, computer readable media may comprise computer readable storage media and communication media. Computer readable storage media may be implemented in any method or technology for storage of information such as computer readable instructions, data structures, program modules or other data. Examples of computer readable storage media include RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital versatile disks (DVD) or other optical disk storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to store the desired information and which can be accessed by a computer. A computer readable medium or media does not include propagated, modulated, or transitory signals.
[0096] Communication media typically embodies computer readable instructions, data structures, program modules or other data in a propagated, modulated or transitory data signal such as a carrier wave or other transport mechanism and includes any information delivery media. The term “modulated data signal” means a signal that has one or more of its characteristics set or changed in such a manner as to encode information in the signal. By way of example, and not limitation, communication media includes wired media such as a wired network or direct-wired connection, and wireless media such as RF and other wireless media. Combinations of any of the above are also included within the scope of computer readable media.
[0097] In alternative embodiments, some or all of the software can be replaced by dedicated hardware logic components. For example, and without limitation, illustrative types of hardware logic components that can be used include Field-programmable Gate Arrays (FPGAs), Application-specific Integrated Circuits (ASICs), Application-specific Standard Products (ASSPs), System-on-a-chip systems (SOCs), Complex Programmable Logic Devices (CPLDs), special purpose computers, etc. In one embodiment, software (stored on a storage device) implementing one or more embodiments is used to program one or more processors. The one or more processors can be in communication with one or more computer readable media/storage devices, peripherals and/or communication interfaces.
[0098] It is understood that the present subject matter may be embodied in many different forms and should not be construed as being limited to the embodiments set forth herein. Rather, these embodiments are provided so that this subject matter will be thorough and complete and will fully convey the disclosure to those skilled in the art. Indeed, the subject matter is intended to cover alternatives, modifications and equivalents of these embodiments, which are included within the scope and spirit of the subject matter as defined by the appended claims. Furthermore, in the following detailed description of the present subject matter, numerous specific details are set forth in order to provide a thorough understanding of the present subject matter. However, it will be clear to those of ordinary skill in the art that the present subject matter may be practiced without such specific details.
[0099] Aspects of the present disclosure are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatuses (systems) and computer program products according to embodiments of the disclosure. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable instruction execution apparatus, create a mechanism for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks.
[00100] The description of the present disclosure has been presented for purposes of illustration and description, but is not intended to be exhaustive or limited to the disclosure in the form disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the disclosure. The aspects of the disclosure herein were chosen and described in order to best explain the principles of the disclosure and the practical application, and to enable others of ordinary skill in the art to understand the disclosure with various modifications as are suited to the particular use contemplated.
[00101] The disclosure has been described in conjunction with various embodiments. However, other variations and modifications to the disclosed embodiments can be understood and effected from a study of the drawings, the disclosure, and the appended claims, and such variations and modifications are to be interpreted as being encompassed by the appended claims. In the claims, the word “comprising” does not exclude other elements or steps, and the indefinite article “a” or “an” does not exclude a plurality.
[00102] For purposes of this document, it should be noted that the dimensions of the various features depicted in the figures may not necessarily be drawn to scale.
[00103] For purposes of this document, reference in the specification to “an embodiment,” “one embodiment,” “some embodiments,” or“another embodiment” may be used to describe different embodiments or the same embodiment.
[00104] For purposes of this document, a connection may be a direct connection or an indirect connection (e.g., via one or more other parts). In some cases, when an element is referred to as being connected or coupled to another element, the element may be directly connected to the other element or indirectly connected to the other element via intervening elements. When an element is referred to as being directly connected to another element, then there are no intervening elements between the element and the other element. Two devices are“in communication” if they are directly or indirectly connected so that they can communicate electronic signals between them.
[00105] For purposes of this document, the term "based on" may be read as "based at least in part on."
[00106] For purposes of this document, without additional context, use of numerical terms such as a "first" object, a "second" object, and a "third" object may not imply an ordering of objects, but may instead be used for identification purposes to identify different objects.
[00107] The foregoing detailed description has been presented for purposes of illustration and description. It is not intended to be exhaustive or to limit the subject matter claimed herein to the precise form(s) disclosed. Many modifications and variations are possible in light of the above teachings. The described embodiments were chosen in order to best explain the principles of the disclosed technology and its practical application to thereby enable others skilled in the art to best utilize the technology in various embodiments and with various modifications as are suited to the particular use contemplated. It is intended that the scope be defined by the claims appended hereto.
[00108] Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are disclosed as example forms of implementing the claims.
Claims
1. A node for transferring packets over a network, comprising:
a network interface configured to receive and forward packets over the network; and
one or more processors coupled to the network interface, the one or more processors configured to:
receive, from the network interface, a packet, the packet including a network header, indicating a network destination, and forwarding metadata, the forwarding metadata indicating accumulated delay metadata and a minimum latency for the transfer of the packet from a network sender to the network destination;
determine a minimum delay at the node for the packet based on the minimum latency and the accumulated delay metadata; and
forward the packet at a time based on the minimum delay.
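For purposes of illustration only, the following sketch shows one way the steps recited in claim 1 might be realized in software. The field names, the use of seconds as the time unit, the blocking sleep, and the expected_downstream_delay estimate are assumptions made for this example and are not required by the claim.

```python
import time

def node_min_delay(slo_min_latency: float,
                   accumulated_delay: float,
                   expected_downstream_delay: float = 0.0) -> float:
    """Minimum time (seconds) this node should hold the packet so that the
    end-to-end latency does not undershoot the minimum latency of the SLO.
    expected_downstream_delay is an assumed estimate of the delay the packet
    will still incur after leaving this node (zero on the last hop)."""
    remaining_budget = slo_min_latency - accumulated_delay
    return max(0.0, remaining_budget - expected_downstream_delay)

def forward_with_min_delay(packet: dict, send) -> None:
    """Hold the packet for at least its node-level minimum delay, then send it."""
    delay = node_min_delay(packet["min_latency"], packet["accumulated_delay"])
    if delay > 0.0:
        time.sleep(delay)  # a real forwarder would use a timer or a ranked queue, not sleep
    send(packet)
```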
2. The node of claim 1, further comprising:
one or more ranked queues configured to store packets to forward over the network, wherein
the one or more processors are coupled to the one or more ranked queues and are further configured to:
determine a queueing rank for the packet from the minimum delay; and
enter the packet into one of the ranked queues based on the determined queueing rank for the packet.
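Continuing the illustration, a minimal sketch of the ranked queue of claim 2, assuming the queueing rank is simply the earliest permitted transmission time; a hardware implementation would more likely use a priority-queue structure, which is an assumption beyond the claim language.

```python
import heapq
import itertools
import time

_seq = itertools.count()  # tie-breaker so heap comparisons never reach the packet dict

class RankedQueue:
    """Minimal rank-ordered queue: packets become eligible for forwarding in
    order of their queueing rank, taken here to be the earliest send time."""

    def __init__(self):
        self._heap = []

    def enqueue(self, packet: dict, node_min_delay: float) -> None:
        rank = time.monotonic() + node_min_delay  # queueing rank derived from the minimum delay
        heapq.heappush(self._heap, (rank, next(_seq), packet))

    def dequeue_ready(self):
        """Pop the head packet once its rank (earliest send time) has been reached."""
        if self._heap and self._heap[0][0] <= time.monotonic():
            return heapq.heappop(self._heap)[2]
        return None
```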
3. The node of any of claims 1-2, wherein the accumulated delay metadata includes an accumulated delay experienced by the packet since being transmitted by a network sender and the one or more processors are further configured to:
update the indicated accumulated delay experienced by the packet; and
determine a minimum delay at the node for the packet based on the minimum latency and the updated indicated accumulated delay experienced by the packet.
4. The node of claim 3, wherein the forwarding metadata is a forwarding header.
5. The node of claim 4, wherein the forwarding header further indicates a maximum latency for the transfer of the packet from the network sender to the network destination, and the one or more processors are further configured to:
determine a maximum delay at the node for the packet based on the maximum latency and the updated indicated accumulated delay experienced by the packet; and
forward the packet at a time based on the minimum delay and the maximum delay.
6. The node of claim 5, further comprising:
one or more ranked queues configured to store packets to forward over the network, wherein
the one or more processors are coupled to the one or more ranked queues and are further configured to:
determine a queueing rank for the packet from the minimum delay and the maximum delay; and
enter the packet into one of the ranked queues based on the determined queueing rank for the packet.
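For claims 5 and 6, an illustrative sketch of deriving both a minimum and a maximum node-level delay from the end-to-end latency bounds and the accumulated delay, and of forming a queueing rank from the pair. The particular rank ordering (earliest permitted send time first, local deadline second) is an assumption for the example, not part of the claims.

```python
import time

def node_delay_bounds(min_latency: float, max_latency: float,
                      accumulated_delay: float,
                      expected_downstream_delay: float = 0.0):
    """Return (min_delay, max_delay), in seconds, that this node may hold the packet."""
    lo = max(0.0, min_latency - accumulated_delay - expected_downstream_delay)
    hi = max(lo, max_latency - accumulated_delay - expected_downstream_delay)
    return lo, hi

def queueing_rank(min_delay: float, max_delay: float):
    """One possible rank: primary key is the earliest permitted send time,
    secondary key is the latest permitted send time (the local deadline)."""
    now = time.monotonic()
    return (now + min_delay, now + max_delay)
```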
7. The node of any of claims 4-6, wherein the one or more processors are further configured to:
access a number of hops from the node to the network destination; and
to further determine the minimum delay based on the number of hops.
8. The node of any of claims 4-7, wherein the one or more processors are further configured to:
access an estimated amount of time for fixed transfer times between the node and the network destination; and
to further determine the minimum delay based on the estimated amount of time for fixed transfer times between the node and the network destination.
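Claims 7 and 8 refine the minimum delay with the number of remaining hops and an estimate of the fixed (non-queueing) transfer time still ahead. The even division of the remaining budget across hops shown below is one plausible policy, assumed here for illustration.

```python
def node_min_delay_per_hop(min_latency: float, accumulated_delay: float,
                           hops_remaining: int,
                           fixed_transfer_remaining: float) -> float:
    """Spread what is left of the minimum-latency budget, net of the estimated
    fixed transfer time still ahead, evenly over the remaining hops."""
    budget = min_latency - accumulated_delay - fixed_transfer_remaining
    if budget <= 0.0 or hops_remaining <= 0:
        return 0.0
    return budget / hops_remaining

# Example: 20 ms minimum latency, 8 ms already accumulated, 2 ms of fixed wire
# time left, 4 hops ahead -> hold the packet about 2.5 ms at this node.
print(node_min_delay_per_hop(0.020, 0.008, 4, 0.002))  # ~0.0025
```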
9. The node of any of claims 4-8, wherein the forwarding header further indicates a maximum latency for the transfer of the packet from the network sender to the network destination, and the one or more processors are further configured to:
discard the packet if the updated indicated accumulated delay experienced by the packet exceeds the maximum latency.
10. The node of any of claims 4-9, wherein the forwarding header further indicates a maximum latency for the transfer of the packet from the network sender to the network destination, and the one or more processors are further configured to:
discard the packet in response to determining that the packet will exceed the maximum latency before reaching the network destination.
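Claims 9 and 10 recite two discard conditions tied to the maximum latency. In the sketch below, best_case_remaining (the least additional delay the packet can still incur before reaching the destination) is an assumed input that a node might derive from hop counts and fixed transfer time estimates.

```python
def should_discard(accumulated_delay: float, max_latency: float,
                   best_case_remaining: float = 0.0) -> bool:
    """Drop the packet if its delay budget is already exceeded (claim 9), or if
    even the best remaining case would overshoot the maximum latency (claim 10)."""
    if accumulated_delay > max_latency:
        return True
    return accumulated_delay + best_case_remaining > max_latency
```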
11. The node of any of claims 1-10, wherein the accumulated delay metadata includes a timestamp.
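Claim 11 notes that the accumulated delay metadata may be carried as a timestamp. The two encodings sketched below, a sender timestamp read against the local clock and a running counter updated hop by hop, are illustrative only; the clock-synchronization assumption of the timestamp variant is the example's, not the claim's.

```python
import time

def accumulated_delay_from_timestamp(sender_timestamp: float) -> float:
    """Timestamp encoding: the accumulated delay is 'now minus send time'
    (assumes the sender and this node have synchronized clocks)."""
    return time.time() - sender_timestamp

def add_to_accumulated_field(packet: dict, time_spent_at_node: float) -> None:
    """Counter encoding: each node adds its own contribution before forwarding."""
    packet["accumulated_delay"] += time_spent_at_node
```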
12. The node of any of claims 1-11, wherein the node is a router.
13. The node of any of claims 1-12, wherein the node is a server.
14. The node of any of claims 1-13, wherein the node is a network switch.
15. A method of transferring a packet over a network, comprising:
receiving, at a node, a packet including a network header, indicating a network destination, and forwarding metadata indicating accumulated delay metadata and a minimum latency for the transfer of the packet from a network sender to the network destination;
determining, by the node, a minimum delay at the node for the packet based on the minimum latency and the accumulated delay metadata; and
forwarding the packet at a time based on the minimum delay.
16. The method of claim 15, further comprising:
maintaining, by the node, one or more ranked queues of packets for forwarding from the node, and
wherein forwarding the packet at a time based on the minimum delay includes:
determining, by the node, a queueing rank for the packet from the minimum delay; and
entering the packet into one of the ranked queues based on the determined queueing rank for the packet.
17. The method of any of claims 15-16, further comprising:
accessing, at the node, a number of hops from the node to the network destination, and
wherein determining the minimum delay at the node for the packet is further based on the number of hops.
18. The method of any of claims 15-17, further comprising:
accessing, at the node, an estimated amount of time for fixed transfer times between the node and the network destination; and
wherein determining the minimum delay at the node for the packet is further based on the estimated amount of time for fixed transfer times between the node and the network destination.
19. The method of any of claims 15-18, wherein the forwarding metadata is a forwarding header.
20. The method of claim 19, wherein the accumulated delay metadata includes an accumulated delay experienced by the packet since being transmitted by a network sender, the method further comprising:
updating, by the node, the indicated accumulated delay experienced by the packet, wherein determining the minimum delay at the node for the packet is based on the minimum latency and the accumulated delay metadata.
21. The method of claim 20, wherein the forwarding header further indicates a maximum latency for the transfer of the packet from the network sender to the network destination, the method further comprising:
discarding the packet if the updated indicated accumulated delay experienced by the packet exceeds the maximum latency.
22. The method of any of claims 20-21, wherein the forwarding header further indicates a maximum latency for the transfer of the packet from the network sender to the network destination, the method further comprising:
discarding the packet in response to determining that the packet will exceed the maximum latency before reaching the network destination.
23. The method of any of claims 20-22, wherein the forwarding header further indicates a maximum latency for the transfer of the packet from the network sender to the network destination, the method further comprising:
determining, by the node, a maximum delay at the node for the packet based on the maximum latency and the updated indicated accumulated delay experienced by the packet, and
forwarding the packet at a time based on the minimum delay and the maximum delay.
24. The method of claim 23, further comprising:
maintaining, by the node, one or more ranked queues of packets for forwarding from the node, and
wherein forwarding the packet at a time based on the minimum delay includes:
determining, by the node, a queueing rank for the packet from the minimum delay and the maximum delay; and
entering the packet into one of the ranked queues based on the determined queueing rank for the packet.
25. A system for transmitting packets from a sending network device to a receiving network device, comprising:
one or more nodes connectable in series to transfer a packet from the sending network device to the receiving network device, each of the nodes comprising:
a network interface configured to receive and forward the packet over the network, the packet including a network header, indicating the receiving network device, and a forwarding header, indicating an accumulated delay experienced by the packet since being transmitted by the sending network device and a minimum latency for the transfer of the packet from the sending network device to the receiving network device; and
one or more processors coupled to the network interface, the one or more processors configured to:
receive the packet from the network interface;
update the indicated accumulated delay experienced by the packet;
determine a minimum delay at the node for the packet based on the minimum latency and the updated indicated accumulated delay experienced by the packet; and
forward the packet at a time based on the minimum delay.
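To tie the system claims together, a toy end-to-end walk over nodes connected in series, in which each node updates the accumulated delay, spreads the remaining minimum-latency budget over the hops still ahead, and holds the packet accordingly before forwarding. The per-hop processing delays and the even-spreading policy are assumptions for the example.

```python
def traverse(packet: dict, per_hop_processing: list) -> dict:
    """Walk the packet across nodes in series, updating its accumulated delay."""
    total_hops = len(per_hop_processing)
    for i, processing_delay in enumerate(per_hop_processing):
        hops_left = total_hops - i
        packet["accumulated_delay"] += processing_delay   # update the metadata
        budget = packet["min_latency"] - packet["accumulated_delay"]
        hold = max(0.0, budget / hops_left)               # node-level minimum delay
        packet["accumulated_delay"] += hold               # time parked before forwarding
    return packet

# Example: 3 hops, 10 ms minimum latency, 1 ms of processing at each hop.
pkt = {"min_latency": 0.010, "accumulated_delay": 0.0}
print(traverse(pkt, [0.001, 0.001, 0.001])["accumulated_delay"])  # ~0.010
```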
26. The system of claim 25, wherein each of the one or more nodes further comprises:
one or more ranked queues configured to store packets to forward over the network, and
wherein the one or more processors are coupled to the one or more ranked queues and are further configured to:
determine a queueing rank for the packet from the minimum delay; and
enter the packet into one of the ranked queues based on the determined queueing rank for the packet.
27. The system of any of claims 25-26, wherein, for each of the one or more nodes, the one or more processors are further configured to:
access a number of hops from the node to the receiving network device; and
to further determine the minimum delay based on the number of hops.
28. The system of any of claims 25-27, wherein, for each of the one or more nodes, the one or more processors are further configured to:
access an estimated amount of time for fixed transfer times between the node and the receiving network device; and
to further determine the minimum delay based on the estimated amount of time for fixed transfer times between the node and the receiving network device.
29. The system of any of claims 25-28, wherein the forwarding header further indicates a maximum latency for the transfer of the packet from the sending network device to the receiving network device, and, for each of the one or more nodes, the one or more processors are further configured to:
determine a maximum delay at the node for the packet based on the maximum latency and the updated indicated accumulated delay experienced by the packet; and
forward the packet at a time based on the minimum delay and the maximum delay.
30. The system of claim 29, wherein each of the one or more nodes further comprises:
one or more ranked queues configured to store packets to forward over the network, and
wherein the one or more processors are coupled to the one or more ranked queues and are further configured to:
determine a queueing rank for the packet from the minimum delay and the maximum delay; and
enter the packet into one of the ranked queues based on the determined queueing rank for the packet.
31. The system of any of claims 25-30, wherein the forwarding header further indicates a maximum latency for the transfer of the packet from the sending network device to the receiving network device, and
wherein, for each of the one or more nodes, the one or more processors are further configured to:
discard the packet if the updated indicated accumulated delay experienced by the packet exceeds the maximum latency.
32. The system of any of claims 25-31, wherein one or more of the nodes is a router.
33. The system of any of claims 25-32, wherein one or more of the nodes is a networking switch.
34. The system of any of claims 25-33, wherein one or more of the nodes is a server.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US201962820350P | 2019-03-19 | 2019-03-19 | |
US62/820,350 | 2019-03-19 | | |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2020191013A1 (en) | 2020-09-24 |
Family
ID=70289460
Family Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/US2020/023289 WO2020191014A1 (en) | 2019-03-19 | 2020-03-18 | Latency based forwarding of packets with dynamic priority destination policies |
PCT/US2020/023288 WO2020191013A1 (en) | 2019-03-19 | 2020-03-18 | Latency based forwarding of packets for service-level objectives (slo) with quantified delay ranges |
Family Applications Before (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/US2020/023289 WO2020191014A1 (en) | 2019-03-19 | 2020-03-18 | Latency based forwarding of packets with dynamic priority destination policies |
Country Status (1)
Country | Link |
---|---|
WO (2) | WO2020191014A1 (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2023234816A1 (en) * | 2022-06-03 | 2023-12-07 | Telefonaktiebolaget Lm Ericsson (Publ) | Method for handling data communication by providing an indication of a required delivery time (dt) to a packet |
Families Citing this family (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114793207A (en) * | 2021-01-26 | 2022-07-26 | 中国移动通信有限公司研究院 | Data processing method, device, network boundary device and distributed management device |
CN113726656B (en) * | 2021-08-09 | 2023-04-07 | 北京中电飞华通信有限公司 | Method and device for forwarding delay sensitive flow |
CN116436863A (en) * | 2022-01-04 | 2023-07-14 | 中兴通讯股份有限公司 | Message scheduling method, network device, storage medium and computer program product |
CN114650261A (en) * | 2022-02-24 | 2022-06-21 | 同济大学 | Reordering scheduling method in time-sensitive network queue |
US20230379264A1 (en) * | 2022-05-18 | 2023-11-23 | Electronics And Telecommunications Research Institute | Method and apparatus for on-time packet forwarding based on resource |
CN115022901B (en) * | 2022-05-30 | 2025-05-06 | 重庆邮电大学 | 5G-side service flow resource configuration method for 5G-TSN integration |
CN116192339B (en) * | 2023-04-26 | 2023-07-28 | 宏景科技股份有限公司 | Distributed internet of things data transmission method and system |
US20250208901A1 (en) * | 2023-12-26 | 2025-06-26 | Nokia Technologies Oy | Enforcement of end-to-end transaction latency |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20130022042A1 (en) * | 2011-07-19 | 2013-01-24 | Cisco Technology, Inc. | Delay budget based forwarding in communication networks |
US20160285720A1 (en) * | 2013-11-13 | 2016-09-29 | Telefonaktiebolaget L M Ericsson (Publ) | Methods and Devices for Media Processing in Distributed Cloud |
Family Cites Families (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7852763B2 (en) * | 2009-05-08 | 2010-12-14 | Bae Systems Information And Electronic Systems Integration Inc. | System and method for determining a transmission order for packets at a node in a wireless communication network |
US9609543B1 (en) * | 2014-09-30 | 2017-03-28 | Sprint Spectrum L.P. | Determining a transmission order of data packets in a wireless communication system |
US10560383B2 (en) * | 2016-11-10 | 2020-02-11 | Futurewei Technologies, Inc. | Network latency scheduling |
2020
- 2020-03-18 WO PCT/US2020/023289 patent/WO2020191014A1/en active Application Filing
- 2020-03-18 WO PCT/US2020/023288 patent/WO2020191013A1/en active Application Filing
Also Published As
Publication number | Publication date |
---|---|
WO2020191014A1 (en) | 2020-09-24 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2020191013A1 (en) | Latency based forwarding of packets for service-level objectives (slo) with quantified delay ranges | |
US11362959B2 (en) | Latency based forwarding of packets with destination policies | |
US11432223B2 (en) | Methods and apparatuses for selecting a first base station or a second base station to transmit a packet data unit (PDU) to a user equipment (UE) | |
CA2940754C (en) | Network packet latency management | |
CN105474588B (en) | Adaptive traffic engineering configuration | |
US10849008B2 (en) | Processing method and device for radio bearer for transmitting data stream | |
CN103986653B (en) | Network nodes and data transmission method and system | |
CN107005431A (en) | The system and method for changing the data plane configuration based on service | |
JP2016541198A (en) | A framework for traffic engineering in software-defined networking | |
WO2018225039A1 (en) | Method for congestion control in a network | |
CN104412549A (en) | Network entity of a communication network | |
CN105490939B (en) | Method, system and computer medium for routing in a mobile wireless network | |
CN115812208A (en) | Method and system for Deep Reinforcement Learning (DRL) based scheduling in wireless systems | |
US11540164B2 (en) | Data packet prioritization for downlink transmission at sender level | |
WO2021119675A2 (en) | Guaranteed latency based forwarding | |
CN113726656A (en) | Method and device for forwarding delay sensitive flow | |
Ghaderi et al. | Flow-level stability of wireless networks: Separation of congestion control and scheduling | |
Xia et al. | Utility-optimal wireless routing in the presence of heavy tails | |
US20160183163A1 (en) | Control method, controller and packet processing method for software-defined network | |
CN111756557B (en) | Data transmission method and device | |
Hou et al. | Joint congestion control and scheduling in wireless networks with network coding | |
Zhou et al. | Managing background traffic in cellular networks | |
WO2022073583A1 (en) | Distributed traffic engineering at edge devices in a computer network | |
Orawiwattanakul et al. | Fair bandwidth allocation in optical burst switching networks | |
Zhao et al. | Maximizing the stable throughput of high-priority traffic for wireless cyber-physical systems |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 20719265; Country of ref document: EP; Kind code of ref document: A1 |
| NENP | Non-entry into the national phase | Ref country code: DE |
| 122 | Ep: pct application non-entry in european phase | Ref document number: 20719265; Country of ref document: EP; Kind code of ref document: A1 |