WO2021174236A2 - In-band signaling for latency guarantee service (LGS)


Info

Publication number: WO2021174236A2
Application number: PCT/US2021/038276
Authority: WIPO (PCT)
Prior art keywords: resource reservation, reservation request, request packet, cir, lgs
Other languages: French (fr)
Other versions: WO2021174236A8 (en), WO2021174236A3 (en)
Inventors: Lijun Dong, Lin Han
Original Assignee: Futurewei Technologies, Inc.
Application filed by Futurewei Technologies, Inc.
Publication of WO2021174236A2, WO2021174236A3, and WO2021174236A8

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 47/00 Traffic control in data switching networks
    • H04L 47/70 Admission control; Resource allocation
    • H04L 47/80 Actions related to the user profile or the type of traffic
    • H04L 47/805 QOS or priority aware
    • H04L 47/10 Flow control; Congestion control
    • H04L 47/24 Traffic characterised by specific attributes, e.g. priority or QoS
    • H04L 47/2425 Traffic characterised by specific attributes, e.g. priority or QoS, for supporting services specification, e.g. SLA
    • H04L 47/28 Flow control; Congestion control in relation to timing considerations
    • H04L 47/72 Admission control; Resource allocation using reservation actions during connection setup
    • H04L 47/724 Admission control; Resource allocation using reservation actions during connection setup at intermediate nodes, e.g. resource reservation protocol [RSVP]
    • H04L 47/50 Queue scheduling
    • H04L 47/62 Queue scheduling characterised by scheduling criteria
    • H04L 47/6295 Queue scheduling characterised by scheduling criteria using multiple queues, one for each individual QoS, connection, flow or priority

Definitions

  • the present disclosure is generally related to network communications, and is specifically related to mechanisms for reserving network resources to provide a quality of service (QoS) and/or an LGS using data plane signaling.
  • Various mechanisms may be employed to reserve network resources to support fulfillment of QoS requirements.
  • Such mechanisms employ a control plane to allocate resources.
  • Such mechanisms may be complicated, and hence may not scale.
  • Such mechanisms may also be difficult to implement in a multi-network network domain environment, for example due to confidentiality and/or security concerns.
  • a control plane in a network domain may not share internal network resource configurations with a control plane in another network domain, for increased security.
  • control plane based resource reservation mechanisms may require a setup and teardown process that prevents usage for dynamic flows.
  • each control plane in each network domain may have a separate setup and teardown process that does not share resource information across network domain boundaries.
  • any one of the foregoing embodiments may be combined with any one or more of the other foregoing embodiments to create a new embodiment within the scope of the present disclosure.
  • FIG. 1 is a schematic diagram of an example network configured to reserve network resources for QoS, Committed Information Rate (CIR), and/or LGS in response to data plane signaling.
  • FIG. 2 is a protocol diagram of an example mechanism of employing data plane signaling to reserve network resources for QoS, CIR, and/or LGS.
  • FIG. 3 is a schematic diagram of an example network element.
  • FIG. 4 is a schematic diagram of an example network node configured to perform traffic classification.
  • FIG. 5 is a schematic diagram of an example New Internet Protocol (IP) packet that may be employed to perform data plane signaling to reserve network resources.
  • FIG. 6 illustrates an example of resource reservation request metadata that can be employed to request network resources be reserved via data plane signaling.
  • FIG. 7 illustrates an example of resource reservation response metadata that can be employed to communicate a result of a resource reservation request via data plane signaling.
  • FIG. 8 is a flowchart of an example method of requesting reservation of network resources at a source via data plane signaling.
  • FIG. 9 is a flowchart of an example method of performing resource reservation at a destination by employing data plane signaling.
  • FIG. 10 is a flowchart of an example method of performing resource reservation at an intermediate network node and/or a destination by employing data plane signaling.
  • FIG. 11 is a schematic diagram of an example system for performing resource reservation by employing data plane signaling.
  • in-band indicates the resource reservation occurs across the data plane, for example via a communication between a source and a destination, such as a client and a server, via a network of nodes/hops/network elements.
  • the source creates a QoS resource reservation request packet, such as an LGS request packet.
  • the source forwards the QoS resource reservation request packet toward the destination.
  • the QoS resource reservation request packet contains metadata describing the parameters of the service requirements requested for one or more corresponding flows.
  • the nodes along the path check the metadata, determine whether the flow can be admitted on the requested terms, allocate resources, update metadata, and/or forward the QoS resource reservation request packet toward the destination.
  • the destination receives the QoS resource reservation request packet, determines whether the resource reservation was successful, and sends a QoS resource reservation response packet, such as an LGS response packet back to the source on the reverse path.
  • the nodes can release any allocated resources when resource reservation is not successful. Alternatively, the nodes can wait to formally allocate resources until the reservation response packet indicates the resource reservation/flow admission is successful for the entire path.
  • the source can then try again using data from the resource reservation response packet.
  • the source can request resources that are known to be available based on the resource reservation response packet.
  • the request packet can contain metadata indicating a latency deadline and a total maximum latency accumulated along the path.
  • the request packet can contain metadata indicating a requested Committed Information Rate (CIR) and an allowed CIR indicating the minimum allowed CIR among hops along the path.
  • the request packet can also contain metadata indicating whether each hop along the path has admitted the flow. The destination may use such data to set a flow priority, determine whether the setup is successful, and/or determine a reverse path.
  • the flow priority, setup success, and/or reverse path can then be added to the QoS resource reservation response packet.
  • the destination can include a recommended deadline into the response packet based on the total maximum latency in the request packet.
  • the destination can include a recommended CIR into the response packet based on the allowed CIR from the request packet.
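  • For illustration only, the request and response metadata described above can be modeled as two simple records. The field names, types, and units below are assumptions made for this sketch (shown in Python for readability) and are not the wire encoding defined by the disclosure.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class ReservationRequestMetadata:
    """Illustrative model of the QoS resource reservation request metadata."""
    deadline_ms: float                      # requested upper bound on end-to-end latency
    requested_cir_bps: int                  # committed information rate requested by the source
    total_max_latency_ms: float = 0.0       # maximum latency accumulated along the path
    allowed_cir_bps: Optional[int] = None   # minimum CIR allowable among the hops traversed so far
    admissible: List[bool] = field(default_factory=list)  # one admission flag per hop

@dataclass
class ReservationResponseMetadata:
    """Illustrative model of the QoS resource reservation response metadata."""
    success: bool                                     # S bit: flow admitted at every hop
    flow_priority: Optional[int] = None               # priority assigned by the destination
    reverse_path: Optional[List[str]] = None          # path back to the source
    recommended_deadline_ms: Optional[float] = None   # derived from the total maximum latency
    recommended_cir_bps: Optional[int] = None         # derived from the allowed CIR
```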
  • FIG. 1 is a schematic diagram of an example network 100 configured to reserve network resources for QoS, CIR, and/or LGS in response to data plane signaling.
  • the network 100 includes clients 101, servers 107, and transit nodes 105.
  • a client 101 is any device that requests communication of data from a server 107.
  • a server 107 is any device that serves data to a client 101.
  • a server 107 may maintain data and/or may be configured to process data provided by a client 101.
  • a client 101 may contact a server 107 to request data and/or to provide data to be processed.
  • the server 107 processes the client’s 101 request and provides the requested data to the requesting client 101.
  • the server 107 may also contact other clients and/or servers 107 at the requesting client’s 101 request.
  • the data is passed between the server(s) 107 and the client(s) 101 in flows 102.
  • a client 101 may be a computer, television, tablet, smart phone, or other internet connected media device and the server 107 may be a computer system, cloud based virtual machine, or other computing device configured to serve flows 102 of video data for viewing by the client 101.
  • the client 101 is a computing device capable of connecting to an audio and/or video teleconference and the server 107 is a computing device configured to manage the teleconference and connect the client 101 to other clients 101.
  • the network 100 further comprises various transit nodes 105.
  • the transit nodes 105 are configured to connect the clients 101 and the servers 107.
  • a transit node 105 is any communication device configured to receive data on a first set of interface(s) and communicate such data over a second set of interface(s).
  • a transit node 105 may be implemented as a repeater, a switch, a router, or other network communication device.
  • the transit nodes 105 can be connected in any configuration.
  • the transit nodes 105 may form a network domain and/or may be spread across multiple network domains.
  • the transit nodes 105 communicate requests and/or replies between the clients 101 and servers 107 to setup communication sessions.
  • the transit nodes 105 also forward data flows 102 between the clients 101 and servers 107.
  • a flow 102 is a related sequence of packets between two or more nodes in the network 100.
  • the network 100 includes a source 103 and a destination 109.
  • the source 103 is any device that initiates a communication with the destination 109 to setup a flow 102 between the source 103 and the destination 109.
  • the source 103 may be a client 101 and the destination 109 may be a server 107, or vice versa. Further, the source 103 may be the flow source or the flow destination. Likewise, the destination 109 may be the flow destination or the flow source.
  • Source 103 simply denotes the device that determines to reserve resources across the network. Destination 109 denotes the device that acts as an end point for such a resource reservation.
  • a source communicates with a control plane to request that resources be reserved between two end points.
  • the control plane determines a path between the end points and allocates such resources along the path.
  • the control plane in a first domain allocates resources across the first domain, signals the allocation to the affected nodes, and contacts a second domain.
  • the control plane in the second domain then allocates resources across the second domain, signals the allocation to the affected nodes, and contacts a third domain, etc.
  • Such an approach is time consuming and only works well for flows that are very predictable and/or that are setup in advance.
  • such a system utilizes a large amount of signaling resources just to setup and tear down resource allocations for each flow. Further, such an approach suffers degradations as a network is scaled to include many active users. For example, when many sources constantly start and stop flows, the control plane can become overwhelmed by a signaling storm.
  • the control plane is the portion of the network 100 that is configured to maintain network topology and determine path routing. Accordingly, the control plane generally configures the data plane.
  • the data plane is the portion of the network 100 that is configured to forward packets.
  • the disclosed system avoids such complexity by setting up quality of service (QoS) resource allocations across the network via data plane signaling (and not by using control plane signaling).
  • the source 103 can transmit a QoS resource reservation request packet toward the destination 109 via the transit nodes 105.
  • the QoS resource reservation request packet can indicate to the destination 109 and each transit node 105 on the path between the source 103 and the destination 109 that the source 103 is requesting network resources be allocated for use by one or more flows 102.
  • the QoS resource reservation request packet can contain metadata indicating relevant parameters related to the resource reservation, such as the type and amount of resources to be allocated.
  • Each transit node 105 can determine whether enough resources are available to meet the request and hence whether the flow 102 can be admitted at the transit node 105.
  • the transit node 105 then admits or denies the flow and indicates the decision in metadata in the QoS resource reservation request packet.
  • the transit node 105 allocates relevant resources when the flow 102 is admitted.
  • the transit node 105 then forwards the QoS resource reservation request packet toward the destination 109, for example via additional transit nodes 105.
  • the destination 109 can receive the QoS resource reservation request packet and determine whether the flow 102 has been admitted to each transit node 105 along the path. The destination 109 can then generate and send a QoS resource reservation response packet back to the source 103 along a reverse path.
  • the QoS resource reservation response packet contains metadata indicating whether flow 102 admission was successful or contains an indication of why admission was not successful along with suggested parameters for a subsequent QoS resource reservation request packet.
  • the transit nodes 105 can release such allocated resources when the QoS resource reservation response packet indicates that flow 102 admission was not completely successful for the entire path. In other examples, the transit nodes 105 can wait and allocate resources only upon receiving a QoS resource reservation response packet indicating that flow 102 admission is successful for the entire path.
  • the source 103 can then begin transmitting or receiving the flow 102 upon receiving a QoS resource reservation response packet indicating a successful admission of the flow 102 along the path. Further, the source 103 can use parameters from the QoS resource reservation response packet to set parameters in another QoS resource reservation request packet when admission of the flow 102 was not successful. The source 103 can continue sending QoS resource reservation request packets until admission is successful or until the source determines that resource reservation is not possible, for example due to timeouts and/or poor results indicated by parameter values in a QoS resource reservation response packet.
  • the present mechanism can employ metadata to perform several different types of resource reservation requests.
  • the request packet can request latency guaranteed service (LGS) where the resource allocation guarantees a specified latency for all packets in the flow 102 between the source 103 and the destination 109.
  • the request packet can request a committed information rate (CIR) where each transit node 105 allocates sufficient resources to the flow 102 to guarantee that flow 102 packets are processed and forwarded at a minimum data rate (e.g., bandwidth allocation).
  • the present mechanism can be employed to set a priority level for handling the flow(s) 102.
  • Other resource allocation schemes can also be accomplished with this mechanism.
  • a jumbo internet protocol (IP) packet, such as a packet containing IP version six (IPv6) extension headers and/or a packet containing IP version four (IPv4) options, can be employed to contain the relevant metadata.
  • a New IP packet can be used to contain the relevant metadata.
  • New IP is an emerging framework that may be leveraged to transport metadata along with the packet through the data plane via a New IP Metadata field. New IP packets have no maximum size.
  • New IP packets employ contract clause(s).
  • a contract clause is a conditional directive that indicates to transit nodes 105 that an action, which may be defined in the packet, should be taken when a condition defined in the contract clause occurs. This allows for control functionality to be implemented in data plane signaling. New IP is discussed in greater detail below.
  • the present mechanism improves functionality of the network 100, the source 103, the destination 109, and related transit nodes 105.
  • the present mechanism provides a simplified control mechanism for supporting flow level QoS based on a data flow loop instead of complicated control protocols such as resource reservation protocol (RSVP).
  • the present mechanism can also coexist with other protocols, such as congestion control and operations administration and management (OAM) protocols.
  • advanced network features, such as path protection, non-shortest path, etc., can be enhanced independently while employing the present mechanism.
  • the present mechanism is agnostic of upper layer protocols and is not impacted by user data encryption and/or IP security (IPsec).
  • the present mechanism is also backwards compatible and can coexist with other services.
  • the present mechanism can simplify and increase the efficiency of admission control and dynamic resource management. Further, the present mechanism increases the scalability and performance of the network 100. For example, a signaling storm does not occur because the present mechanism does not rely on a control protocol. The signaling follows the data stream, and hence is end-to-end signaling and can cross domains. Signaling can occur once and/or periodically. Data flow functions can also be managed as part of path refreshment. Also, resources can be set to be automatically released upon relevant expiration periods and/or conditions. As such, the present mechanisms allow for near real time resource reservation via the data plane in a way that is scalable and may cross network domain boundaries.
  • the described mechanisms may reduce processor, memory, and/or network resource usage at the source 103, destination 109, and/or intermediate transit nodes 105 by avoiding complex setup and tear-down protocols employed by a control plane.
  • the described examples solve various problems specific to network communications and increase the functionality and/or operability of network communication devices.
  • FIG. 2 is a protocol diagram of an example mechanism 200 of employing data plane signaling to reserve network resources for QoS, CIR, and/or LGS, for example between a source 103 and a destination 109 via transit nodes 105 in network 100.
  • QoS is a description or measurement of a guarantee of performance of a service by a network.
  • QoS applies to a broad range of network resource allocation mechanisms.
  • LGS is a guarantee of a maximum latency between a source and a destination for all packets in a flow.
  • CIR is a guarantee of a minimum rate of data transfer (e.g., bandwidth) by nodes along a path.
  • Mechanism 200 can cause the allocation of resources to support many QoS related technologies, such as CIR and LGS.
  • the source requests a resource allocation along a path between the source and the destination.
  • the source generates a QoS resource reservation request packet.
  • the QoS resource reservation request packet may be implemented as a New IP packet or a jumbo IP packet.
  • a QoS resource reservation request packet is any packet that requests resource reservation to support QoS.
  • the QoS resource reservation request packet may be a CIR request packet and/or an LGS request packet.
  • the QoS resource reservation request packet can contain metadata indicating a deadline containing a requested maximum latency and a field for total maximum latency accumulated along the path to support LGS.
  • the QoS resource reservation request packet may contain a requested CIR as well as an allowed CIR field indicating the minimum CIR that all intermediate nodes can allocate to the flow.
  • the total maximum latency and allowed CIR may be set to default values at the source and updated by the transit nodes.
  • the QoS resource reservation request packet is forwarded to the first node along the path.
  • the first node determines whether to admit the flow(s) indicated in the QoS resource reservation request packet. For example, the first node sets the total maximum latency to the latency added by the hop between the source and the first node and compares the total maximum latency to the deadline.
  • the first node may update an admissible bit in the metadata in the QoS resource reservation request packet to indicate that the flow is admitted in the first node.
  • the first node may update the admissible bit to indicate that the flow is not admitted at the first node. Further, at this point the first node optionally can also remove the deadline from the metadata of the packet, because that deadline can no longer be met.
  • the first node compares the amount of bandwidth available at the first node (e.g., ingress and egress rates) to the requested CIR.
  • the first node may update the admissible bit to indicate that the flow is admitted at the first node.
  • the first node may update the admissible bit to indicate that the flow is not admitted at the first node.
  • the first node may also compare the allowed CIR in the metadata to the amount of available bandwidth at the first node.
  • the first node may set the allowed CIR in the metadata to the available bandwidth at the first node.
  • This allowed CIR in the metadata can optionally replace the requested CIR in the packet, because that requested CIR can no longer be provided.
  • the admissible bit indicating that the flow is not admitted can also indicate that the metadata indicates the hypothetically allowed CIR, instead of the originally requested CIR.
  • the first node can optionally only update the total maximum latency and/or the CIR according to conditions at the first node, without use of an admissible bit.
  • the final destination can determine admissibility based on the final latency/CIR compared to the requested latency/CIR, when all values are carried in the metadata.
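  • As a minimal sketch of that destination-side check, reusing the illustrative metadata model above (and assuming the allowed CIR field is only lowered by hops that cannot meet the requested CIR):

```python
def admissible_from_accumulated_values(md: ReservationRequestMetadata) -> bool:
    """Variant without admissible bits: the destination compares the final
    accumulated latency and allowed CIR against the requested values."""
    latency_ok = md.total_max_latency_ms <= md.deadline_ms
    cir_ok = md.allowed_cir_bps is None or md.allowed_cir_bps >= md.requested_cir_bps
    return latency_ok and cir_ok
```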
  • the mechanism 200 employs both CIR and LGS.
  • An example algorithm for employing both CIR and LGS is as follows. An LGS flow is admissible when the following three conditions are satisfied. The first condition is that, for the router ingress, the requested CIR in the metadata should satisfy CIR ≤ R_ingress − Σ CIR_LGS, where CIR is the requested CIR, R_ingress is the packet ingress rate, and Σ CIR_LGS is the aggregated CIR of other previously admitted LGS flows.
  • The second condition is that, for the router egress, the requested CIR in the metadata should satisfy CIR ≤ R_LGS − Σ CIR_LGS, where CIR is the requested CIR, R_LGS is the total allowable rate for all LGS flows (as dictated by hardware constraints), and Σ CIR_LGS is the aggregated CIR of other previously admitted LGS flows.
  • the third condition is that adding the maximum latency generated by the current node to the total latency cannot result in a value that exceeds the requested latency deadline.
  • Based on the foregoing, the node compares the total latency to the requested latency deadline. When the total latency is smaller than the deadline and adding the requested CIR does not exceed the configured total ingress or egress rates, then the admissible bit for the current hop is set to one (admitted) and the requested CIR is recorded at the node.
  • the residual ingress and egress rates at the node are reduced by the requested CIR (indicating allocation for the flow).
  • the admissible bit for the current hop is set to zero (not admissible because latency cannot be guaranteed).
  • the admissible bit for the current hop is set to zero (not admitted) because CIR cannot be guaranteed.
  • the total allowable rate metadata is set to be the lowest CIR allowed for the current hop and all previous hops (if any).
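  • The per-hop processing described above can be sketched as follows, continuing the illustrative metadata model from earlier. The residual ingress/egress rates and the per-hop latency estimate are hypothetical node state passed in as arguments; a real node would maintain them per interface.

```python
def process_request_at_hop(md: ReservationRequestMetadata,
                           residual_ingress_bps: int,
                           residual_egress_bps: int,
                           max_hop_latency_ms: float) -> bool:
    """Sketch of one hop's admission check: accumulate the worst-case latency,
    test the requested CIR against residual capacity, record an admissible bit,
    and track the lowest CIR the path can still support."""
    md.total_max_latency_ms += max_hop_latency_ms
    latency_ok = md.total_max_latency_ms <= md.deadline_ms
    cir_ok = (md.requested_cir_bps <= residual_ingress_bps and
              md.requested_cir_bps <= residual_egress_bps)
    admitted = latency_ok and cir_ok
    md.admissible.append(admitted)
    if not cir_ok:
        # Record the largest CIR this hop could still allocate, keeping the
        # minimum seen across all hops (the allowed CIR / total allowable rate).
        hop_max_cir = min(residual_ingress_bps, residual_egress_bps)
        if md.allowed_cir_bps is None or hop_max_cir < md.allowed_cir_bps:
            md.allowed_cir_bps = hop_max_cir
    return admitted
```

  • In the admit-on-request variant, an admitting hop would also subtract the requested CIR from its residual ingress and egress rates at this point, releasing it later if the response packet reports that the end-to-end setup failed.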
  • the first node processes the QoS resource reservation request packet, allocates resources, and/or updates metadata at step 203
  • the first node transmits the QoS resource reservation request packet to the second node at step 204.
  • the second node performs substantially the same process as the first node at step 203. Specifically, the second node determines whether to admit the flow based on the requested parameters. The second node then allocates resources and/or updates the metadata to indicate the results. For example, the second node updates a second admissible bit to indicate whether the flow is admitted at the second node. The second node also updates total allowable rate when bandwidth at the second node does not meet the requested CIR and allocable bandwidth at the second node is less than the value currently in the total allowable rate metadata.
  • the second node then sends the QoS resource reservation request packet toward the third node at step 206.
  • the third node and fourth node process the QoS resource reservation request packet in the same manner as the first and second node. Accordingly, step 207 and step 209 are substantially similar to step 203 and 205. Further, step 208 and step 210 are substantially similar to steps 202, 204, and 206.
  • the destination receives the QoS resource reservation request packet from the fourth node as a result of step 210. The destination then processes the QoS resource reservation request packet and generates a QoS resource reservation response packet at step 211.
  • the destination can determine whether the flow was admitted by all nodes along the path (or similarly, whether all nodes can admit the flow).
  • the destination can then add a successful (S) bit to the metadata in the QoS resource reservation response packet.
  • S bit can be set to one to indicate the flow setup was a success at all nodes along the path or zero to indicate the flow setup was not successful at one or more nodes along the path.
  • the destination may assign a priority to the flow and indicate the priority in the metadata in the QoS resource reservation response packet.
  • the destination may also include a reverse path in the metadata of the QoS resource reservation response packet to ensure that the QoS resource reservation response packet traverses the same path as the QoS resource reservation request packet.
  • the destination may include feedback to the source as metadata in the QoS resource reservation response packet.
  • the destination can use the metadata from the QoS resource reservation request packet to determine parameters that are likely to result in an admitted flow and can then include such parameters in the metadata of the QoS resource reservation response packet.
  • the destination can include a recommended deadline based on the total latency from the QoS resource reservation request packet.
  • the destination can include a recommended CIR based on the allowed CIR in the QoS resource reservation request packet.
  • the QoS resource reservation response packet includes an indication of the parameters that are likely to result in a successful flow allocation in a further round of resource reservation.
  • the QoS resource reservation request packet includes a contract clause that directs the nodes to allocate resources when the flow is admitted in response to the QoS resource reservation request packet.
  • the QoS resource reservation response packet includes a contract clause that directs the nodes to release such allocations when the flow admission is not successful for the entire path.
  • the resources are not allocated by the QoS resource reservation request packet.
  • the QoS resource reservation response packet includes a contract clause that directs the nodes to allocate such resources when the flow admission is successful for the entire path.
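  • A destination-side sketch of steps such as these, again using the illustrative metadata model, is shown below. The priority value handed in is an assumption; a real destination would pick a DSCP or New IP priority as described later with respect to FIG. 4.

```python
def build_response(md: ReservationRequestMetadata,
                   reverse_path: list,
                   lgs_priority: int) -> ReservationResponseMetadata:
    """Decide overall success from the per-hop admissible bits; on failure,
    feed back the deadline and CIR the path reported it can actually support."""
    success = bool(md.admissible) and all(md.admissible)
    if success:
        return ReservationResponseMetadata(success=True,
                                           flow_priority=lgs_priority,
                                           reverse_path=reverse_path)
    return ReservationResponseMetadata(
        success=False,
        reverse_path=reverse_path,
        recommended_deadline_ms=md.total_max_latency_ms,  # from Total Max Latency
        recommended_cir_bps=md.allowed_cir_bps)           # from allowed CIR
```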
  • the QoS resource reservation response packet is forwarded along the reverse path at step 212. Specifically, the QoS resource reservation response packet is transmitted to the fourth node, the third node, the second node, the first node, and the source at step 212, step 214, step 216, step 218, and step 222, respectively. Further, the QoS resource reservation response packet is processed at each node at step 213, step 215, step 217, and step 219. Specifically, the nodes either allocate resources based on the metadata, release resources based on the metadata, or make no change to the allocation based on the metadata, depending on the example.
  • the source receives the QoS resource reservation response packet and determines whether flow setup was successful, for example based on the S bit.
  • the source can employ the metadata in the QoS resource reservation response packet to either begin sending or receiving the flow, depending on the example.
  • the source can generate a second QoS resource reservation request packet based on the metadata from the QoS resource reservation response packet. For example, the source can set the requested CIR in the second QoS resource reservation request packet based on the recommended CIR in the QoS resource reservation response packet. As another example, the source can set the requested latency deadline in the second QoS resource reservation request packet based on the recommended deadline in the QoS resource reservation response packet.
  • the source can then forward the second QoS resource reservation request packet to the first node at step 224.
  • the second QoS resource reservation request packet is then handled in a substantially similar manner to the first QoS resource reservation request packet. Accordingly, the source can continue to request flow admission using various parameters until the flow is admitted. Alternatively, the source can determine that resource allocation is not possible and either wait for the resources to become available or rely on best efforts communications for the flow. Accordingly, by employing mechanism 200, network resources along a path can be requested and/or allocated based on data plane signaling without relying on control plane signaling.
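  • A source-side sketch of this retry loop is shown below, with `send_request` standing in for whatever transport carries the request packet along the path and returns the destination's response metadata (a hypothetical callable, not an interface defined by the disclosure).

```python
def reserve_with_retry(send_request, deadline_ms: float, cir_bps: int,
                       max_attempts: int = 3):
    """Keep requesting admission, falling back to the destination's
    recommended deadline and CIR after each unsuccessful attempt."""
    request = ReservationRequestMetadata(deadline_ms=deadline_ms,
                                         requested_cir_bps=cir_bps)
    for _ in range(max_attempts):
        response = send_request(request)
        if response.success:
            return response       # resources reserved; begin sending or receiving the flow
        # Retry with the parameters the path reported it can support.
        request = ReservationRequestMetadata(
            deadline_ms=response.recommended_deadline_ms or request.deadline_ms,
            requested_cir_bps=response.recommended_cir_bps or request.requested_cir_bps)
    return None                   # give up and fall back to best-effort delivery
```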
  • FIG. 3 is a schematic diagram of an example network element 300.
  • the network element 300 is suitable for implementing the disclosed examples/embodiments as described herein.
  • the network element 300 comprises downstream ports 320, upstream ports 350, and/or transceiver units (Tx/Rx) 310, including transmitters and/or receivers for communicating data upstream and/or downstream over a network.
  • the network element 300 also includes a processor 330 including a logic unit and/or central processing unit (CPU) to process the data and a memory 332 for storing the data.
  • the network element 300 may also comprise electrical, optical-to-electrical (OE) components, electrical-to-optical (EO) components, and/or wireless communication components coupled to the upstream ports 350 and/or downstream ports 320 for communication of data via electrical, optical, or wireless communication networks.
  • the network element 300 may also include input and/or output (I/O) devices for communicating data to and from a user.
  • the I/O devices may include output devices such as a display for displaying image and/or video data.
  • the I/O devices may also include input devices, such as a keyboard, mouse, trackball, etc., and/or corresponding interfaces for interacting with such output devices.
  • the processor 330 is implemented by hardware and software.
  • the processor 330 may be implemented as one or more CPU chips, cores (e.g., as a multi-core processor), field-programmable gate arrays (FPGAs), application specific integrated circuits (ASICs), and digital signal processors (DSPs).
  • the processor 330 is in communication with the downstream ports 320, Tx/Rx 310, upstream ports 350, and memory 332.
  • the processor 330 comprises a reservation module 314.
  • the reservation module 314 implements the disclosed embodiments described herein, such as mechanism 200, method 800, method 900, and/or method 1000, which may employ a packet, such as a New IP packet 500, including resource reservation request metadata 600 and/or resource reservation response metadata 700.
  • the reservation module 314 may also implement any other method/mechanism described herein. Further, the reservation module 314 may implement functionality in a device in network 100, such as in a client 101, a source 103, a node 105, a server 107, a destination 109, and/or a network node 400. Further, the reservation module 314 may implement a device in system 1100. For example, the reservation module 314 may employ in-band signaling to request a QoS based resource reservation, process a QoS based resource reservation request, and/or reply to a QoS based resource reservation request.
  • a reservation module 314 on a source may employ an LGS request packet to communicate service requirement parameters for one or more flows as metadata
  • the reservation module 314 on a transit node may admit flows based on such metadata
  • the reservation module 314 may determine whether the resource reservation was successful and send an LGS response packet back to the source on the reverse path to confirm or deny the resource reservation and/or communicate recommendations via relevant metadata.
  • reservation module 314 causes the network element 300 to provide additional functionality, such as reserving network resources via data plane signaling without employing complex control plane protocols.
  • the reservation module 314 allows for fast, dynamic, responsive, and scalable network resource reservations for network flows.
  • the reservation module 314 improves the functionality of the network element 300 and addresses problems that are specific to network communications. Further, the reservation module 314 effects a transformation of the network element 300 to a different state.
  • the reservation module 314 can be implemented as instructions stored in the memory 332 and executed by the processor 330 (e.g., as a computer program product stored on a non-transitory medium).
  • the memory 332 comprises one or more memory types such as disks, tape drives, solid-state drives, read only memory (ROM), random access memory (RAM), flash memory, ternary content-addressable memory (TCAM), static random-access memory (SRAM), etc.
  • the memory 332 may be used as an over-flow data storage device, to store programs when such programs are selected for execution, and to store instructions and data that are read during program execution.
  • FIG. 4 is a schematic diagram of an example network node 400 configured to perform traffic classification.
  • the network node 400 may be implemented in a transit node 105 and/or a network element 300. Further, the network node 400 may be employed to implement the functionality of a node in mechanism 200.
  • the network node 400 is configured to implement a differentiated services (Diffserv) network architecture that is expanded to support LGS traffic and bandwidth guarantee service (BGS) where LGS traffic receives guaranteed latency and where BGS traffic receives a guaranteed CIR and hence a guaranteed bandwidth.
  • the network node 400 includes ingress ports 411 that receive upstream traffic at a rate of R_ingress.
  • the ingress ports 411 are an upstream interface between link(s) connected to the node 400 and the node’s 400 traffic processing hardware.
  • the network node 400 also includes a traffic classifier 413 and priority queues 415.
  • a traffic classifier 413 is a software and/or hardware component configured to sort traffic from the ingress ports 411 into traffic classes and assign such traffic into the priority queues 415 based on traffic class.
  • the traffic can be sorted into traffic classes based on rules stored at the node 400 and/or information contained in the traffic packets.
  • the priority queues 415 are implemented in memory and store the packets until the packets can be transmitted downstream.
  • the network node 400 also includes a scheduler 417.
  • the scheduler 417 is a software and/or hardware component configured to determine the order that packets in the priority queues 415 are transmitted downstream.
  • the scheduler 417 prioritizes transmission of higher priority traffic over lower priority traffic based on the priority queues 415.
  • the network node 400 also includes egress ports 419 that transmit traffic downstream at a rate of R_egress.
  • the egress ports 419 are a downstream interface between link(s) connected to the node 400 and the node’s 400 traffic processing hardware. It should also be noted that upstream indicates the direction of a source of relevant traffic and downstream indicates the direction of a destination of relevant traffic.
  • the Diffserv architecture uses a six bit differentiated services code point (DSCP) in the header of an IP packet to indicate the classification of the corresponding packet.
  • the traffic classifier 413 may assign traffic to the priority queues 415 based on the DSCP in each packet.
  • the priority queues 415 include an expedited forwarding (EF) queue, a series of assured forwarding (AF) queues, and a best efforts (BE) queue.
  • the EF queue is a low delay, low loss, and low jitter priority queue and is generally used for voice, video, or other real-time data communication services.
  • the AF queues assure delivery of packets as long as the traffic does not exceed a rate that corresponds to an AF queue number.
  • the BE queue is the lowest priority queue.
  • data in the BE queue is the most likely to be dropped in case of congestion.
  • the data in the BE queue is generally transmitted when all higher priority queues are consistently meeting the guarantees for their traffic.
  • BE traffic can be held to ensure higher priority traffic guarantees can be met.
  • the network node 400 is also expanded to provide an LGS queue to prioritize LGS traffic and a BGS queue to prioritize CIR traffic.
  • the LGS queue is lower priority than the EF queue, but higher priority than all other queues.
  • the BGS queue is lower priority than the EF queue and the LGS queue, but higher priority than the AF queues and the BE queue.
  • the following describes the functionality of the LGS queue and the BGS queue as well as network node’s 400 management of associated traffic.
  • LGS denotes a traffic class that provides accurate latency guarantee service.
  • BGS denotes a traffic class that provides a deterministic bandwidth guarantee.
  • traffic classifier 413 may take two different approaches for queueing LGS and BGS traffic.
  • the traffic classifier 413 categorizes the LGS and BGS traffic into queues for different classes.
  • the scheduler 417 may employ a hybrid scheduling approach. For example, the scheduler 417 may use a Strict Priority Queue (SPQ) for the EF queue and the LGS queue, and may use Deficit Weighted Round Robin (DWRR) scheduling for the remaining queues.
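  • A simplified sketch of such a hybrid scheduler is shown below. The queue names, the credit weights, and the one-packet-per-visit DWRR stage are assumptions made for illustration; they are not parameters specified by the disclosure.

```python
from collections import deque

# Strict-priority stage: EF is always drained first, then LGS.
strict_queues = {"EF": deque(), "LGS": deque()}
# DWRR stage for the remaining classes; weights are illustrative bytes of credit per visit.
dwrr_queues = {"BGS": deque(), "AF": deque(), "BE": deque()}
dwrr_weights = {"BGS": 4000, "AF": 2000, "BE": 1000}
deficits = {name: 0 for name in dwrr_queues}

def dequeue_next():
    """Return the next packet (modeled as a bytes object) to transmit on the egress."""
    for q in strict_queues.values():          # strict priority for EF and LGS traffic
        if q:
            return q.popleft()
    for name, q in dwrr_queues.items():       # deficit weighted round robin otherwise
        if not q:
            deficits[name] = 0                 # empty queues do not bank credit
            continue
        deficits[name] += dwrr_weights[name]
        if len(q[0]) <= deficits[name]:
            deficits[name] -= len(q[0])
            return q.popleft()
    return None
```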
  • the maximum latency for an LGS flow can be calculated from the number of LGS packets that have arrived at a node at a specified instant while transmission is ongoing across an ingress link, the maximum packet size L_max in bits, and the egress rate R_egress. The maximum latency is calculated in Equation (1) as the total maximum size of the packets of the current LGS flows divided by the egress rate R_egress. The number of arrived LGS packets is calculated as shown in Equation (2) from the aggregated ingress flow rate of the currently present LGS flows, which can be approximated as the aggregated CIR.
  • a constant burst coefficient r represents the deviation of the real flow rate from the given CIRs of the already admitted LGS flows CIR_LGS and the current LGS flow (a total of n LGS flows).
  • for example, r can be one. The aggregated rate could be set to the total allowed rate for all LGS flows on the ingress interface to compute the upper bound of latency for LGS flows; this total allowed rate may be pre-configured by the router administrator and cannot exceed the rate of the ingress interface.
  • When a network node 400 receives a QoS resource reservation request packet, such as an LGS setup packet with a proposed contract clause and metadata configured in a New IP header, the network node 400 should perform actions as described by mechanism 200.
  • the requested CIR may satisfy CIR ≤ R_ingress − Σ CIR_LGS, consistent with the ingress condition described above.
  • the network node 400 may then obtain the upper bound latency for the requested LGS and add the upper bound latency to Total_Latency.
  • the network node 400 compares Total_Latency with Deadline. When Total_Latency is smaller than Deadline and adding the requested CIR does not exceed the configured total ingress or egress rate, the Admissible bit for the current hop may be set to be one, and the requested CIR is recorded.
  • the residual ingress and egress rates are reduced by the requested CIR respectively.
  • When Total_Latency is larger than Deadline and adding the requested CIR does not exceed the configured total ingress and egress rates, the Admissible bit for the current hop is set to be zero because the latency cannot be guaranteed.
  • When adding the requested CIR exceeds the configured total ingress or egress rate, the Admissible bit for the current hop is set to be zero because the CIR cannot be guaranteed.
  • In that case, the allowedCIR is set to be the lowest CIR that is allowed for the current hop and the previous hops.
  • FIG. 5 is a schematic diagram of an example New IP packet 500 that may be employed to perform data plane signaling to reserve network resources.
  • the New IP packet 500 may be configured to contain the metadata to support signaling in mechanism 200.
  • the New IP packet 500 can be routed into priority queues 415 in network node 400.
  • the New IP packet 500 can be transmitted between a source 103 and a destination 109 via transit nodes 105 in network 100.
  • the New IP packet 500 can be received, processed, and transmitted by network element 300.
  • the New IP protocol is an IP based data plane protocol designed to support machine to machine communication and enhance user experience.
  • the New IP protocol uses a New IP packet 500 to perform these features.
  • a New IP packet 500 can be used to flexibly program network nodes via data plane signaling to provide specified handling of a current packet, a current flow, and/or a set of current flows without relying on the control plane to set up such handling ahead of time.
  • New IP can coexist with traditional IP network architecture while providing additional functionality.
  • the New IP packet 500 comprises a header 501, a shipping specification 503, a contract 505, and a payload 507.
  • the New IP packet 500 has a variable length.
  • the header 501 includes a series of offsets that act as pointers.
  • a router receiving a New IP packet 500 can review the offsets in the header 501 to determine the location of corresponding data in the New IP packet 500.
  • the header 501 may comprise a shipping pointer, a contract pointer, and a payload pointer that point to the starting bits of the shipping specification 503, the contract 505, and the payload 507, respectively.
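  • A byte-level sketch of that pointer-based layout is shown below. The 16-bit offset encoding and 6-byte header are assumptions made for this example; the actual New IP header format is not reproduced here.

```python
import struct

def build_new_ip_packet(shipping_spec: bytes, contract: bytes, payload: bytes) -> bytes:
    """Assemble a header of three offsets (shipping, contract, payload)
    followed by the three parts of the packet."""
    header_len = 6                                   # three 16-bit offsets
    shipping_off = header_len
    contract_off = shipping_off + len(shipping_spec)
    payload_off = contract_off + len(contract)
    header = struct.pack("!HHH", shipping_off, contract_off, payload_off)
    return header + shipping_spec + contract + payload

def locate_parts(packet: bytes):
    """A router reads the offsets from the header to locate each part."""
    shipping_off, contract_off, payload_off = struct.unpack("!HHH", packet[:6])
    return (packet[shipping_off:contract_off],    # shipping specification
            packet[contract_off:payload_off],     # contract (clauses and metadata)
            packet[payload_off:])                 # payload
```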
  • the shipping specification 503 indicates the source address and destination address. Specifically, the shipping specification 503 employs a flexible addressing scheme.
  • the shipping specification 503 contains an address type field indicating the addressing format used.
  • the shipping specification 503 also contains one or more address cast field(s) indicating the nature of the communication, such as one to one, many to one, anycast, multicast, broadcast, coordinating casting, etc.
  • the shipping specification 503 also contains the source address, the destination address, and data indicating the lengths of the addresses.
  • the contract 505 is used to dynamically program each node to provide requested services to packets and/or flows.
  • Traditional IP networks attempt to guarantee service levels for users and/or flows in the aggregate, but can only retroactively determine whether such guarantees were actually met.
  • the contract 505 directs the node to treat the current packet in a prescribed manner, and hence can ensure that guarantees are met in real time on a packet by packet basis.
  • the contract 505 includes one or more contract clauses 510 and optionally metadata 511.
  • a contract clause 510 can include one or more of an event, a condition, and an action.
  • An event is used to describe a network event related to a condition and/or an action.
  • Such network events may include specified packet queue levels at a node, path outages, a late packet, a packet drop, a specified next hop, a packet count, etc.
  • Conditions are arithmetic and/or logical operators used to perform checks. Such conditions may include equals, less than, greater than or equal to, or, and, exclusive or (xor), etc.
  • An action is a step that a node should take, for example in response to an event and/or condition. Actions can include a wide range of network node functionality such as reporting events, dynamic routing changes, and/or packet prioritization.
  • Metadata 511 is any data that describes other data.
  • the payload 507 is the actual data transmitted from a source to a destination across the data plane.
  • a contract clause 510 can be used to reserve network resources (action) if/when (condition) requested latency and/or requested CIR can be accommodated (event).
  • the contract clause 510 can be used to release reserved network resources (action) if/when (condition) a flow is not admissible for an entire path (event).
  • the contract clause 510 can be used to reserve network resources (action) if/when (condition) a flow is admissible for an entire path (event).
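  • The event/condition/action structure of a contract clause can be sketched as follows; the string event name and dictionary of node state are placeholders invented for this example, not the encoding used in a real New IP contract.

```python
from dataclasses import dataclass
from typing import Any, Callable, Dict

@dataclass
class ContractClause:
    """Illustrative contract clause: take an action when a condition on the
    named event holds at the node processing the packet."""
    event: str
    condition: Callable[[Dict[str, Any]], bool]
    action: Callable[[Dict[str, Any]], None]

def evaluate(clause: ContractClause, node_state: Dict[str, Any]) -> None:
    if clause.condition(node_state):
        clause.action(node_state)

# Example: release previously reserved resources when the flow was not
# admitted along the entire path (as reported by the response packet).
release_on_failure = ContractClause(
    event="flow_admission_result",
    condition=lambda state: not state.get("admitted_on_full_path", False),
    action=lambda state: state.update(reserved_cir_bps=0))
```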
  • the New IP Packet 500 can be employed to implement a QoS resource reservation request packet and/or a QoS resource reservation response packet.
  • the metadata 511 can be used to store parameters relating to the event.
  • the metadata 511 can indicate the requested latency and/or CIR, latency and/or allowed CIR from previous hops, and/or suggested latency and/or suggested CIR for future requests.
  • metadata 511 can be employed as part of a QoS resource reservation request packet and/or a QoS resource reservation response packet, as described in more detail below.
  • FIG. 6 illustrates an example of resource reservation request metadata 600 that can be employed to request that network resources be reserved via data plane signaling.
  • the metadata 600 can be employed in a metadata 511 in a New IP packet 500.
  • the metadata 600 can be used to implement mechanism 200 in a source 103, a transit node 105, a destination 109, a network element 300, and/or a network node 400.
  • the metadata 600 can be used to support a request to reserve bandwidth and/or to request a latency guarantee along a path between a source and a destination.
  • the metadata can be included in any type of QoS resource reservation request packet, such as an LGS request packet.
  • the metadata 600 comprises the following fields: a flow identification method (FlowIdMethod) 601, a hop number (Hop num) 603, a unit 605, a total maximum latency (Total Max Latency) 607, a requested deadline (Deadline) 609, a CIR 611, an allowed CIR (allowedCIR) 613, and Admissible 615.
  • the FlowIdMethod 601 can be used to denote a method of identifying the flow for example as an individual flow (e.g., 0), Transmission Control Protocol (TCP) flows (e.g., 1), User Datagram Protocol (UDP) flows (e.g., 2), all flows (e.g., 3), and flows with the same DSCP bits (e.g., 4).
  • An individual flow may be a non-IPSec flow or an IPsec flow.
  • a non-IPSec individual flow may be identified by source and destination address, source and destination port number, and protocol number.
  • An IPSec individual flow can be identified by source and destination address and a flow label. End-to-end latency can be guaranteed for a dedicated IP flow.
  • TCP flows can be identified by source and destination address and TCP protocol number. End-to-end latency can be guaranteed for all TCP flows that have the same source and destination address.
  • UDP flows can be identified by source and destination address and UDP protocol number. End-to-end latency can be guaranteed for all UDP flows that have the same source and destination address. All flows can be identified by source and destination address. End-to-end latency can be guaranteed for all IP flows that have the same source and destination address.
  • DSCP flows are flows that have been assigned DSCP bits according to the Diffserv network architecture as described relative to network node 400. A DSCP flow can be identified by the DSCP bits. End-to-end latency can be guaranteed for all IP flows that have the same DSCP bits.
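  • A sketch of how a node might derive a flow key for each FlowIdMethod value described above is shown below; the packet is modeled as a dictionary of already-parsed header fields, which is an assumption of this example.

```python
def flow_key(pkt: dict, flow_id_method: int):
    """Map a packet to the flow aggregate named by FlowIdMethod."""
    if flow_id_method == 0:   # individual flow (non-IPsec five-tuple shown)
        return (pkt["src"], pkt["dst"], pkt["sport"], pkt["dport"], pkt["proto"])
    if flow_id_method == 1:   # all TCP flows between the two addresses
        return (pkt["src"], pkt["dst"], "tcp")
    if flow_id_method == 2:   # all UDP flows between the two addresses
        return (pkt["src"], pkt["dst"], "udp")
    if flow_id_method == 3:   # all flows between the two addresses
        return (pkt["src"], pkt["dst"])
    if flow_id_method == 4:   # all flows carrying the same DSCP bits
        return (pkt["dscp"],)
    raise ValueError("unknown FlowIdMethod")
```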
  • Hop num 603 can be used to indicate the total number of hops on the path from the source/client to the destination/server.
  • the Hop_num 603 should be decremented (or incremented depending on the example) at each hop after processing.
  • the unit 605 can indicate the unit of latency, for example zero for millisecond (ms) or one for microseconds (us).
  • the Total Max Latency 607 can indicate the maximum value of the total latency accumulated along the path. Total Max Latency 607 can be increased at each intermediate node by the maximum per-hop latency that is estimated by the corresponding intermediate node.
  • the Deadline 609 can indicate the upper bound of the end-to-end latency as requested by the source/client.
  • the CIR 611 can indicate the CIR in bits per second (bps) as requested by the client.
  • the allowedCIR 613 can indicate the CIR that can be allowed by the path.
  • the allowedCIR 613 can be determined by considering the maximum CIR that can be allowed at each router and selecting the minimum value among such maximum values. This can be determined by having each node that fails to admit the flow determine that node's maximum allowable CIR. The node can then set the allowedCIR 613 to that node's maximum allowable CIR when that node's maximum allowable CIR is lower than the value currently stored in the allowedCIR 613.
  • the Admissible 615 can include a number of bits equal to the number of hops along the path between a source and destination.
  • Each bit of Admissible 615 can be set to indicate whether the flow can be admitted by a corresponding intermediate node.
  • the first hop can be indicated by the most significant bit and the last hop can be indicated by the least significant bit.
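  • A small sketch of that bit ordering (hop indices counted from zero at the first hop; function names are illustrative):

```python
def set_admissible_bit(bits: int, hop_index: int, hop_count: int, admitted: bool) -> int:
    """Most significant bit = first hop, least significant bit = last hop."""
    mask = 1 << (hop_count - 1 - hop_index)
    return (bits | mask) if admitted else (bits & ~mask)

def admitted_on_full_path(bits: int, hop_count: int) -> bool:
    # Every hop bit set means the flow can be admitted along the whole path.
    return bits == (1 << hop_count) - 1
```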
  • Variations on this set of metadata are also possible, and different fields could be added or omitted.
  • FIG. 7 illustrates an example of resource reservation response metadata 700 that can be employed to communicate a result of a resource reservation request via data plane signaling.
  • the metadata 700 can be employed in a metadata 511 in a New IP packet 500.
  • the metadata 700 can be used to implement mechanism 200 in a source 103, a transit node 105, a destination 109, a network element 300, and/or a network node 400.
  • the metadata 700 can be included in a QoS resource reservation response packet in response to a QoS resource reservation request packet containing metadata 600.
  • the metadata 700 can be used to report the results of a request to reserve bandwidth and/or a request to provide a latency guarantee along a path between a source and a destination.
  • the metadata can be included in any type of QoS resource reservation response packet, such as an LGS response packet, sent from a destination back to a source along a reverse path to indicate the results of a resource reservation request.
  • the metadata 700 comprises the following fields: a successful (S) 701, a flow priority (FlowPriority) 703, a reverse path (ReversePath) 705, a recommended deadline (RecommendedDeadline) 707, and a recommended CIR (RecommendedCIR) 709.
  • the S 701 can be used to indicate whether the LGS setup and/or the CIR setup is successful or not.
  • S 701 can be set to true when the flow was admitted to all nodes along the path at the requested parameters and false when the flow was not admitted by at least one node along the path.
  • the FlowPriority 703 can be used to notify the source/client of the priority assigned to the flow.
  • the FlowPriority 703 can indicate a DSCP of the flow when S 701 is true.
  • the FlowPriority 703 can include or otherwise point to a New IP header that indicates such a priority.
  • the LGS flows may be marked with a DSCP value that specifies an LGS priority as described with respect to priority queues 415 in network node 400.
  • the DSCP value ensures that the LGS flows have the highest priority to be scheduled at the egress (e.g., after EF).
  • the FlowPriority 703 can be omitted when S 701 is false.
  • the ReversePath 705 can be used to record the path back to the client/source, which follows the reverse path from the client/source to the server/destination. Accordingly, the ReversePath 705 can be used to ensure that the QoS resource reservation response packet is received by each of the nodes that previously handled the QoS resource reservation request packet.
  • the RecommendedDeadline 707 may be used to indicate the value of Total Max Latency 607 from the QoS resource reservation request packet/ LGS setup packet when S 701 is false. Accordingly, the RecommendedDeadline 707 indicates to the source/client a latency deadline that is likely to be acceptable for a further request. When S 701 is true, the RecommendedDeadline 707 is not needed (as the resource allocation for the flow is already successfully setup) and can be omitted.
  • the RecommendedCIR 709 is used to indicate the value of allowedCIR 613 from the QoS resource reservation request packet/LGS setup packet when S 701 is false.
  • the RecommendedCIR 709 indicates to the source/client a CIR value that is likely to be acceptable for a further request.
  • When S 701 is true, the RecommendedCIR 709 is not needed (as the resource allocation for the flow is already successfully set up) and can be omitted.
  • Variations on this set of metadata are also possible, and different fields could be added or omitted.
  • FIG. 8 is a flowchart of an example method 800 of requesting reservation of network resources at a source via data plane signaling.
  • method 800 may be employed to implement mechanism 200 in a source 103, a client 101, a server 107, a transit node 105, a network element 300, and/or a network node 400.
  • method 800 may employ one or more New IP packets 500, for example to communicate resource reservation request metadata 600 and/or resource reservation response metadata 700.
  • Method 800 may be implemented by a client/source that determines to request a reservation of network resources. Accordingly, the method 800 is described from the perspective of the client/source for clarity of discussion.
  • Method 800 begins when a source/client determines to request that network resources be allocated for communicating one or more flows along a path between the source/client and a destination/server.
  • the source transmits a QoS resource reservation request packet along the path to the destination via data plane signaling.
  • the QoS resource reservation request packet requests QoS resources be provisioned to guarantee certain minimum service be applied when a flow is communicated along the path.
  • the QoS resource reservation request packet is a CIR/bandwidth request packet that requests a CIR be applied by each node along the path to the destination.
  • the QoS resource reservation request packet can request a minimum CIR, bandwidth, and/or bit rate be made available by a path for handling a flow.
  • the QoS resource reservation request packet may be an LGS request packet that requests an LGS with a total latency along the path that is equal to or smaller than a deadline.
  • the QoS resource reservation request packet can request that a path communicate a flow while ensuring that each packet is communicated between the client/source and the destination/server with latency that is less than or equal to the deadline.
  • the QoS resource reservation request packet is referred to as an LGS request packet and includes both a CIR/bandwidth request and an LGS request.
  • the QoS resource reservation request packet may include one or more of resource reservation request metadata 600.
  • the QoS resource reservation request packet may comprise metadata including a CIR field indicating the CIR requested by the source and an allowed CIR field indicating a CIR continuously allowable along the path.
  • the CIR field can include a value set by the source and the allowed CIR field can be set to a default value (e.g., set equal to the CIR requested) and updated by transit nodes as the packet traverses the path to the destination.
  • Such updating may include reducing the value in the allowed CIR field to a maximum value of allocable CIR at a corresponding node when the corresponding node is not capable of providing the entire CIR requested in order to determine the maximum value of continuously allocable CIR along the entire path.
  • the CIR field and the allowed CIR field can be used to implement the CIR/bandwidth/bit rate resource reservation along the path.
  • the QoS resource reservation request packet comprises metadata containing a deadline field indicating a requested upper bound for an end-to-end latency between the source and the destination and a total maximum latency field indicating an accumulated latency along the path.
  • the deadline field can include a value set by the source and the total maximum latency field can be set to a default value (e.g., set equal to zero) and updated by transit nodes as the packet traverses the path to the destination.
  • Such updating may include increasing the value in the total maximum latency field at each node by an amount of latency that the corresponding node projects will be added by the node to a flow packet communicated via the node.
  • the deadline field and the total maximum latency field can be used to reserve resources to implement the latency guarantee along the path.
  • the QoS resource reservation request packet includes the CIR field, the allowed CIR field, the deadline field, and the total maximum latency field, and hence reserves resources for both CIR and LGS guarantees.
  • the QoS resource reservation request packet may also contain additional metadata.
  • the QoS resource reservation request packet may contain metadata including an admissible field indicating whether the flow is admitted at each hop along the path.
  • the admissible field may include a bit for each transit node along the path between the source and the destination. Each node can set a corresponding bit to zero/one, true/false, etc., to indicate whether the corresponding node can admit the flow based on the terms requested by other metadata.
  • the QoS resource reservation request packet may also contain other metadata, such as an indication of the type of flow(s), a hop counter, a unit of latency, etc., as described with respect to resource reservation request metadata 600.
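  • As a rough sketch of the request metadata fields described above (hypothetical Python names and units; the actual encoding is defined by resource reservation request metadata 600), the fields might be grouped as follows:

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class LgsRequestMetadata:
        # CIR requested by the source, in bits per second.
        cir_requested_bps: float
        # CIR continuously allowable along the path; the source sets it equal to
        # cir_requested_bps and transit nodes may lower it.
        allowed_cir_bps: float
        # Requested upper bound for end-to-end latency.
        deadline_ms: float
        # Latency accumulated hop by hop; starts at zero.
        total_max_latency_ms: float = 0.0
        # One admission flag per transit node along the path.
        admissible: List[bool] = field(default_factory=list)
        # Other metadata, e.g., flow type, hop counter, latency unit.
        flow_type: str = "LGS"
        hop_count: int = 0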
  • the source receives a QoS resource reservation response packet indicating results of the QoS resource reservation request packet.
  • the results of the resource reservation request may be contained in metadata in the QoS resource reservation response packet.
  • the QoS resource reservation response packet contains metadata including an S field indicating whether LGS setup is successful for the flow.
  • the S field can be set by the destination as one/zero, true/false, etc., to indicate whether all of the requested values can be supported by all of the nodes along the path, and hence to indicate whether the entire path can support the requested values.
  • the S field can indicate whether all nodes along the path admitted the flow and/or whether such nodes have allocated corresponding resources for the flow.
  • the QoS resource reservation response packet may contain metadata including a flow priority field that indicates a priority for the flow. Such a flow priority can be assigned by the destination. For example, when the Diffserv model is employed, the flow priority may be set to LGS and/or BGS. The flow priority field may only be included when the flow setup is successful.
  • the QoS resource reservation response packet contains metadata including a reverse path field indicating a reverse path from the destination to the source (e.g., to ensure the QoS resource reservation response packet traverses the same set of nodes that handled the QoS resource reservation request packet and releases corresponding resources when flow setup is not successful).
  • the QoS resource reservation response packet contains metadata including a recommended deadline field indicating a recommended deadline set based on a value of the total maximum latency in the QoS resource reservation request packet at the destination.
  • the recommended deadline field may only be included when the flow setup is not successful.
  • the QoS resource reservation response packet may contain metadata including a recommended CIR field indicating a recommended CIR set based on a value of the allowed CIR in the QoS resource reservation request packet at the destination.
  • Step 805 is an optional step that may be omitted when flow setup is successful.
  • the source may employ step 805 to make another attempt to set up the flow based on feedback from the QoS resource reservation response packet.
  • the source may transmit a second QoS resource reservation request packet with a CIR requested by the source set based on the recommended CIR in the QoS resource reservation response packet when the QoS resource reservation response packet indicates that setup is not successful.
  • the second QoS resource reservation request packet may comprise a deadline set based on the recommended deadline in the QoS resource reservation response packet when the QoS resource reservation response packet indicates that setup is not successful.
  • the source can then wait and receive a second QoS resource reservation response packet in response to the second QoS resource reservation request packet.
  • the first and/or second QoS resource reservation request packet and the first and/or second QoS resource reservation response packet may each be included in a corresponding New IP packet.
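  • A minimal sketch of the source-side retry behavior described above, written in Python with assumed send_request and wait_for_response primitives (these helpers are not part of the disclosure), might look like the following:

    def reserve_with_retry(send_request, wait_for_response, cir_bps, deadline_ms,
                           max_attempts=2):
        """Send a QoS resource reservation request and, on rejection, retry
        using the recommended CIR and deadline fed back by the destination."""
        for _ in range(max_attempts):
            request = {
                "cir_requested_bps": cir_bps,
                "allowed_cir_bps": cir_bps,   # default: equal to the requested CIR
                "deadline_ms": deadline_ms,
                "total_max_latency_ms": 0.0,  # accumulated by transit nodes
            }
            send_request(request)             # assumed data-plane send primitive
            response = wait_for_response()    # assumed blocking receive primitive
            if response.get("successful"):
                return response               # resources reserved along the path
            # Fall back to values the path reported it can actually support.
            cir_bps = response.get("recommended_cir_bps") or cir_bps
            deadline_ms = response.get("recommended_deadline_ms") or deadline_ms
        return None                           # reservation not achievable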
  • FIG. 9 is a flowchart of an example method 900 of performing resource reservation at a destination by employing data plane signaling.
  • method 900 may be implemented at a destination/server in response to the packets communicated in method 800. Accordingly, method 900 may be employed to implement mechanism 200 in a destination 109, a client 101, a server 107, a transit node 105, a network element 300, and/or a network node 400. Further, method 900 may employ one or more New IP packets 500, for example to communicate resource reservation request metadata 600 and/or resource reservation response metadata 700. Method 900 may be implemented by a destination/server that determines whether a request to reserve network resources has been successful.
  • Method 900 begins when a network element, acting as a transit node and/or destination, receives a request to reserve network resources to support a CIR and/or LGS.
  • the network element receives a QoS resource reservation request packet from a source via data plane signaling along a path.
  • the QoS resource reservation request packet requests QoS resources be provisioned to guarantee certain minimum service be applied when a flow is communicated along the path.
  • the QoS resource reservation request packet is a CIR/bandwidth request packet that requests a CIR be applied by each node along the path to the destination.
  • the QoS resource reservation request packet can request a minimum CIR, bandwidth, and/or bit rate be made available by a path for handling a flow.
  • the QoS resource reservation request packet may be an LGS request packet that requests an LGS with a total latency along the path that is equal to or smaller than a deadline.
  • the QoS resource reservation request packet can request that a path communicate a flow while ensuring that each packet is communicated between the client/source and the destination/server with latency that is less than or equal to the deadline.
  • the QoS resource reservation request packet is referred to as an LGS request packet and includes both a CIR/bandwidth request and an LGS request.
  • the QoS resource reservation request packet may include one or more of resource reservation request metadata 600.
  • the QoS resource reservation request packet may comprise metadata including a CIR field indicating the CIR requested by the source and an allowed CIR field indicating a CIR continuously allowable along the path.
  • the CIR field can include a value set by the source and the allowed CIR field can be set to a default value (e.g., set equal to the CIR requested) and updated by transit nodes as the packet traverses the path to the destination.
  • Such updating may include reducing the value in the allowed CIR field to a maximum value of allocable CIR at a corresponding node when the corresponding node is not capable of providing the entire CIR requested in order to determine the maximum value of continuously allocable CIR along the entire path.
  • the CIR field and the allowed CIR field can be used to implement the CIR/bandwidth/bit rate resource reservation along the path.
  • the QoS resource reservation request packet comprises metadata containing a deadline field indicating a requested upper bound for an end-to-end latency between the source and the destination and a total maximum latency field indicating an accumulated latency along the path.
  • the deadline field can include a value set by the source and the total maximum latency field can be set to a default value (e.g., set equal to zero) and updated by transit nodes as the packet traverses the path to the destination. Such updating may include increasing the value in the total maximum latency field at each node by an amount of latency that the corresponding node projects will be added by the node to a flow packet communicated via the node.
  • the deadline field and the total maximum latency field can be used to reserve resources to implement the latency guarantee along the path.
  • the QoS resource reservation request packet includes the CIR field, the allowed CIR field, the deadline field, and the total maximum latency field, and hence reserves resources for both CIR and LGS guarantees.
  • the QoS resource reservation request packet may also contain additional metadata.
  • the QoS resource reservation request packet may contain metadata including an admissible field indicating whether the flow is admitted at each hop along the path.
  • the admissible field may include a bit for each transit node along the path between the source and the destination. Each node can set a corresponding bit to zero/one, true/false, etc., to indicate whether the corresponding node can admit the flow based on the terms requested by other metadata.
  • the QoS resource reservation request packet may also contain other metadata, such as an indication of the type of flow(s), a hop counter, a unit of latency, etc., as described with respect to resource reservation request metadata 600.
  • the network element can process the QoS resource reservation request packet. For example, when the network element is a destination, the network element determines whether the flow is admissible along the entire path based on the admissible field and generates a QoS resource reservation response packet based on the QoS resource reservation request packet. In another example, when the network element is a transit node, the network element updates relevant metadata and receives a responsive QoS resource reservation response packet from the destination. In either case, the network element transmits a QoS resource reservation response packet toward the source indicating results of the QoS resource reservation request packet.
  • the QoS resource reservation response packet contains metadata including an S field indicating whether LGS setup is successful for the flow.
  • the S field can be set by the destination as one/zero, true/false, etc., to indicate whether all of the requested values can be supported by all of the nodes along the path, and hence to indicate whether the entire path can support the requested values. Further, the S field can indicate whether all nodes along the path admitted the flow and/or whether such nodes have allocated corresponding resources for the flow.
  • the QoS resource reservation response packet may contain metadata including a flow priority field that indicates a priority for the flow. Such a flow priority can be assigned by the destination. For example, when the Diffserv model is employed, the flow priority may be set to LGS and/or BGS. The flow priority field may only be included when the flow setup is successful.
  • the QoS resource reservation response packet contains metadata including a reverse path field indicating a reverse path from the destination to the source (e.g., to ensure the QoS resource reservation response packet traverses the same set of nodes that handled the QoS resource reservation request packet and releases corresponding resources when flow setup is not successful).
  • the QoS resource reservation response packet contains metadata including a recommended deadline field indicating a recommended deadline set based on a value of the total maximum latency in the QoS resource reservation request packet at the destination. The recommended deadline field may only be included when the flow setup is not successful.
  • the QoS resource reservation response packet may contain metadata including a recommended CIR field indicating a recommended CIR set based on a value of the allowed CIR in the QoS resource reservation request packet at the destination.
  • Step 905 is an optional step that may be omitted when flow setup is successful.
  • the source may make another attempt to set up the flow based on parameters/feedback from the QoS resource reservation response packet.
  • the network element may receive a second QoS resource reservation request packet with a CIR requested by the source set based on the recommended CIR in the QoS resource reservation response packet when the QoS resource reservation response packet indicates that setup is not successful.
  • the second QoS resource reservation request packet may comprise a deadline set based on the recommended deadline in the QoS resource reservation response packet when the QoS resource reservation response packet indicates that setup is not successful.
  • the network element may generate and send a second QoS resource reservation response packet based on the second QoS resource reservation request packet.
  • the network element may receive and forward a second QoS resource reservation response packet from the destination based on the second QoS resource reservation request packet.
  • the first and/or second QoS resource reservation request packet and the first and/or second QoS resource reservation response packet may each be included in a corresponding New IP packet.
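  • One way the destination-side logic described above could derive a response from a received request is sketched below in Python; the dictionary keys are hypothetical stand-ins for the metadata fields, and assign_priority is an assumed helper:

    def build_response(request, reverse_path, assign_priority):
        """Derive QoS resource reservation response metadata from the request
        metadata received at the destination (sketch only)."""
        # Assumes at least one hop recorded an admission flag.
        admitted_everywhere = all(request.get("admissible", [False]))
        response = {
            "successful": admitted_everywhere,
            "reverse_path": reverse_path,  # same nodes, traversed in reverse
        }
        if admitted_everywhere:
            # e.g., a DSCP value marking the flow with LGS priority.
            response["flow_priority"] = assign_priority(request)
        else:
            # Feed back values likely to be admitted on a second attempt.
            response["recommended_deadline_ms"] = request["total_max_latency_ms"]
            response["recommended_cir_bps"] = request["allowed_cir_bps"]
        return response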
  • FIG. 10 is a flowchart of an example method 1000 of performing resource reservation at an intermediate network node and/or a destination by employing data plane signaling.
  • Method 1000 is an example implementation of the packet handling functions of method 900.
  • method 1000 may be implemented at a destination/server in response to the packets communicated in method 800. Accordingly, method 1000 may be employed to implement mechanism 200 in a destination 109, a client 101, a server 107, a transit node 105, a network element 300, and/or a network node 400. Further, method 1000 may employ one or more New IP packets 500, for example to communicate resource reservation request metadata 600 and/or resource reservation response metadata 700.
  • Method 1000 may be implemented by a destination/server that determines whether a request to reserve network resources has been successful. Accordingly, the method 1000 is described from the perspective of the destination/server for clarity of discussion. However, corresponding packets are communicated across a network, and hence a transit node may also implement method 1000 in some examples.
  • Method 1000 begins when a network element, acting as a transit node and/or destination, receives a request to reserve network resources to support a CIR and/or LGS.
  • the network element receives an LGS resource reservation request packet via data plane signaling between a source and a destination.
  • the LGS resource reservation request packet requests provisioning of network element resources to guarantee an end-to-end latency for a flow.
  • the network element then processes the LGS resource reservation request packet.
  • the network element determines a total latency for the LGS resource reservation request packet by adding a maximum latency generated by the network element plus an accumulated total maximum latency metadata from the LGS resource reservation request packet.
  • the network element also determines deadline metadata from the LGS resource reservation request packet. The data from steps 1003 and 1005 provide information to support a determination of whether the flow can be admitted while providing LGS.
  • the network element also determines a requested CIR metadata from the LGS resource reservation request packet.
  • the network element also determines a total ingress rate and a total egress rate at the network element to support a determination of whether the flow can be admitted while providing CIR/bandwidth guarantees.
  • the network element determines whether to admit the flow from a CIR perspective based on a comparison of the requested CIR metadata, the total ingress rate, and the total egress rate.
  • the network element also determines whether to admit the flow from an LGS perspective based on a comparison of the total latency and the deadline metadata. For example, the network element can admit the flow when the total latency for the LGS resource reservation request is less than or equal to the deadline metadata, when the requested CIR metadata does not exceed the total allocable ingress rate, and when the requested CIR metadata does not exceed the total allocable egress rate.
  • the network element can also allocate resources for the flow when the flow is admitted.
  • the network element can also set/update the metadata in the LGS resource reservation request packet based on the admission decision. For example, the network element can set an admissible metadata in the LGS resource reservation request packet to an admitted value when the flow is admitted. Further, the network element can set an admissible metadata in the LGS resource reservation request packet to a denied value when the total latency for the LGS resource reservation request is greater than the deadline metadata. In addition, the network element can set an allowed CIR metadata in the LGS resource reservation request to a lowest allowed CIR value along a path between the source and the network element when the requested CIR metadata exceeds the total ingress rate or the total egress rate.
  • the network element can set the allowed CIR metadata to the allowed CIR value at the network element when such a value is lower than the value already contained in the allowed CIR metadata.
  • the network element can also update the accumulated total maximum latency metadata from the LGS resource reservation request packet with the total latency as determined in step 1003.
  • the network element can forward the LGS resource reservation request packet toward the destination at step 1013.
  • the LGS resource reservation request packet can be transmitted toward the destination.
  • the LGS resource reservation request packet can be forwarded toward the relevant components for flow setup.
  • the network element can receive and/or generate an LGS response packet indicating results of the LGS resource reservation request packet, depending on whether the network element is a transit node or a destination.
  • the network element can release any resources allocated for the flow and transmit the LGS response packet toward the source.
  • the network element can generate the LGS response packet by setting an S field based on the admissible field in the LGS resource reservation request packet.
  • the network element can also set the RecommendedDeadline and RecommendedCIR based on the Total Max Latency and allowedCIR, respectively, from the LGS resource reservation request packet when the flow is not admissible.
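  • The per-hop handling of steps 1003 through 1013 could be sketched as follows; this is illustrative Python with hypothetical names, and the admission test itself is abstracted behind an is_admissible callback rather than reproducing the disclosed decision logic:

    def process_lgs_request(request, local_max_latency_ms, local_allowed_cir_bps,
                            is_admissible, forward):
        """Update LGS resource reservation request metadata at one network
        element and forward it toward the destination (sketch only)."""
        # Step 1003: accumulate the worst-case latency added by this element.
        total_latency_ms = request["total_max_latency_ms"] + local_max_latency_ms
        # Steps 1005 to 1011: decide admission against the deadline and CIR limits.
        admitted = is_admissible(request, total_latency_ms)
        request.setdefault("admissible", []).append(admitted)
        # Simplification: always track the lowest CIR continuously allowable so far.
        request["allowed_cir_bps"] = min(request["allowed_cir_bps"],
                                         local_allowed_cir_bps)
        request["total_max_latency_ms"] = total_latency_ms
        # Step 1013: send the updated request toward the destination.
        forward(request)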
  • FIG. 11 is a schematic diagram of an example system 1100 for performing resource reservation by employing data plane signaling, for example according to mechanism 200, method 800, method 900, and/or method 1000.
  • the system 1100 may be implemented on a client 101, a source 103, a node 105, a server 107, a destination 109, a network element 300, and/or a network node 400.
  • the system 1100 can generate, transmit, process, and/or receive a packet, such as a New IP packet 500, including resource reservation request metadata 600 and/or resource reservation response metadata 700.
  • the system 1100 may be implemented as a source requesting a resource reservation.
  • the source includes a storing module 1105 for storing a QoS resource reservation request packet and/or a QoS resource reservation response packet.
  • the source also includes a transmitting module 1107 for transmitting a QoS resource reservation request packet along a path to a destination via data plane signaling, wherein the QoS resource reservation request packet requests QoS resources be provisioned for a flow.
  • the source also includes a receiving module 1101 for receiving a QoS resource reservation response packet indicating results of the QoS resource reservation request packet.
  • the system 1100 may be further configured to perform any of the steps of method 800.
  • the system 1100 may be implemented as a node receiving a resource reservation request, such as a transit node and/or a destination.
  • the node includes a storing module 1105 for storing a QoS resource reservation request packet and/or a QoS resource reservation response packet.
  • the node also includes a receiving module 1101 for receiving a QoS resource reservation request packet from a source via data plane signaling along a path, wherein the QoS resource reservation request packet requests QoS resources be provisioned for a flow.
  • the node also includes a transmitting module 1107 for transmitting a QoS resource reservation response packet indicating results of the QoS resource reservation request packet toward the source.
  • the system 1100 may be further configured to perform any of the steps of method 900.
  • the system 1100 may be implemented as a node receiving a resource reservation request, such as a transit node and/or a destination.
  • the node includes a storing module 1105 for storing a QoS resource reservation request packet and/or a QoS resource reservation response packet.
  • the node also includes a receiving module 1101 for receiving an LGS resource reservation request packet via data plane signaling between a source and a destination, wherein the LGS resource reservation request packet requests provisioning of network element resources to guarantee an end-to-end latency for a flow.
  • the node also includes a determining module 1103 for determining a total latency for the LGS resource reservation request packet by adding a maximum latency generated by the network element plus an accumulated total maximum latency metadata from the LGS resource reservation request packet, determining a deadline metadata from the LGS resource reservation request packet, and determining whether to admit the flow based on a comparison of the total latency and the deadline metadata.
  • the node also includes a transmitting module 1107 for transmitting the LGS resource reservation request packet toward the destination.
  • the system 1100 may be further configured to perform any of the steps of method 1000.
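  • For orientation only, the receiving, determining, storing, and transmitting modules of system 1100 could be grouped in software roughly as in the Python sketch below; the class and method names are hypothetical:

    class ResourceReservationSystem:
        """Illustrative grouping of the system 1100 modules (sketch only)."""

        def __init__(self, receiving, determining, storing, transmitting):
            self.receiving_module = receiving        # cf. receiving module 1101
            self.determining_module = determining    # cf. determining module 1103
            self.storing_module = storing            # cf. storing module 1105
            self.transmitting_module = transmitting  # cf. transmitting module 1107

        def handle_request(self, packet):
            # Receive, store, evaluate, and forward a resource reservation packet.
            request = self.receiving_module.receive(packet)
            self.storing_module.store(request)
            decision = self.determining_module.admit(request)
            self.transmitting_module.transmit(request, decision)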
  • a first component is directly coupled to a second component when there are no intervening components, except for a line, a trace, or another medium between the first component and the second component.
  • the first component is indirectly coupled to the second component when there are intervening components other than a line, a trace, or another medium between the first component and the second component.
  • the term “coupled” and its variants include both directly coupled and indirectly coupled. The use of the term “about” means a range including ±10% of the subsequent number unless otherwise stated.

Abstract

A resource reservation mechanism is disclosed. The mechanism includes transmitting, by a source, a quality of service (QoS) resource reservation request packet along a path to a destination via data plane signaling. The QoS resource reservation request packet requests QoS resources be provisioned for a flow. The source then receives a QoS resource reservation response packet indicating results of the QoS resource reservation request packet.

Description

In-Band Signaling For Latency Guarantee Service (LGS)
CROSS-REFERENCE TO RELATED APPLICATIONS
[0001] This patent application claims the benefit of U.S. Provisional Patent Application No. 63/106,632 filed October 28, 2020 by Lijun Dong, et al., and titled “Method and Apparatus of New IP Enabled In-Band Signaling for Precise Latency Guarantee Service” and U.S. Provisional Patent Application No. 63/165,629, filed March 24, 2021 by Lijun Dong, et al., and titled “In- Band Signaling For Latency Guarantee Service (LGS),” which are hereby incorporated by reference.
TECHNICAL FIELD
[0002] The present disclosure is generally related to network communications, and is specifically related to mechanisms for reserving network resources to provide a quality of service (QoS) and/or an LGS using data plane signaling.
BACKGROUND
[0003] Various mechanisms may be employed to reserve network resources to support fulfillment of QoS requirements. However, such mechanisms employ a control plane to allocate resources. Such mechanisms may be complicated, and hence may not scale. Such mechanisms may also be difficult to implement in a multi-network domain environment, for example due to confidentiality and/or security concerns. For example, a control plane in a network domain may not share internal network resource configurations with a control plane in another network domain for increased security. Further, control plane based resource reservation mechanisms may require a setup and teardown process that prevents usage for dynamic flows. For example, each control plane in each network domain may have a separate setup and teardown process that does not share resource information across network domain boundaries. Performing such reservation processes consumes both time and network resources, and hence may not be used with unexpected communications. In addition, the success of such mechanisms may not be monitored and/or managed in real time. Instead, such mechanisms may only be monitored after communications have occurred, for example by performing statistical analysis of past communication activity to determine whether service level agreements (SLAs) have been met.
SUMMARY
[0004] [To be completed upon inventor approval.]
[0005] For the purpose of clarity, any one of the foregoing embodiments may be combined with any one or more of the other foregoing embodiments to create a new embodiment within the scope of the present disclosure.
[0006] These and other features will be more clearly understood from the following detailed description taken in conjunction with the accompanying drawings and claims.
BRIEF DESCRIPTION OF THE DRAWINGS
[0007] For a more complete understanding of this disclosure, reference is now made to the following brief description, taken in connection with the accompanying drawings and detailed description, wherein like reference numerals represent like parts.
[0008] FIG. 1 is a schematic diagram of an example network configured to reserve network resources for QoS, Committed Information Rate (CIR), and/or LGS in response to data plane signaling.
[0009] FIG. 2 is a protocol diagram of an example mechanism of employing data plane signaling to reserve network resources for QoS, CIR, and/or LGS.
[0010] FIG. 3 is a schematic diagram of an example network element.
[0011] FIG. 4 is a schematic diagram of an example network node configured to perform traffic classification.
[0012] FIG. 5 is a schematic diagram of an example New Internet Protocol (IP) packet that may be employed to perform data plane signaling to reserve network resources.
[0013] FIG. 6 illustrates an example of resource reservation request metadata that can be employed to request network resources be reserved via data plane signaling.
[0014] FIG. 7 illustrates an example of resource reservation response metadata that can be employed to communicate a result of a resource reservation request via data plane signaling.
[0015] FIG. 8 is a flowchart of an example method of requesting reservation of network resources at a source via data plane signaling.
[0016] FIG. 9 is a flowchart of an example method of performing resource reservation at a destination by employing data plane signaling.
[0017] FIG. 10 is a flowchart of an example method of performing resource reservation at an intermediate network node and/or a destination by employing data plane signaling.
[0018] FIG. 11 is a schematic diagram of an example system for performing resource reservation by employing data plane signaling.
DETAILED DESCRIPTION
[0019] It should be understood at the outset that although an illustrative implementation of one or more embodiments are provided below, the disclosed systems and/or methods may be implemented using any number of techniques, whether currently known or in existence. The disclosure should in no way be limited to the illustrative implementations, drawings, and techniques illustrated below, including the exemplary designs and implementations illustrated and described herein, but may be modified within the scope of the appended claims along with their full scope of equivalents.
[0020] Disclosed herein is an in-band signaling mechanism to reserve network resources to support QoS requirements. As used herein, in-band indicates the resource reservation occurs across the data plane, for example via a communication between a source and a destination, such as a client and a server, via a network of nodes/hops/network elements. The source creates a QoS resource reservation request packet, such as an LGS request packet. The source forwards the QoS resource reservation request packet toward the destination. The QoS resource reservation request packet contains metadata describing the parameters of the service requirements requested for one or more corresponding flows. The nodes along the path check the metadata, determine whether the flow can be admitted on the requested terms, allocate resources, update metadata, and/or forward the QoS resource reservation request packet toward the destination. The destination receives the QoS resource reservation request packet, determines whether the resource reservation was successful, and sends a QoS resource reservation response packet, such as an LGS response packet back to the source on the reverse path. The nodes can release any allocated resources when resource reservation is not successful. Alternatively, the nodes can wait to formally allocate resources until the reservation response packet indicates the resource reservation/flow admission is successful for the entire path.
[0021] When resource reservation is not successful, the source can then try again using data from the resource reservation response packet. For example, the source can request resources that are known to be available based on the resource reservation response packet. As an example, the request packet can contain metadata indicating a latency deadline and a total maximum latency accumulated along the path. As another example, the request packet can contain metadata indicating a requested Committed Information Rate (CIR) and an allowed CIR indicating the minimum allowed CIR among hops along the path. The request packet can also contain metadata indicating whether each hop along the path has admitted the flow. The destination may use such data to set a flow priority, determine whether the setup is successful, and/or determine a reverse path. The flow priority, setup success, and/or reverse path can then be added to the QoS resource reservation response packet. Further, when setup is not successful, the destination can include a recommended deadline into the response packet based on the total maximum latency in the request packet. In another example, when setup is not successful, the destination can include a recommended CIR into the response packet based on the allowed CIR from the request packet. Accordingly, the described mechanisms allow for near real time resource reservation via the data plane in a way that is scalable and may cross network domain boundaries. Further, the described mechanisms may reduce processor, memory, and/or network resource usage at the source, destination, and/or intermediate nodes by avoiding complex setup and tear-down protocols employed by a control plane. Hence, the described examples solve various problems specific to network communications and increase the functionality and/or operability of network communication devices.
[0022] FIG. 1 is a schematic diagram of an example network 100 configured to reserve network resources for QoS, CIR, and/or LGS in response to data plane signaling. The network 100 includes clients 101, servers 107, and transit nodes 105. A client 101 is any device that requests communication of data from a server 107. A server 107 is any device that serves data to a client 101. For example, a server 107 may maintain data and/or may be configured to process data provided by a client 101. A client 101 may contact a server 107 to request data and/or to provide data to be processed. The server 107 processes the client’s 101 request and provides the requested data to the requesting client 101. The server 107 may also contact other clients and/or servers 107 at the requesting client’s 101 request. The data is passed between the server(s) 107 and the client(s) 101 in flows 102. As an example, a client 101 may be a computer, television, tablet, smart phone, or other internet connected media device and the server 107 may be a computer system, cloud based virtual machine, or other computing device configured to serve flows 102 of video data for viewing by the client 101. In another example, the client 101 is a computing device capable of connecting to an audio and/or video teleconference and the server 107 is a computing device configured to manage the teleconference and connect the client 101 to other clients 101. The foregoing are provided as examples for clarity of discussion. However, it should be noted that clients 101 and servers 107 provide a broad range of functionality. Accordingly, the foregoing examples should not be considered limiting. For purposes of the present application, the focus is that clients 101 and servers 107 communicate flows 102 of data across the network.
[0023] The network 100 further comprises various transit nodes 105. The transit nodes 105 are configured to connect the clients 101 and the servers 107. A transit node 105 is any communication device configured to receive data on a first set of interface(s) and communicate such data over a second set of interface(s). For example, a transit node 105 may be implemented as a repeater, a switch, a router, or other network communication device. The transit nodes 105 can be connected in any configuration. Further, the transit nodes 105 may form a network domain and/or may be spread across multiple network domains. The transit nodes 105 communicate requests and/or replies between the clients 101 and servers 107 to set up communication sessions. The transit nodes 105 also forward data flows 102 between the clients 101 and servers 107. A flow 102 is a related sequence of packets between two or more nodes in a network 100.
[0024] For clarity of discussion, the network 100 includes a source 103 and a destination 109. The source 103 is any device that initiates a communication with the destination 109 to setup a flow 102 between the source 103 and the destination 109. The source 103 may be a client 101 and the destination 109 may be a server 107, or vice versa. Further, the source 103 may be the flow source or the flow destination. Likewise, the destination 109 may be the flow destination or the flow source. Source 103 simply denotes the device that determines to reserve resources across the network. Destination 109 denotes the device that acts as an end point for such a resource reservation.
[0025] In some networks, a source communicates with a control plane to request that resources be reserved between two end points. The control plane then determines a path between the end points and allocates such resources along the path. In the event that multiple domains are involved, the control plane in a first domain allocates resources across the first domain, signals the allocation to the affected nodes, and contacts a second domain. The control plane in the second domain then allocates resources across the second domain, signals the allocation to the affected nodes, and contacts a third domain, etc. Such an approach is time consuming and only works well for flows that are very predictable and/or that are setup in advance. In addition, such a system utilizes a large amount of signaling resources just to setup and tear down resource allocations for each flow. Further, such an approach suffers degradations as a network is scaled to include many active users. For example, when many sources constantly start and stop flows, the control plane can become overwhelmed by a signaling storm.
[0026] As noted above, most networks use the control plane to allocate resources and the data plane to forward network traffic. As used herein, the control plane is the portion of the network 100 that is configured to maintain network topology and determine path routing. Accordingly, the control plane generally configures the data plane. As used herein, the data plane is the portion of the network 100 that is configured to forward packets.
[0027] The disclosed system avoids such complexity by setting up quality of service (QoS) resource allocations across the network via data plane signaling (and not by using control plane signaling). For example, the source 103 can transmit a QoS resource reservation request packet toward the destination 109 via the transit nodes 105. The QoS resource reservation request packet can indicate to the destination 109 and each transit node 105 on the path between the source 103 and the destination 109 that the source 103 is requesting network resources be allocated for use by one or more flows 102. Further, the QoS resource reservation request packet can contain metadata indicating relevant parameters related to the resource reservation, such as the type and amount of resources to be allocated. Each transit node 105 can determine whether enough resources are available to meet the request and hence whether the flow 102 can be admitted at the transit node 105. The transit node 105 then admits or denies the flow and indicates the decision in metadata in the QoS resource reservation request packet. In some examples, the transit node 105 allocates relevant resources when the flow 102 is admitted. The transit node 105 then forwards the QoS resource reservation request packet toward the destination 109, for example via additional transit nodes 105.
[0028] The destination 109 can receive the QoS resource reservation request packet and determine whether the flow 102 has been admitted to each transit node 105 along the path. The destination 109 can then generate and send a QoS resource reservation response packet back to the source 103 along a reverse path. The QoS resource reservation response packet contains metadata indicating whether flow 102 admission was successful or contains an indication of why admission was not successful along with suggested parameters for a subsequent QoS resource reservation request packet. In the example where transit nodes 105 allocate resources upon local admission at the node, the transit nodes 105 can release such allocated resources when the QoS resource reservation response packet indicates that flow 102 admission was not completely successful for the entire path. In other examples, the transit nodes 105 can wait and allocate resources only upon receiving a QoS resource reservation response packet indicating that flow 102 admission is successful for the entire path.
[0029] The source 103 can then begin transmitting or receiving the flow 102 upon receiving a QoS resource reservation response packet indicating a successful admission of the flow 102 along the path. Further, the source 103 can use parameters from the QoS resource reservation response packet to set parameters in another QoS resource reservation request packet when admission of the flow 102 was not successful. The source 103 can continue sending QoS resource reservation request packets until admission is successful or until the source determines that resource reservation is not possible, for example due to timeouts and/or poor results indicated by parameter values in a QoS resource reservation response packet.
[0030] The present mechanism can employ metadata to perform several different types of resource reservation requests. For example, the request packet can request latency guaranteed service (LGS) where the resource allocation guarantees a specified latency for all packets in the flow 102 between the source 103 and the destination 109. As another example, the request packet can request a committed information rate (CIR) where each transit node 105 allocates sufficient resources to the flow 102 to guarantee that flow 102 packets are processed and forwarded at a minimum data rate (e.g., bandwidth allocation). In addition, the present mechanism can be employed to set a priority level for handling the flow(s) 102. Other resource allocation schemes can also be accomplished with this mechanism.
[0031] As noted above, the present mechanism can employ only data plane signaling. As an example, a jumbo internet protocol (IP) packet, such as a packet containing IP version six (IPv6) extension headers and/or a packet containing IP version four (IPv4) options, can be employed to contain the relevant metadata. In another example, a New IP packet can be used to contain the relevant metadata. New IP is an emerging framework that may be leveraged to transport metadata along with the packet through the data plane via a New IP Metadata field. New IP packets have no maximum size. Further, New IP packets employ contract clause(s). A contract clause is a conditional directive that indicates to transit nodes 105 that an action, which may be defined in the packet, should be taken when a condition defined in the contract clause occurs. This allows for control functionality to be implemented in data plane signaling. New IP is discussed in greater detail below.
[0032] The present mechanism improves functionality of the network 100, the source 103, the destination 109, and related transit nodes 105. The present mechanism provides a simplified control mechanism for supporting flow level QoS based on a data flow loop instead of complicated control protocols such as resource reservation protocol (RSVP). The present mechanism can also coexist with other protocols, such as congestion control and operations administration and management (OAM) protocols. In addition, advance network features, such as path protection, non-shortest path, etc., can be enhanced independently while employing the present mechanism. Further, the present mechanism is agnostic of upper layer protocols and is not impacted by user data encryption and/or IP security (IPsec). The present mechanism is also backwards compatible and can coexist with other services. Also, the present mechanism can simplify and increase the efficiency of admission control and dynamic resource management. Further, the present mechanism increases the scalability and performance of the network 100. For example, signaling storm does not occur since the present mechanism does not rely on a control protocol. The signaling follows the data stream, and hence is end to end signaling and can cross domains. Signaling can occur once and/or periodically. Data flow functions can also be managed as part of path refreshment. Also, resources can be set to be automatically released upon relevant expiration periods and/or conditions. As such, the present mechanisms allow for near real time resource reservation via the data plane in a way that is scalable and may cross network domain boundaries. Further, the described mechanisms may reduce processor, memory, and/or network resource usage at the source 103, destination 109, and/or intermediate transit nodes 105 by avoiding complex setup and tear-down protocols employed by a control plane. Hence, the described examples solve various problems specific to network communications and increase the functionality and/or operability of network communication devices.
[0033] FIG. 2 is a protocol diagram of an example mechanism 200 of employing data plane signaling to reserve network resources for QoS, CIR, and/or LGS, for example between a source 103 and a destination 109 via transit nodes 105 in network 100. As used herein, QoS is a description or measurement of a guarantee of performance of a service by a network. Hence, QoS applies to a broad range of network resource allocation mechanisms. LGS is a guarantee of a maximum latency between a source and a destination for all packets in a flow. CIR is a guarantee of a minimum rate of data transfer (e.g., bandwidth) by nodes along a path. Mechanism 200 can cause the allocation of resources to support many QoS related technologies, such as CIR and LGS.
[0034] At step 201, the source requests a resource allocation along a path between the source and the destination. The source generates a QoS resource reservation request packet. The QoS resource reservation request packet may be implemented as a New IP packet or a jumbo IP packet. A QoS resource reservation request packet is any packet that requests resource reservation to support QoS. For example, the QoS resource reservation request packet may be a CIR request packet and/or an LGS request packet. For example, the QoS resource reservation request packet can contain metadata indicating a deadline containing a requested maximum latency and a field for total maximum latency accumulated along the path to support LGS. Further, the QoS resource reservation request packet may contain a requested CIR as well as an allowed CIR field indicating the minimum CIR that all intermediate nodes can allocate to the flow. The total maximum latency and allowed CIR may be set to default values at the source and updated by the transit nodes.
[0035] At step 202, the QoS resource reservation request packet is forwarded to the first node along the path. At step 203, the first node determines whether to admit the flow(s) indicated in the QoS resource reservation request packet. For example, the first node sets the total maximum latency to the latency added by the hop between the source and the first node and compares the total maximum latency to the deadline. When the total maximum latency is smaller than or equal to the deadline, the first node may update an admissible bit in the metadata in the QoS resource reservation request packet to indicate that the flow is admitted at the first node. When the total maximum latency is larger than the deadline, the first node may update the admissible bit to indicate that the flow is not admitted at the first node. Further, at this point the first node optionally can also remove the deadline from the metadata of the packet, because that deadline can no longer be met.
[0036] In another example, the first node compares the amount of bandwidth available at the first node (e.g., ingress and egress rates) to the requested CIR. When the available bandwidth is greater than or equal to the requested CIR, the first node may update the admissible bit to indicate that the flow is admitted at the first node. When the available bandwidth is less than the requested CIR, the first node may update the admissible bit to indicate that the flow is not admitted at the first node. When the flow is not admitted, the first node may also compare the allowed CIR in the metadata to the amount of available bandwidth at the first node. When the available bandwidth at the first node is less than the allowed CIR in the metadata, the first node may set the allowed CIR in the metadata to the available bandwidth at the first node. This allowed CIR in the metadata can optionally replace the requested CIR in the packet, because that requested CIR can no longer be provided. The admissible bit indicating that the flow is not admitted can also indicate that the metadata indicates the hypothetically allowed CIR, instead of the originally requested CIR.
[0037] Alternatively, the first node can optionally only update the total maximum latency and/or the CIR according to conditions at the first node, without use of an admissible bit. In such a situation, the final destination can determine admissibility based on the final latency/CIR compared to the requested latency/CIR, when all values are carried in the metadata.
[0038] In another example, the mechanism 200 employs both CIR and LGS. An example algorithm for employing both CIR and LGS is as follows. An LGS flow is admissible when the following three conditions are satisfied. The first condition is that, for the router ingress, the requested CIR in the metadata should satisfy CIR ≤ R_ingress - ΣCIR_LGS, where CIR is the requested CIR, R_ingress is the packet ingress rate, and ΣCIR_LGS is the aggregated CIR of other previously admitted LGS flows. The second condition is that, for the router egress, the requested CIR in the metadata should satisfy CIR ≤ R_LGS_total - ΣCIR_LGS, where CIR is the requested CIR, R_LGS_total is the total allowable rate for all LGS flows (as dictated by hardware constraints), and ΣCIR_LGS is the aggregated CIR of other previously admitted LGS flows. The third condition is that adding the maximum latency generated by the current node to the total latency cannot result in a value that exceeds the requested latency deadline.
[0039] Based on the foregoing, the node compares the total latency to the requested latency deadline. When the total latency is smaller than the deadline and adding the requested CIR does not exceed the configured total ingress or egress rates, then the admissible bit for the current hop is set to one (admitted) and the requested CIR is recorded at the node. The residual ingress and egress rates at the node are reduced by the requested CIR (indicating allocation for the flow). When the total latency is larger than the requested latency deadline and adding the requested CIR does not exceed the configured total ingress and egress rates, the admissible bit for the current hop is set to zero (not admissible because latency cannot be guaranteed). In all other cases, when the requested CIR exceeds the total ingress or egress rates, the admissible bit for the current hop is set to zero (not admitted) because CIR cannot be guaranteed. In this case the total allowable rate metadata is set to be the lowest CIR allowed for the current hop and all previous hops (if any).
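For illustration only, the admission test summarized in the two preceding paragraphs can be sketched in Python as follows. The names are hypothetical; residual_ingress stands for the ingress rate minus the aggregated CIR of previously admitted LGS flows, and residual_egress stands for the total allowable LGS rate minus that same aggregate.

    def admit_lgs_flow(cir, max_node_latency, total_latency, deadline,
                       residual_ingress, residual_egress):
        """Return (admitted, new_residual_ingress, new_residual_egress) by
        checking the three admissibility conditions (sketch only)."""
        fits_ingress = cir <= residual_ingress
        fits_egress = cir <= residual_egress
        meets_deadline = (total_latency + max_node_latency) <= deadline
        if fits_ingress and fits_egress and meets_deadline:
            # Admit the flow and reserve the CIR by reducing the residual rates.
            return True, residual_ingress - cir, residual_egress - cir
        # Deny the flow; the residual rates are unchanged.
        return False, residual_ingress, residual_egress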
[0040] Once the first node processes the QoS resource reservation request packet, allocates resources, and/or updates metadata at step 203, the first node transmits the QoS resource reservation request packet to the second node at step 204. At step 205, the second node performs substantially the same process as the first node at step 203. Specifically, the second node determines whether to admit the flow based on the requested parameters. The second node then allocates resources and/or updates the metadata to indicate the results. For example, the second node updates a second admissible bit to indicate whether the flow is admitted at the second node. The second node also updates total allowable rate when bandwidth at the second node does not meet the requested CIR and allocable bandwidth at the second node is less than the value currently in the total allowable rate metadata.
[0041] The second node then sends the QoS resource reservation request packet toward the third node at step 206. The third node and fourth node process the QoS resource reservation request packet in the same manner as the first and second node. Accordingly, step 207 and step 209 are substantially similar to steps 203 and 205. Further, step 208 and step 210 are substantially similar to steps 202, 204, and 206.
[0042] The destination receives the QoS resource reservation request packet from the fourth node as a result of step 210. The destination then processes the QoS resource reservation request packet and generates a QoS resource reservation response packet at step 211. For example, the destination can determine whether the flow was admitted by all nodes along the path (or similarly, whether all nodes can admit the flow). The destination can then add a successful (S) bit to the metadata in the QoS resource reservation response packet. For example, the S bit can be set to one to indicate the flow setup was a success at all nodes along the path or zero to indicate the flow setup was not successful at one or more nodes along the path. When the setup is successful, the destination may assign a priority to the flow and indicate the priority in the metadata in the QoS resource reservation response packet. The destination may also include a reverse path in the metadata of the QoS resource reservation response packet to ensure that the QoS resource reservation response packet traverses the same path as the QoS resource reservation request packet. In the event that flow setup is not successful, the destination may include feedback to the source as metadata in the QoS resource reservation response packet. For example, the destination can use the metadata from the QoS resource reservation request packet to determine parameters that are likely to result in an admitted flow and can then include such parameters in the metadata of the QoS resource reservation response packet. For example, the destination can include a recommended deadline based on the total latency from the QoS resource reservation request packet. In another example, the destination can include a recommended CIR based on the allowed CIR in the QoS resource reservation request packet. In this way, the QoS resource reservation response packet includes an indication of the parameters that are likely to result in a successful flow allocation in a further round of resource reservation.
[0043] In some examples, the QoS resource reservation request packet includes a contract clause that directs the nodes to allocate resources when the flow is admitted in response to the QoS resource reservation request packet. In such cases, the QoS resource reservation response packet includes a contract clause that directs the nodes to release such allocations when the flow admission is not successful for the entire path. In another example, the resources are not allocated by the QoS resource reservation request packet. In such a case, the QoS resource reservation response packet includes a contract clause that directs the nodes to allocate such resources when the flow admission is successful for the entire path.
[0044] Once the QoS resource reservation response packet is generated at step 211, the QoS resource reservation response packet is forwarded along the reverse path at step 212. Specifically, the QoS resource reservation response packet is transmitted to the fourth node, the third node, the second node, the first node, and the source at step 212, step 214, step 216, step 218, and step 222, respectively. Further, the QoS resource reservation response packet is processed at each node at step 213, step 215, step 217, and step 219. Specifically, the nodes either allocate resources based on the metadata, release resources based on the metadata, or make no change to the allocation based on the metadata, depending on the example.
[0045] At step 221, the source receives the QoS resource reservation response packet and determines whether flow setup was successful, for example based on the S bit. When flow setup was successful, the source can employ the metadata in the QoS resource reservation response packet to either begin sending or receiving the flow, depending on the example. However, when the flow setup is not successful, the source can generate a second QoS resource reservation request packet based on the metadata from the QoS resource reservation response packet. For example, the source can set the requested CIR in the second QoS resource reservation request packet based on the recommended CIR in the QoS resource reservation response packet. As another example, the source can set the requested latency deadline in the second QoS resource reservation request packet based on the recommended deadline in the QoS resource reservation response packet. The source can then forward the second QoS resource reservation request packet to the first node at step 224. The second QoS resource reservation request packet is then handled in a substantially similar manner to the first QoS resource reservation request packet. Accordingly, the source can continue to request flow admission using various parameters until the flow is admitted. Alternatively, the source can determine that resource allocation is not possible and either wait for the resources to become available or rely on best efforts communications for the flow. Accordingly, by employing mechanism 200, network resources along a path can be requested and/or allocated based on data plane signaling without relying on control plane signaling.
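The source-side handling of the response, including construction of the second request of step 224 from the recommended values, might be sketched as follows under the same assumed field names.

def handle_response_at_source(last_request, response):
    """Return a second request built from the response's recommendations,
    or None when flow setup succeeded and no retry is needed."""
    if response["S"] == 1:
        return None  # flow admitted; the source can begin sending the flow
    second_request = dict(last_request)
    # Adopt the destination's recommendations as the new requested values.
    second_request["requested_cir"] = response["recommended_cir"]
    second_request["deadline"] = response["recommended_deadline"]
    # Reset the per-hop bookkeeping for the new round of reservation.
    second_request["hop_index"] = 0
    second_request["admissible"] = [0] * len(last_request["admissible"])
    second_request["total_max_latency"] = 0
    second_request["allowed_cir"] = response["recommended_cir"]
    return second_request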
[0046] FIG. 3 is a schematic diagram of an example network element 300. The network element 300 is suitable for implementing the disclosed examples/embodiments as described herein. The network element 300 comprises downstream ports 320, upstream ports 350, and/or transceiver units (Tx/Rx) 310, including transmitters and/or receivers for communicating data upstream and/or downstream over a network. The network element 300 also includes a processor 330 including a logic unit and/or central processing unit (CPU) to process the data and a memory 332 for storing the data. The network element 300 may also comprise electrical, optical-to-electrical (OE) components, electrical-to-optical (EO) components, and/or wireless communication components coupled to the upstream ports 350 and/or downstream ports 320 for communication of data via electrical, optical, or wireless communication networks. The network element 300 may also include input and/or output (I/O) devices for communicating data to and from a user. The I/O devices may include output devices such as a display for displaying image and/or video data. The I/O devices may also include input devices, such as a keyboard, mouse, trackball, etc., and/or corresponding interfaces for interacting with such output devices.
[0047] The processor 330 is implemented by hardware and software. The processor 330 may be implemented as one or more CPU chips, cores (e.g., as a multi-core processor), field-programmable gate arrays (FPGAs), application specific integrated circuits (ASICs), and digital signal processors (DSPs). The processor 330 is in communication with the downstream ports 320, Tx/Rx 310, upstream ports 350, and memory 332. The processor 330 comprises a reservation module 314. The reservation module 314 implements the disclosed embodiments described herein, such as mechanism 200, method 800, method 900, and/or method 1000, which may employ a packet, such as a New IP packet 500, including resource reservation request metadata 600 and/or resource reservation response metadata 700. The reservation module 314 may also implement any other method/mechanism described herein. Further, the reservation module 314 may implement functionality in a device in network 100, such as in a client 101, a source 103, a node 105, a server 107, a destination 109, and/or a network node 400. Further, the reservation module 314 may implement a device in system 1100. For example, the reservation module 314 may employ in-band signaling to request a QoS based resource reservation, process a QoS based resource reservation request, and/or reply to a QoS based resource reservation request. For example, a reservation module 314 on a source may employ an LGS request packet to communicate service requirement parameters for one or more flows as metadata. The reservation module 314 on a transit node may admit flows based on such metadata. When operating on a destination, the reservation module 314 may determine whether the resource reservation was successful and send an LGS response packet back to the source on the reverse path to confirm or deny the resource reservation and/or communicate recommendations via relevant metadata. Hence, the reservation module 314 causes the network element 300 to provide additional functionality, such as reserving network resources via data plane signaling without employing complex control plane protocols. Hence, the reservation module 314 allows for fast, dynamic, responsive, and scalable network resource reservations for network flows. As such, the reservation module 314 improves the functionality of the network element 300 as well as addresses problems that are specific to the network communication arts. Further, the reservation module 314 effects a transformation of the network element 300 to a different state. Alternatively, the reservation module 314 can be implemented as instructions stored in the memory 332 and executed by the processor 330 (e.g., as a computer program product stored on a non-transitory medium).

[0048] The memory 332 comprises one or more memory types such as disks, tape drives, solid-state drives, read only memory (ROM), random access memory (RAM), flash memory, ternary content-addressable memory (TCAM), static random-access memory (SRAM), etc. The memory 332 may be used as an overflow data storage device, to store programs when such programs are selected for execution, and to store instructions and data that are read during program execution.
[0049] FIG. 4 is a schematic diagram of an example network node 400 configured to perform traffic classification. For example, the network node 400 may be implemented in a transit node 105 and/or a network element 300. Further, the network node 400 may be employed to implement the functionality of a node in mechanism 200. Specifically, the network node 400 is configured to implement a differentiated services (Diffserv) network architecture that is expanded to support LGS traffic and bandwidth guarantee service (BGS) where LGS traffic receives guaranteed latency and where BGS traffic receives a guaranteed CIR and hence a guaranteed bandwidth.
[0050] The network node 400 includes ingress ports 411 that receive upstream traffic at a rate of Ringress. The ingress ports 411 are an upstream interface between link(s) connected to the node 400 and the node’s 400 traffic processing hardware. The network node 400 also includes a traffic classifier 413 and priority queues 415. A traffic classifier 413 is a software and/or hardware component configured to sort traffic from the ingress ports 411 into traffic classes and assign such traffic into the priority queues 415 based on traffic class. The traffic can be sorted into traffic classes based on rules stored at the node 400 and/or information contained in the traffic packets. The priority queues 415 are implemented in memory and store the packets until the packets can be transmitted downstream. The network node 400 also includes a scheduler 417. The scheduler 417 is a software and/or hardware component configured to determine the order that packets in the priority queues 415 are transmitted downstream. The scheduler 417 prioritizes transmission of higher priority traffic over lower priority traffic based on the priority queues 415. The network node 400 also includes egress ports 419 that transmit traffic downstream at a rate of Regress. The egress ports 419 are a downstream interface between link(s) connected to the node 400 and the node’s 400 traffic processing hardware. It should also be noted that upstream indicates the direction of a source of relevant traffic and downstream indicates the direction of a destination of relevant traffic.
[0051] The Diffserv architecture uses a six-bit differentiated services code point (DSCP) in the header of an IP packet to indicate the classification of the corresponding packet. For example, the traffic classifier 413 may assign traffic to the priority queues 415 based on the DSCP in each packet. According to the Diffserv architecture, the priority queues 415 include an expedited forwarding (EF) queue, a series of assured forwarding (AF) queues, and a best efforts (BE) queue. The EF queue is a low delay, low loss, and low jitter priority queue and is generally used for voice, video, or other real-time data communication services. The AF queues assure delivery of packets as long as the traffic does not exceed a rate that corresponds to an AF queue number. Accordingly, different AF queues provide different levels of priority, and hence provide diminishing levels of guarantees regarding latency, bandwidth, etc. The BE queue is the lowest priority queue. As such, data in the BE queue is the most likely to be dropped in case of congestion. For example, the data in the BE queue is generally transmitted when all higher priority queues are consistently meeting the guarantees for their traffic. However, BE traffic can be held to ensure higher priority traffic guarantees can be met.
[0052] The network node 400 is also expanded to provide an LGS queue to prioritize LGS traffic and a BGS queue to prioritize CIR traffic. In node 400, the LGS queue is lower priority than the EF queue, but higher priority than all other queues. Further, the BGS queue is lower priority than the EF queue and the LGS queue, but higher priority than the AF queues and the BE queue. The following describes the functionality of the LGS queue and the BGS queue as well as the network node’s 400 management of associated traffic.
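The classification into these queues can be sketched as a simple DSCP lookup. DSCP 46 (EF), 10 (AF11), and 0 (BE) follow common Diffserv practice, while the LGS and BGS code points shown are purely hypothetical placeholders, not values fixed by this disclosure or by the Diffserv standard.

# Assumed DSCP-to-queue mapping for the expanded Diffserv model of FIG. 4.
QUEUE_BY_DSCP = {
    46: "EF",
    10: "AF11",
    0: "BE",
    45: "LGS",   # hypothetical code point for latency guarantee service
    44: "BGS",   # hypothetical code point for bandwidth guarantee service
}

def classify(packet):
    """Sort a packet into a priority queue name based on its DSCP bits."""
    return QUEUE_BY_DSCP.get(packet["dscp"], "BE")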
[0053] LGS denotes a traffic class that provides an accurate latency guarantee service. BGS denotes a traffic class that provides a deterministic bandwidth guarantee. Depending on whether EF class traffic is present, the traffic classifier 413 may take two different approaches for queueing LGS and BGS traffic. The traffic classifier 413 categorizes the LGS and BGS traffic into queues for different classes. The scheduler 417 may employ a hybrid scheduling approach. For example, the scheduler 417 may use a Strict Priority Queue (SPQ) for the EF queue and the LGS queue. The scheduler 417 may also use Deficit Weighted Round Robin (DWRR) for the BGS queue, the AF queues, and the BE queue.
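A minimal sketch of such a hybrid scheduler, with strict priority for the EF and LGS queues and one DWRR round over the remaining queues, is shown below; the queue names, packet representation, and quanta are assumptions chosen for illustration.

from collections import deque

class HybridScheduler:
    """Strict priority for EF and LGS; one DWRR round over BGS, AF, and BE."""

    def __init__(self, weights):
        self.strict = {"EF": deque(), "LGS": deque()}
        self.dwrr = {name: deque() for name in weights}
        self.weights = weights                    # per-queue quantum in bytes
        self.deficit = {name: 0 for name in weights}

    def enqueue(self, queue_name, packet):
        target = self.strict if queue_name in self.strict else self.dwrr
        target[queue_name].append(packet)

    def schedule_round(self):
        """Return the packets transmitted in one scheduling round."""
        sent = []
        # Strict priority: drain EF first, then LGS.
        for name in ("EF", "LGS"):
            while self.strict[name]:
                sent.append(self.strict[name].popleft())
        # Deficit weighted round robin over the remaining queues.
        for name, queue in self.dwrr.items():
            if not queue:
                continue
            self.deficit[name] += self.weights[name]
            while queue and queue[0]["size"] <= self.deficit[name]:
                packet = queue.popleft()
                self.deficit[name] -= packet["size"]
                sent.append(packet)
            if not queue:
                self.deficit[name] = 0
        return sent

scheduler = HybridScheduler({"BGS": 3000, "AF": 1500, "BE": 500})
scheduler.enqueue("LGS", {"size": 1200})
scheduler.enqueue("BE", {"size": 400})
print([p["size"] for p in scheduler.schedule_round()])  # -> [1200, 400]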
[0054] When the EF queue is empty and the LGS queue has the highest priority to be scheduled, the maximum latency for an LGS flow can be calculated as follows:

Max_Latency = (N_LGS × Lmax) / Regress (1)

[0055] N_LGS denotes the number of LGS packets that have arrived at a node at a specified instant while transmission is ongoing across an ingress link. Lmax is the maximum packet size in bits. The maximum latency is calculated in Equation (1) as the total maximum size of packets of the current LGS flow, N_LGS × Lmax, divided by the egress rate Regress. N_LGS is calculated as shown in Equation (2).

[Equation (2): the expression for N_LGS in terms of the aggregated LGS ingress rate is not reproduced here.]

R_LGS_ingress is the aggregated ingress flow rate of the currently present LGS flows, which can be approximated as the aggregated CIR as shown in Equation (3). In Equation (3), a constant burst coefficient r represents the deviation of the real flow rate from the given CIRs of the already admitted LGS flows CIR_LGS and the current LGS flow (a total of n LGS flows). For example, r can be one.

R_LGS_ingress = r × (CIR_LGS,1 + CIR_LGS,2 + ... + CIR_LGS,n) (3)

R_LGS_ingress could be set to R_LGS_total to compute the upper bound of latency for LGS flows, where R_LGS_total is the total allowed rate for all LGS flows for the ingress interface, which may be pre-configured by the router administrator and cannot exceed the rate of the ingress interface Ringress.
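As a numerical illustration of the Equation (1) bound as reconstructed above (the values below are assumptions chosen for the example, not parameters from this disclosure):

N_LGS = 20                # LGS packets queued at the instant of interest
L_MAX = 1500 * 8          # maximum packet size: 1500 bytes, in bits
R_EGRESS = 1_000_000_000  # egress rate: 1 Gbps

max_latency_seconds = N_LGS * L_MAX / R_EGRESS
print(f"per-hop LGS latency bound: {max_latency_seconds * 1e6:.0f} us")  # -> 240 us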
[0056] When the EF class queue is not empty and the LGS queue has the second highest priority to be scheduled, N_LGS should account for the packet arrival of EF class flows, which are scheduled ahead of the LGS flows. The LGS flows wait for the EF class packets to be put on the egress link, during which time an additional number of LGS packets arrive. In this case, N_LGS can be calculated according to Equation (4).

[Equation (4): the expression accounting for the EF class packets served ahead of the LGS queue is not reproduced here.]
[0057] An LGS flow is admissible to node 400 when the following three conditions are satisfied. First, for the ingress 411, the CIR carried in the metadata, added to the CIRs of the flows already admitted at the ingress, should not exceed the configured total ingress rate Ringress. Second, for the egress 419, the CIR carried in the metadata, added to the CIRs of the flows already admitted at the egress, should not exceed the configured total egress rate Regress. Third, after adding the maximum latency generated at the current router, the Total_Latency should not exceed the Deadline.
[0058] When a network node 400 receives a QoS resource reservation request packet, such as an LGS setup packet with a proposed contract clause and metadata configured in a New IP header, the network node 400 should perform actions as described by mechanism 200. The requested CIR should satisfy the ingress and egress conditions described above. The network node 400 may then obtain the upper bound latency for the requested LGS and add the upper bound latency to Total_Latency. The network node 400 then compares Total_Latency with Deadline. When Total_Latency is smaller than Deadline and adding the requested CIR does not exceed the configured total ingress or egress rate, the Admissible bit for the current hop may be set to one, and the requested CIR is recorded. The residual ingress and egress rates are reduced by the requested CIR, respectively. When Total_Latency is larger than Deadline and adding the requested CIR does not exceed the configured total ingress and egress rates, the Admissible bit for the current hop is set to zero. When adding the requested CIR does exceed the configured total ingress or egress rate, the Admissible bit for the current hop is set to zero, and the allowedCIR is set to the lowest CIR that is allowed for the current hop and the previous hops. After the above steps, the network node 400 forwards the packet to the next hop until the packet reaches the destination (e.g., server).
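The per-hop processing of paragraph [0058] might be sketched as follows; the field names are assumptions for the sketch, and node_upper_bound_latency() is a hypothetical helper standing in for the per-hop bound of Equations (1) through (4).

def handle_lgs_setup(node, pkt, node_upper_bound_latency):
    """Admission and metadata updates for one hop of an LGS setup packet."""
    hop = pkt["hop_index"]
    cir_fits = (pkt["requested_cir"] <= node["residual_ingress"] and
                pkt["requested_cir"] <= node["residual_egress"])

    # Accumulate this hop's worst-case latency before the deadline check.
    pkt["total_latency"] += node_upper_bound_latency(node, pkt)

    if not cir_fits:
        pkt["admissible"][hop] = 0
        # allowedCIR tracks the lowest CIR acceptable over this and prior hops.
        pkt["allowed_cir"] = min(pkt["allowed_cir"],
                                 node["residual_ingress"],
                                 node["residual_egress"])
    elif pkt["total_latency"] > pkt["deadline"]:
        pkt["admissible"][hop] = 0
    else:
        pkt["admissible"][hop] = 1
        node["residual_ingress"] -= pkt["requested_cir"]
        node["residual_egress"] -= pkt["requested_cir"]

    pkt["hop_index"] = hop + 1
    return pkt  # then forwarded to the next hop toward the destination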
[0059] FIG. 5 is a schematic diagram of an example New IP packet 500 that may be employed to perform data plane signaling to reserve network resources. For example, the New IP packet 500 may be configured to contain the metadata to support signaling in mechanism 200. As such, the New IP packet 500 can be routed into priority queues 415 in network node 400. Further, the New IP packet 500 can be transmitted between a source 103 and a destination 109 via transit nodes 105 in network 100. In addition, the New IP packet 500 can be received, processed, and transmitted by network element 300.
[0060] The New IP protocol is an IP based data plane protocol designed to support machine to machine communication and enhance user experience. The New IP protocol uses a New IP packet 500 to perform these features. Specifically, a New IP packet 500 can be used to flexibly program network nodes via data plane signaling to provide specified handling of a current packet, a current flow, and/or a set of current flows without relying on the control plane to set up such handling ahead of time. As such, New IP can coexist with traditional IP network architecture while providing additional functionality.

[0061] The New IP packet 500 comprises a header 501, a shipping specification 503, a contract 505, and a payload 507. The New IP packet 500 has a variable length. Accordingly, the header 501 includes a series of offsets that act as pointers. A router receiving a New IP packet 500 can review the offsets in the header 501 to determine the location of corresponding data in the New IP packet 500. For example, the header 501 may comprise a shipping pointer, a contract pointer, and a payload pointer that point to the starting bits of the shipping specification 503, the contract 505, and the payload 507, respectively.
[0062] The shipping specification 503 indicates the source address and destination address. Specifically, the shipping specification 503 employs a flexible addressing scheme. The shipping specification 503 contains an address type field indicating the addressing format used. The shipping specification 503 also contains one or more address cast field(s) indicating the nature of the communication, such as one to one, many to one, anycast, multicast, broadcast, coordinating casting, etc. The shipping specification 503 also contains the source address, the destination address, and data indicating the lengths of the addresses.
[0063] The contract 505 is used to dynamically program each node to provide requested services to packets and/or flows. Traditional IP networks attempt to guarantee service levels for users and/or flows in the aggregate, but can only retroactively determine whether such guarantees were actually met. The contract 505 directs the node to treat the current packet in a prescribed manner, and hence can ensure that guarantees are met in real time on a packet by packet basis. The contract 505 includes one or more contract clauses 510 and optionally metadata 511. A contract clause 510 can include one or more of an event, a condition, and an action. An event is used to describe a network event related to a condition and/or an action. Such network events may include specified packet queue levels at a node, path outages, a late packet, a packet drop, a specified next hop, a packet count, etc. Conditions are arithmetic and/or logical operators used to perform checks. Such conditions may include equals, less than, greater than or equal to, or, and, exclusive or (xor), etc. An action is a step that a node should take, for example in response to an event and/or condition. Actions can include a wide range of network node functionality such as reporting events, dynamic routing changes, and/or packet prioritization. Metadata 511 is any data that describes other data. The payload 507 is the actual data transmitted from a source to a destination across the data plane.
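An assumed in-memory representation of such an event/condition/action clause and its evaluation at a node is sketched below; the New IP wire encoding of clauses is not shown, and the example condition and action are illustrative only.

from dataclasses import dataclass
from typing import Any, Callable, Dict

@dataclass
class ContractClause:
    """Assumed in-memory form of an event/condition/action contract clause."""
    event: str
    condition: Callable[[Dict[str, Any]], bool]
    action: Callable[[Dict[str, Any]], None]

    def evaluate(self, state: Dict[str, Any]) -> None:
        # Apply the action only when the condition holds for the node state.
        if self.condition(state):
            self.action(state)

# Example clause: reserve bandwidth when the requested CIR can be accommodated.
reserve_clause = ContractClause(
    event="resource_reservation_request",
    condition=lambda s: s["requested_cir"] <= s["allocable_bandwidth"],
    action=lambda s: s.update(
        allocable_bandwidth=s["allocable_bandwidth"] - s["requested_cir"]),
)

state = {"requested_cir": 10_000_000, "allocable_bandwidth": 50_000_000}
reserve_clause.evaluate(state)
print(state["allocable_bandwidth"])  # -> 40000000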
[0064] The presently described mechanisms can be implemented by employing the contract clauses 510 and metadata 511 to reserve network resources using data plane signaling. For example, a contract clause 510 can be used to reserve network resources (action) if/when (condition) the requested latency and/or requested CIR can be accommodated (event). In another example, the contract clause 510 can be used to release reserved network resources (action) if/when (condition) a flow is not admissible for an entire path (event). In yet another example, the contract clause 510 can be used to reserve network resources (action) if/when (condition) a flow is admissible for an entire path (event). Hence, the New IP Packet 500 can be employed to implement a QoS resource reservation request packet and/or a QoS resource reservation response packet.
[0065] The metadata 511 can be used to store parameters relating to the event. For example, the metadata 511 can indicate the requested latency and/or CIR, the latency and/or allowed CIR from previous hops, and/or a suggested latency and/or suggested CIR for future requests. In an example, metadata 511 can be employed as part of a QoS resource reservation request packet and/or a QoS resource reservation response packet as described in more detail below.

[0066] It will be understood from the disclosure that similar conditional directives, metadata, and other features can be provided using protocols other than New IP. Thus, the scope of the present disclosure should not be limited to New IP.
[0067] FIG. 6 illustrates an example of resource reservation request metadata 600 that can be employed to request that network resources be reserved via data plane signaling. For example, the metadata 600 can be employed in a metadata 511 in a New IP packet 500. Accordingly, the metadata 600 can be used to implement mechanism 200 in a source 103, a transit node 105, a destination 109, a network element 300, and/or a network node 400.
[0068] The metadata 600 can be used to support a request to reserve bandwidth and/or to request a latency guarantee along a path between a source and a destination. For example, the metadata can be included in any type of QoS resource reservation request packet, such as an LGS request packet. The metadata 600 comprises the following fields: a flow identification method (FlowIdMethod) 601, a hop number (Hop num) 603, a unit 605, a total maximum latency (Total Max Latency) 607, a requested deadline (Deadline) 609, a CIR 611, an allowed CIR (allowedCIR) 613, and Admissible 615.
[0069] The FlowIdMethod 601 can be used to denote a method of identifying the flow, for example as an individual flow (e.g., 0), Transmission Control Protocol (TCP) flows (e.g., 1), User Datagram Protocol (UDP) flows (e.g., 2), all flows (e.g., 3), and flows with the same DSCP bits (e.g., 4). An individual flow may be a non-IPsec flow or an IPsec flow. A non-IPsec individual flow may be identified by source and destination address, source and destination port number, and protocol number. An IPsec individual flow can be identified by source and destination address and a flow label. End-to-end latency can be guaranteed for a dedicated IP flow. TCP flows can be identified by source and destination address and TCP protocol number. End-to-end latency can be guaranteed for all TCP flows that have the same source and destination address. UDP flows can be identified by source and destination address and UDP protocol number. End-to-end latency can be guaranteed for all UDP flows that have the same source and destination address. All flows can be identified by source and destination address. End-to-end latency can be guaranteed for all IP flows that have the same source and destination address. DSCP flows are flows that have been assigned DSCP bits according to the Diffserv network architecture as described relative to network node 400. A DSCP flow can be identified by the DSCP bits. End-to-end latency can be guaranteed for all IP flows that have the same DSCP bits.

[0070] Hop_num 603 can be used to indicate the total number of hops on the path from the source/client to the destination/server. The Hop_num 603 should be decremented (or incremented depending on the example) at each hop after processing. The unit 605 can indicate the unit of latency, for example zero for milliseconds (ms) or one for microseconds (us). The Total Max Latency 607 can indicate the maximum value of the total latency accumulated along the path. Total Max Latency 607 can be increased at each intermediate node by the maximum per-hop latency that is estimated by the corresponding intermediate node. The Deadline 609 can indicate the upper bound of the end-to-end latency as requested by the source/client. The CIR 611 can indicate the CIR in bits per second (bps) as requested by the client. The allowedCIR 613 can indicate the CIR that can be allowed by the path. The allowedCIR 613 can be determined by considering the maximum CIR that can be allowed at each router and selecting the minimum value among such maximum values. This can be determined by having each node that fails to admit the flow determine that node’s maximum allowable CIR. The node can then set the allowedCIR 613 to that node’s maximum allowable CIR when that node’s maximum allowable CIR is lower than the value currently stored in the allowedCIR 613. The Admissible 615 can include a number of bits equal to the number of hops along the path between a source and destination. Each bit of Admissible 615 can be set to indicate whether the flow can be admitted by a corresponding intermediate node. The first hop can be indicated by the most significant bit and the last hop can be indicated by the least significant bit. As discussed above, variations on this set of metadata are also possible, and different fields could be added or omitted.
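A container mirroring these fields might be sketched as follows; the types, defaults, and field widths are assumptions, and only the field names follow the description above.

from dataclasses import dataclass, field
from typing import List

@dataclass
class ReservationRequestMetadata:
    """Assumed container mirroring the FIG. 6 fields; encodings not shown."""
    flow_id_method: int          # FlowIdMethod: 0 individual, 1 TCP, 2 UDP, 3 all, 4 DSCP
    hop_num: int                 # hops remaining; decremented per hop
    unit: int                    # 0 = milliseconds, 1 = microseconds
    deadline: int                # requested end-to-end latency upper bound
    cir: int                     # requested CIR in bps
    total_max_latency: int = 0   # worst-case latency accumulated along the path
    allowed_cir: int = 0         # lowest CIR allowable along the path so far
    admissible: List[int] = field(default_factory=list)  # one bit per hop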
[0071] FIG. 7 illustrates an example of resource reservation response metadata 700 that can be employed to communicate a result of a resource reservation request via data plane signaling. For example, the metadata 700 can be employed in a metadata 511 in a New IP packet 500. Accordingly, the metadata 700 can be used to implement mechanism 200 in a source 103, a transit node 105, a destination 109, a network element 300, and/or a network node 400. For example, the metadata 700 can be included in a QoS resource reservation response packet in response to a QoS resource reservation request packet containing metadata 600.
[0072] The metadata 700 can be used to report the results of a request to reserve bandwidth and/or a request to provide a latency guarantee along a path between a source and a destination. For example, the metadata can be included in any type of QoS resource reservation response packet, such as an LGS response packet, sent from a destination back to a source along a reverse path to indicate the results of a resource reservation request. The metadata 700 comprises the following fields: a successful (S) 701, a flow priority (FlowPriority) 703, a reverse path (ReversePath) 705, a recommended deadline (RecommendedDeadline) 707, and a recommended CIR (RecommendedCIR) 709.

[0073] The S 701 can be used to indicate whether the LGS setup and/or the CIR setup is successful or not. For example, S 701 can be set to true when the flow was admitted to all nodes along the path at the requested parameters and false when the flow was not admitted by at least one node along the path. The FlowPriority 703 can be used to notify the source/client of the priority assigned to the flow. For example, the FlowPriority 703 can indicate a DSCP of the flow when S 701 is true. In another example, the FlowPriority 703 can include or otherwise point to a New IP header that indicates such a priority. As an example, the LGS flows may be marked with a DSCP value that specifies an LGS priority as described with respect to priority queues 415 in network node 400. The DSCP value ensures that the LGS flows have the highest priority to be scheduled at the egress (e.g., after EF). The FlowPriority 703 can be omitted when S 701 is false. The ReversePath 705 can be used to record the path back to the client/source, which follows the reverse path from the client/source to the server/destination. Accordingly, the ReversePath 705 can be used to ensure that the QoS resource reservation response packet is received by each of the nodes that previously handled the QoS resource reservation request packet. The RecommendedDeadline 707 may be used to indicate the value of Total Max Latency 607 from the QoS resource reservation request packet/LGS setup packet when S 701 is false. Accordingly, the RecommendedDeadline 707 indicates to the source/client a latency deadline that is likely to be acceptable for a further request. When S 701 is true, the RecommendedDeadline 707 is not needed (as the resource allocation for the flow is already successfully set up) and can be omitted. The RecommendedCIR 709 is used to indicate the value of allowedCIR 613 from the QoS resource reservation request packet/LGS setup packet when S 701 is false. Accordingly, the RecommendedCIR 709 indicates to the source/client a CIR value that is likely to be acceptable for a further request. When S 701 is true, the RecommendedCIR 709 is not needed (as the resource allocation for the flow is already successfully set up) and can be omitted. As in Figure 6, variations on this set of metadata are also possible, and different fields could be added or omitted.
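A corresponding container for the response metadata might be sketched as follows, under the same caveat that the types and encodings are assumptions.

from dataclasses import dataclass
from typing import List, Optional

@dataclass
class ReservationResponseMetadata:
    """Assumed container mirroring the FIG. 7 fields; encodings not shown."""
    s: bool                                      # True when setup succeeded end to end
    reverse_path: List[str]                      # node identifiers back to the source
    flow_priority: Optional[int] = None          # present only when s is True
    recommended_deadline: Optional[int] = None   # present only when s is False
    recommended_cir: Optional[int] = None        # present only when s is False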
[0074] FIG. 8 is a flowchart of an example method 800 of requesting reservation of network resources at a source via data plane signaling. For example, method 800 may be employed to implement mechanism 200 in a source 103, a client 101, a server 107, a transit node 105, a network element 300, and/or a network node 400. Further, method 800 may employ one or more New IP packets 500, for example to communicate resource reservation request metadata 600 and/or resource reservation response metadata 700. Method 800 may be implemented by a client/source that determines to request a reservation of network resources. Accordingly, the method 800 is described from the perspective of the client/source for clarity of discussion. However, corresponding packets are communicated across a network, and hence a transit node may also implement method 800 in some examples.

[0075] Method 800 begins when a source/client determines to request that network resources be allocated for communicating one or more flows along a path between the source/client and a destination/server. At step 801, the source transmits a QoS resource reservation request packet along the path to the destination via data plane signaling. The QoS resource reservation request packet requests QoS resources be provisioned to guarantee certain minimum service be applied when a flow is communicated along the path. In one example, the QoS resource reservation request packet is a CIR/bandwidth request packet that requests a CIR be applied by each node along the path to the destination. Hence, the QoS resource reservation request packet can request a minimum CIR, bandwidth, and/or bit rate be made available by a path for handling a flow. In another example, the QoS resource reservation request packet may be an LGS request packet that requests an LGS with a total latency along the path that is equal to or smaller than a deadline. Hence, the QoS resource reservation request packet can request that a path communicate a flow while ensuring that each packet is communicated between the client/source and the destination/server with latency that is less than or equal to the deadline. In some examples, the QoS resource reservation request packet is referred to as an LGS request packet and includes both a CIR/bandwidth request and an LGS request.
[0076] In an example, the QoS resource reservation request packet may include one or more fields of resource reservation request metadata 600. For example, the QoS resource reservation request packet may comprise metadata including a CIR field indicating the CIR requested by the source and an allowed CIR field indicating a CIR continuously allowable along the path. For example, the CIR field can include a value set by the source, and the allowed CIR field can be set to a default value (e.g., set equal to the CIR requested) and updated by transit nodes as the packet traverses the path to the destination. Such updating may include reducing the value in the allowed CIR field to a maximum value of allocable CIR at a corresponding node when the corresponding node is not capable of providing the entire CIR requested, in order to determine the maximum value of continuously allocable CIR along the entire path. Hence, the CIR field and the allowed CIR field can be used to implement the CIR/bandwidth/bit rate resource reservation along the path.
[0077] In another example, the QoS resource reservation request packet comprises metadata containing a deadline field indicating a requested upper bound for an end-to-end latency between the source and the destination and a total maximum latency field indicating an accumulated latency along the path. For example, the deadline field can include a value set by the source and the total maximum latency field can be set to a default value (e.g., set equal zero) and updated by transit nodes as the packet traverses the path to the destination. Such updating may include increasing the value in the total maximum latency field at each node by an amount of latency that the corresponding node projects will be added by the node to a flow packet communicated via the node. Hence, the deadline field and the total maximum latency field can be used to reserve resources to implement the latency guarantee along the path. In some examples, the QoS resource reservation request packet includes the CIR field, the allowed CIR field, the deadline field, and the total maximum latency field, and hence reserves resources for both CIR and LGS guarantees.
[0078] The QoS resource reservation request packet may also contain additional metadata. For example, the QoS resource reservation request packet may contain metadata including an admissible field indicating whether the flow is admitted at each hop along the path. For example, the admissible field may include a bit for each transit node along the path between the source and the destination. Each node can set a corresponding bit to zero/one, true/false, etc., to indicate whether the corresponding node can admit the flow based on the terms requested by other metadata. The QoS resource reservation request packet may also contain other metadata, such as an indication of the type of flow(s), a hop counter, a unit of latency, etc., as described with respect to resource reservation request metadata 600.
[0079] At step 803, the source receives a QoS resource reservation response packet indicating results of the QoS resource reservation request packet. For example, the results of the resource reservation request may be contained in metadata in the QoS resource reservation response packet. In an example, the QoS resource reservation response packet contains metadata including an S field indicating whether LGS setup is successful for the flow. For example, the S field can be set by the destination as one/zero, true/false, etc., to indicate whether all of the requested values can be supported by all of the nodes along the path, and hence to indicate whether the entire path can support the requested values. Further, the S field can indicate whether all nodes along the path admitted the flow and/or whether such nodes have allocated corresponding resources for the flow. The QoS resource reservation response packet may contain metadata including a flow priority field that indicates a priority for the flow. Such a flow priority can be assigned by the destination. For example, when the Diffserv model is employed, the flow priority may be set to LGS and/or BGS. The flow priority field may only be included when the flow setup is successful. In another example, the QoS resource reservation response packet contains metadata including a reverse path field indicating a reverse path from the destination to the source (e.g., to ensure the QoS resource reservation response packet traverses the same set of nodes that handled the QoS resource reservation request packet and releases corresponding resources when flow setup is not successful). In another example, the QoS resource reservation response packet contains metadata including a recommended deadline field indicating a recommended deadline set based on a value of the total maximum latency in the QoS resource reservation request packet at the destination. The recommended deadline field may only be included when the flow setup is not successful. In another example, the QoS resource reservation response packet may contain metadata including a recommended CIR field indicating a recommended CIR set based on a value of the allowed CIR in the QoS resource reservation request packet at the destination.
[0080] Step 805 is an optional step that may be omitted when flow setup is successful. Specifically, the source may employ step 805 to make another attempt to set up the flow based on feedback from the QoS resource reservation response packet. At step 805, the source may transmit a second QoS resource reservation request packet with a CIR requested by the source set based on the recommended CIR in the QoS resource reservation response packet when the QoS resource reservation response packet indicates that setup is not successful. Further, the second QoS resource reservation request packet may comprise a deadline set based on the recommended deadline in the QoS resource reservation response packet when the QoS resource reservation response packet indicates that setup is not successful. The source can then wait and receive a second QoS resource reservation response packet in response to the second QoS resource reservation request packet.
[0081] As discussed above, the first and/or second QoS resource reservation request packet and the first and/or second QoS resource reservation response packet may each be included in a corresponding New IP packet.
[0082] FIG. 9 is a flowchart of an example method 900 of performing resource reservation at a destination by employing data plane signaling. For example, method 900 may be implemented at a destination/server in response to the packets communicated in method 800. Accordingly, method 900 may be employed to implement mechanism 200 in a destination 109, a client 101, a server 107, a transit node 105, a network element 300, and/or a network node 400. Further, method 900 may employ one or more New IP packets 500, for example to communicate resource reservation request metadata 600 and/or resource reservation response metadata 700. Method 900 may be implemented by a destination/server that determines whether a request to reserve network resources has been successful. Accordingly, the method 900 is described from the perspective of the destination/server for clarity of discussion. However, corresponding packets are communicated across a network, and hence a transit node may also implement method 900 in some examples.

[0083] Method 900 begins when a network element, acting as a transit node and/or destination, receives a request to reserve network resources to support a CIR and/or LGS. At step 901, the network element receives a QoS resource reservation request packet from a source via data plane signaling along a path. The QoS resource reservation request packet requests QoS resources be provisioned to guarantee certain minimum service be applied when a flow is communicated along the path. In one example, the QoS resource reservation request packet is a CIR/bandwidth request packet that requests a CIR be applied by each node along the path to the destination. Hence, the QoS resource reservation request packet can request a minimum CIR, bandwidth, and/or bit rate be made available by a path for handling a flow. In another example, the QoS resource reservation request packet may be an LGS request packet that requests an LGS with a total latency along the path that is equal to or smaller than a deadline. Hence, the QoS resource reservation request packet can request that a path communicate a flow while ensuring that each packet is communicated between the client/source and the destination/server with latency that is less than or equal to the deadline. In some examples, the QoS resource reservation request packet is referred to as an LGS request packet and includes both a CIR/bandwidth request and an LGS request.
[0084] In an example, the QoS resource reservation request packet may include one or more fields of resource reservation request metadata 600. For example, the QoS resource reservation request packet may comprise metadata including a CIR field indicating the CIR requested by the source and an allowed CIR field indicating a CIR continuously allowable along the path. For example, the CIR field can include a value set by the source, and the allowed CIR field can be set to a default value (e.g., set equal to the CIR requested) and updated by transit nodes as the packet traverses the path to the destination. Such updating may include reducing the value in the allowed CIR field to a maximum value of allocable CIR at a corresponding node when the corresponding node is not capable of providing the entire CIR requested, in order to determine the maximum value of continuously allocable CIR along the entire path. Hence, the CIR field and the allowed CIR field can be used to implement the CIR/bandwidth/bit rate resource reservation along the path.
[0085] In another example, the QoS resource reservation request packet comprises metadata containing a deadline field indicating a requested upper bound for an end-to-end latency between the source and the destination and a total maximum latency field indicating an accumulated latency along the path. For example, the deadline field can include a value set by the source and the total maximum latency field set to a default value (e.g., set equal zero) and updated by transit nodes as the packet traverses the path to the destination. Such updating may include increasing the value in the total maximum latency field at each node by an amount of latency that the corresponding node projects will be added by the node to a flow packet communicated via the node. Hence, the deadline field and the total maximum latency field can be used to reserve resources to implement the latency guarantee along the path. In some examples, the QoS resource reservation request packet includes the CIR field, the allowed CIR field, the deadline field, and the total maximum latency field, and hence reserves resources for both CIR and LGS guarantees.
[0086] The QoS resource reservation request packet may also contain additional metadata. For example, the QoS resource reservation request packet may contain metadata including an admissible field indicating whether the flow is admitted at each hop along the path. For example, the admissible field may include a bit for each transit node along the path between the source and the destination. Each node can set a corresponding bit to zero/one, true/false, etc., to indicate whether the corresponding node can admit the flow based on the terms requested by other metadata. The QoS resource reservation request packet may also contain other metadata, such as an indication of the type of flow(s), a hop counter, a unit of latency, etc., as described with respect to resource reservation request metadata 600.
[0087] The network element can process the QoS resource reservation request packet. For example, when the network element is a destination, the network element determines whether the flow is admissible along the entire path based on the admissible field and generates a QoS resource reservation response packet based on the QoS resource reservation request packet. In another example, when the network element is a transit node, the network element updates relevant metadata and receives a responsive QoS resource reservation response packet from the destination. In either case, the network element transmits a QoS resource reservation response packet toward the source indicating results of the QoS resource reservation request packet. [0088] In an example, the QoS resource reservation response packet contains metadata including an S field indicating whether LGS setup is successful for the flow. For example, the S field can be set by the destination as one/zero, true/false, etc., to indicate whether all of the requested values can be supported by all of the nodes along the path, and hence to indicate whether the entire path can support the requested values. Further, the S field can indicate whether all nodes along the path admitted the flow and/or whether such nodes have allocated corresponding resources for the flow. The QoS resource reservation response packet may contain metadata including a flow priority field that indicates a priority for the flow. Such a flow priority can be assigned by the destination. For example, when the Diffserv model is employed, the flow priority may be set to LGS and/or BGS. The flow priority field may only be included when the flow setup is successful. In another example, the QoS resource reservation response packet contains metadata including a reverse path field indicating a reverse path from the destination to the source (e.g., to ensure the QoS resource reservation response packet traverses the same set of nodes that handled the QoS resource reservation request packet and releases corresponding resources when flow setup is not successful). In another example, the QoS resource reservation response packet contains metadata including a recommended deadline field indicating a recommended deadline set based on a value of the total maximum latency in the QoS resource reservation request packet at the destination. The recommended deadline field may only be included when the flow setup is not successful. In another example, the QoS resource reservation response packet may contain metadata including a recommended CIR field indicating a recommended CIR set based on a value of the allowed CIR in the QoS resource reservation request packet at the destination.
[0089] Step 905 is an optional step that may be omitted when flow setup is successful. However, when flow setup is not successful, the source may make another attempt to setup the flow based on parameters/feedback from the QoS resource reservation response packet. At step 905, the network element may receive a second QoS resource reservation request packet with a CIR requested by the source set based on the recommended CIR in the QoS resource reservation response packet when the QoS resource reservation response packet indicates that setup is not successful. Further, the second QoS resource reservation request packet may comprise a deadline set based on the recommended deadline in the QoS resource reservation response packet when the QoS resource reservation response packet indicates that setup is not successful. When the network element is a destination, the network element may generate and send a second QoS resource reservation response packet based on the second QoS resource reservation request packet. When the network element is a transit node, the network element may receive and forward a second QoS resource reservation response packet from the destination based on the second QoS resource reservation request packet.
[0090] As discussed above, the first and/or second QoS resource reservation request packet and the first and/or second QoS resource reservation response packet may each be included in a corresponding New IP packet.
[0091] FIG. 10 is a flowchart of an example method 1000 of performing resource reservation at an intermediate network node and/or a destination by employing data plane signaling. Method 1000 is an example implementation of the packet handling functions of method 900. For example, method 1000 may be implemented at a destination/server in response to the packets communicated in method 800. Accordingly, method 1000 may be employed to implement mechanism 200 in a destination 109, a client 101, a server 107, a transit node 105, a network element 300, and/or a network node 400. Further, method 1000 may employ one or more New IP packets 500, for example to communicate resource reservation request metadata 600 and/or resource reservation response metadata 700. Method 1000 may be implemented by a destination/server that determines whether a request to reserve network resources has been successful. Accordingly, the method 1000 is described from the perspective of the destination/server for clarity of discussion. However, corresponding packets are communicated across a network, and hence a transit node may also implement method 1000 in some examples.

[0092] Method 1000 begins when a network element, acting as a transit node and/or destination, receives a request to reserve network resources to support a CIR and/or LGS. At step 1001, the network element receives an LGS resource reservation request packet via data plane signaling between a source and a destination. The LGS resource reservation request packet requests provisioning of network element resources to guarantee an end-to-end latency for a flow. The network element then processes the LGS resource reservation request packet.
[0093] At step 1003, the network element determines a total latency for the LGS resource reservation request packet by adding a maximum latency generated by the network element to the accumulated total maximum latency metadata from the LGS resource reservation request packet. At step 1005, the network element also determines deadline metadata from the LGS resource reservation request packet. The data from steps 1003 and 1005 provide information to support a determination of whether the flow can be admitted while providing LGS.
[0094] At step 1007, the network element also determines a requested CIR metadata from the LGS resource reservation request packet. At step 1009, the network element also determines a total ingress rate and a total egress rate at the network element to support a determination of whether the flow can be admitted while providing CIR/bandwidth guarantees.
[0095] At step 1011, the network element determines whether to admit the flow from a CIR perspective based on a comparison of the requested CIR metadata, the total ingress rate, and the total egress rate. The network element also determines whether to admit the flow from an LGS perspective based on a comparison of the total latency and the deadline metadata. For example, the network can admit the flow when the total latency for the LGS resource reservation request is less than or equal to the deadline metadata, when the requested CIR metadata does not exceed the total allocable ingress rate, and when the requested CIR metadata does not exceed the total allocable egress rate. The network element can also allocate resources for the flow when the flow is admitted. The network element can also set/update the metadata in the LGS resource reservation request packet based on the admission decision. For example, the network element can set an admissible metadata in the LGS resource reservation request packet to an admitted value when the flow is admitted. Further, the network element can set an admissible metadata in the LGS resource reservation request packet to a denied value when the total latency for the LGS resource reservation request is greater than the deadline metadata. In addition, the network element can set an allowed CIR metadata in the LGS resource reservation request to a lowest allowed CIR value along a path between the source and the network element when the requested CIR metadata exceeds the total ingress rate or the total egress rate. For example, the network element can set the allowed CIR metadata to the allowed CIR value at the network element when such a value is lower than the value already contained in the allowed CIR metadata. The network element can also update the accumulated total maximum latency metadata from the LGS resource reservation request packet with the total latency as determined in step 1003.
[0096] The network element can forward the LGS resource reservation request packet toward the destination at step 1013. For example, when the network element is a transit node, the LGS resource reservation request packet can be transmitted toward the destination. When the network element is the destination, the LGS resource reservation request packet can be forwarded toward the relevant components for flow setup.
[0097] At step 1015, the network element can receive and/or generate an LGS response packet indicating results of the LGS resource reservation request packet, depending on whether the network element is a transit node or a destination. When the LGS response packet indicates LGS setup is not successful, the network element can release any resources allocated for the flow and transmit the LGS response packet toward the source. When the network element is the destination, the network element can generate the LGS response packet by setting an S field based on the admissible field in the LGS resource reservation request packet. When the network element is the destination, the network element can also set the RecommendedDeadline and RecommendedCIR based on the Total Max Latency and allowedCIR, respectively, from the LGS resource reservation request packet when the flow is not admissible. When the network element is the destination, the network element can also set a FlowPriority in the LGS response packet when the flow is admissible. When the network element is the destination, the network element can also set a ReversePath to ensure the LGS response packet traverses the same set of nodes as the LGS resource reservation request packet, and hence de-allocates resources when the flow is not admissible. In some examples, the LGS response packet allocates resources upon successful flow admission instead of allocating resources in response to the LGS resource reservation request packet in order to avoid allocating resources until flow admission is completed.

[0098] FIG. 11 is a schematic diagram of an example system 1100 for performing resource reservation by employing data plane signaling, for example according to mechanism 200, method 800, method 900, and/or method 1000. The system 1100 may be implemented on a client 101, a source 103, a node 105, a server 107, a destination 109, a network element 300, and/or a network node 400. For example, the system 1100 can generate, transmit, process, and/or receive a packet, such as a New IP packet 500, including resource reservation request metadata 600 and/or resource reservation response metadata 700.
[0099] The system 1100 may be implemented as a source requesting a resource reservation. In such a case, the source includes a storing module 1105 for storing a QoS resource reservation request packet and/or a QoS resource reservation response packet. The source also includes a transmitting module 1107 for transmitting a QoS resource reservation request packet along a path to a destination via data plane signaling, wherein the QoS resource reservation request packet requests QoS resources be provisioned for a flow. The source also includes a receiving module 1101 for receiving a QoS resource reservation response packet indicating results of the QoS resource reservation request packet. The system 1100 may be further configured to perform any of the steps of method 800.
[00100] The system 1100 may be implemented as a node receiving a resource reservation request, such as a transit node and/or a destination. In such a case, the node includes a storing module 1105 for storing a QoS resource reservation request packet and/or a QoS resource reservation response packet. The node also includes a receiving module 1101 for receiving a QoS resource reservation request packet from a source via data plane signaling along a path, wherein the QoS resource reservation request packet requests QoS resources be provisioned for a flow. The node also includes a transmitting module 1107 for transmitting a QoS resource reservation response packet indicating results of the QoS resource reservation request packet toward the source. The system 1100 may be further configured to perform any of the steps of method 900.
[00101] The system 1100 may be implemented as a node receiving a resource reservation request, such as a transit node and/or a destination. In such a case, the node includes a storing module 1105 for storing a QoS resource reservation request packet and/or a QoS resource reservation response packet. The node also includes a receiving module 1101 for receiving a LGS resource reservation request packet via data plane signaling between a source and a destination, wherein the LGS resource reservation request packet requests provisioning of network element resources to guarantee an end-to-end latency for a flow. The node also includes a determining module 1103 for determining a total latency for the LGS resource reservation request packet by adding a maximum latency generated by the network element plus an accumulated total maximum latency metadata from the LGS resource reservation request packet, determining a deadline metadata from the LGS resource reservation request packet, and determining whether to admit the flow based on a comparison of the total latency and the deadline metadata. The node also includes a transmitting module 1107 for transmitting the LGS resource reservation request packet toward the destination. The system 1100 may be further configured to perform any of the steps of method 1000.
[00102] A first component is directly coupled to a second component when there are no intervening components, except for a line, a trace, or another medium between the first component and the second component. The first component is indirectly coupled to the second component when there are intervening components other than a line, a trace, or another medium between the first component and the second component. The term “coupled” and its variants include both directly coupled and indirectly coupled. The use of the term “about” means a range including ±10% of the subsequent number unless otherwise stated.
[00103] It should also be understood that the steps of the exemplary methods set forth herein are not necessarily required to be performed in the order described, and the order of the steps of such methods should be understood to be merely exemplary. Likewise, additional steps may be included in such methods, and certain steps may be omitted or combined, in methods consistent with various embodiments of the present disclosure.
[00104] While several embodiments have been provided in the present disclosure, it may be understood that the disclosed systems and methods might be embodied in many other specific forms without departing from the spirit or scope of the present disclosure. The present examples are to be considered as illustrative and not restrictive, and the intention is not to be limited to the details given herein. For example, the various elements or components may be combined or integrated in another system or certain features may be omitted, or not implemented.
[00105] In addition, techniques, systems, subsystems, and methods described and illustrated in the various embodiments as discrete or separate may be combined or integrated with other systems, components, techniques, or methods without departing from the scope of the present disclosure. Other examples of changes, substitutions, and alterations are ascertainable by one skilled in the art and may be made without departing from the spirit and scope disclosed herein.

Claims

What is claimed is:
1. A method implemented by a source, the method comprising: transmitting, by a transmitter of the source, a quality of service (QoS) resource reservation request packet along a path to a destination via data plane signaling, wherein the QoS resource reservation request packet requests QoS resources be provisioned for a flow; and receiving, by a receiver of the source, a QoS resource reservation response packet indicating results of the QoS resource reservation request packet.
2. The method of claim 1, wherein the QoS resource reservation request packet requests a committed information rate (CIR) from each node along the path to the destination.
3. The method of any of claims 1-2, wherein the QoS resource reservation request packet requests a latency guarantee service (LGS) with a total latency along the path that is smaller than a deadline.
4. The method of any of claims 1-3, wherein the QoS resource reservation request packet comprises metadata including a CIR field containing a CIR requested by the source and an allowed CIR field indicating a CIR continuously allowable along the path.
5. The method of any of claims 1-4, wherein the QoS resource reservation request packet comprises metadata containing a deadline field indicating a requested upper bound for an end-to-end latency between the source and the destination and a total latency field indicating an accumulated latency along the path.
6. The method of any of claims 1-5, wherein the QoS resource reservation request packet contains metadata including an admissible field indicating whether the flow is admitted at each hop along the path.
7. The method of any of claims 1-6, wherein the QoS resource reservation response packet contains metadata including a successful (S) field indicating whether LGS setup is successful for the flow.
8. The method of any of claims 1-7, wherein the QoS resource reservation response packet contains metadata including a flow priority field indicating a priority for the flow.
9. The method of any of claims 1-8, wherein the QoS resource reservation response packet contains metadata including a reverse path field indicating a reverse path, the reverse path being from the destination to the source.
10. The method of any of claims 1-9, wherein the QoS resource reservation response packet contains metadata including a recommended deadline field indicating a recommended deadline.
11. The method of any of claims 1-10, wherein the QoS resource reservation response packet contains metadata including a recommended CIR field indicating a recommended CIR.
12. The method of any of claims 1-11, wherein the QoS resource reservation request packet and the QoS resource reservation response packet are New Internet Protocol (New IP) packets.
13. The method of any of claims 1-12, further comprising transmitting, by the transmitter, a second QoS resource reservation request packet with a CIR requested by the source set based on the recommended CIR in the QoS resource reservation response packet when the QoS resource reservation response packet indicates that setup is not successful.
14. The method of any of claims 1-13, wherein the second QoS resource reservation request packet comprises a deadline set based on the recommended deadline in the QoS resource reservation response packet when the QoS resource reservation response packet indicates that setup is not successful.
15. A method implemented by a network element, the method comprising: receiving, by a receiver of the network element, a quality of service (QoS) resource reservation request packet from a source via data plane signaling along a path, wherein the QoS resource reservation request packet requests QoS resources be provisioned for a flow; and transmitting, by a transmitter of the network element, a QoS resource reservation response packet indicating results of the QoS resource reservation request packet toward the source.
16. The method of claim 15, wherein the QoS resource reservation request packet requests a committed information rate (CIR) from each node along the path from the source.
17. The method of any of claims 15-16, wherein the QoS resource reservation request packet requests a latency guarantee service (LGS) with a total latency along the path that is smaller than a deadline.
18. The method of any of claims 15-17, wherein the QoS resource reservation request packet comprises metadata including a CIR field containing a CIR requested by the source and an allowed CIR field indicating a CIR continuously allowable along the path.
19. The method of any of claims 15-18, wherein the QoS resource reservation request packet comprises metadata containing a deadline field indicating a requested upper bound for an end-to-end latency between the source and the destination and a total latency field indicating an accumulated latency along the path.
20. The method of any of claims 15-19, wherein the QoS resource reservation request packet contains metadata including an admissible field indicating whether the flow is admitted at each hop along the path.
21. The method of any of claims 15-20, wherein the QoS resource reservation response packet contains metadata including a successful (S) field indicating whether LGS setup is successful for the flow.
22. The method of any of claims 15-21, wherein the QoS resource reservation response packet contains metadata including a flow priority field indicating a priority for the flow.
23. The method of any of claims 15-22, wherein the QoS resource reservation response packet contains metadata including a reverse path field indicating a reverse path from the destination to the source.
24. The method of any of claims 15-23, wherein the QoS resource reservation response packet contains metadata indicating a recommended deadline set based on a value of the total latency in the QoS resource reservation request packet at the destination.
25. The method of any of claims 15-24, wherein the QoS resource reservation response packet contains metadata including a recommended CIR field indicating a recommended CIR set based on a value of the allowed CIR in the QoS resource reservation request packet at the destination.
26. The method of any of claims 15-25, wherein the QoS resource reservation request packet and the QoS resource reservation response packet are New Internet Protocol (New IP) packets.
27. The method of any of claims 15-26, further comprising receiving, by the receiver, a second QoS resource reservation request packet with a CIR requested by the source set based on the recommended CIR in the QoS resource reservation response packet when the QoS resource reservation response packet indicates that setup is not successful.
28. The method of any of claims 15-27, wherein the second QoS resource reservation request packet comprises a deadline set based on the recommended deadline in the QoS resource reservation response packet when the QoS resource reservation response packet indicates that setup is not successful.
29. A method implemented by a network element, the method comprising: receiving, by a receiver of the network element, a latency guarantee service (LGS) resource reservation request packet between a source and a destination, wherein the LGS resource reservation request packet requests provisioning of network element resources to guarantee an end-to-end latency for a flow; determining, by one or more processors of the network element, a total latency for the LGS resource reservation request packet by adding a latency generated by the network element to an accumulated total latency metadata from the LGS resource reservation request packet; updating the accumulated total latency metadata from the LGS resource reservation request packet with the total latency; and forwarding the LGS resource reservation request packet toward the destination.
30. The method of claim 29, further comprising: determining, by the processors, a requested committed information rate (CIR) metadata from the LGS resource reservation request packet; determining, by the processors, a total ingress rate and a total egress rate at the network element; and determining, by the processors, whether to admit the flow based on a comparison of the requested CIR metadata, the total ingress rate, and the total egress rate.
31. The method of claim 30, further comprising admitting the flow when the total latency for the LGS resource reservation request is less than or equal to the deadline metadata, when the requested CIR metadata does not exceed the total allocable ingress rate, and when the requested CIR metadata does not exceed the total allocable egress rate.
32. The method of claim 31, further comprising allocating resources for the flow and setting an admissible metadata in the LGS resource reservation request packet to an admitted value when the flow is admitted.
33. The method of claim 30, further comprising setting an admissible metadata in the LGS resource reservation request packet to a denied value when the total latency for the LGS resource reservation request is greater than the deadline metadata.
34. The method of claim 33, further comprising setting an allowed CIR metadata in the LGS resource reservation request to a lowest allowed CIR value along a path between the source and the network element when the requested CIR metadata exceeds the total ingress rate or the total egress rate.
35. The method of any of claims 29-34, further comprising: receiving an LGS response packet indicating results of the LGS resource reservation request packet; and releasing allocated resources for the flow when the LGS response packet indicates LGS setup is not successful.
36. The method of any of claims 29-35, wherein the LGS resource reservation request packet is received via data plane signaling.
37. The method of any of claims 29-36, further comprising determining, by the processors, a deadline metadata from the LGS resource reservation request packet.
38. The method of any of claims 29-37, further comprising determining, by the processors, whether to admit the flow based on a comparison of the total latency and the deadline metadata.
39. A non-transitory computer readable medium comprising a computer program product for use by a first node in a network, the computer program product comprising computer executable instructions stored on the non-transitory computer readable medium such that, when executed by a processor, the computer executable instructions cause the first node to perform the method of any of claims 1-38.
40. A source comprising: a transmitting means for transmitting a quality of service (QoS) resource reservation request packet along a path to a destination via data plane signaling, wherein the QoS resource reservation request packet requests QoS resources be provisioned for a flow; and a receiving means for receiving a QoS resource reservation response packet indicating results of the QoS resource reservation request packet.
41. The source of claim 40, further comprising a processing means, and wherein the receiving means, processing means, and transmitting means are configured to perform the method of any of claims 1-14.
42. A network element comprising: a receiving means for receiving a latency guarantee service (LGS) resource reservation request packet via data plane signaling between a source and a destination, wherein the LGS resource reservation request packet requests provisioning of network element resources to guarantee an end-to-end latency for a flow; a determining means for: determining a total latency for the LGS resource reservation request packet by adding a maximum latency generated by the network element to an accumulated total latency metadata from the LGS resource reservation request packet; determining a deadline metadata from the LGS resource reservation request packet; and determining whether to admit the flow based on a comparison of the total latency and the deadline metadata; and a transmitting means for transmitting the LGS resource reservation request packet toward the destination.
43. The network element of claim 42, further comprising a processing means, and wherein the receiving means, processing means, and transmitting means are configured to perform the method of any of claims 15-38.
44. A method implemented by a source, the method comprising: transmitting, by a transmitter of the source, a quality of service (QoS) resource reservation request packet, wherein the QoS resource reservation request packet is configured to cause nodes along a path from the source to a destination to perform the method of any of claims 15-36; and receiving, by a receiver of the source, a QoS resource reservation response packet indicating results of the QoS resource reservation request packet.
PCT/US2021/038276 2020-10-28 2021-06-21 In-band signaling for latency guarantee service (lgs) WO2021174236A2 (en)

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
US202063106632P 2020-10-28 2020-10-28
US63/106,632 2020-10-28
US202163165629P 2021-03-24 2021-03-24
US63/165,629 2021-03-24

Publications (3)

Publication Number Publication Date
WO2021174236A2 true WO2021174236A2 (en) 2021-09-02
WO2021174236A3 WO2021174236A3 (en) 2021-11-11
WO2021174236A8 WO2021174236A8 (en) 2023-04-20

Family

ID=76921337

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2021/038276 WO2021174236A2 (en) 2020-10-28 2021-06-21 In-band signaling for latency guarantee service (lgs)

Country Status (1)

Country Link
WO (1) WO2021174236A2 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2023216726A1 (en) * 2022-05-07 2023-11-16 华为技术有限公司 Various network transmission methods and devices related thereto

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
ATE448661T1 (en) * 2001-12-13 2009-11-15 Sony Deutschland Gmbh ADAPTIVE SERVICE QUALITY RESERVATION WITH PRIOR RESOURCE ALLOCATION FOR MOBILE SYSTEMS

Also Published As

Publication number Publication date
WO2021174236A8 (en) 2023-04-20
WO2021174236A3 (en) 2021-11-11

Legal Events

Date Code Title Description
NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 21742256

Country of ref document: EP

Kind code of ref document: A2