EP4264892A2 - Guaranteed-latency-based forwarding - Google Patents

Guaranteed-latency-based forwarding

Info

Publication number
EP4264892A2
Authority
EP
European Patent Office
Prior art keywords
data packet
packet
time
node
priority
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
EP21713836.1A
Other languages
English (en)
French (fr)
Inventor
Toerless Tobias Eckert
Alexander Clemm
Stewart Bryant
Uma S. Chunduri
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Huawei Technologies Co Ltd
Original Assignee
Huawei Technologies Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huawei Technologies Co Ltd filed Critical Huawei Technologies Co Ltd
Publication of EP4264892A2

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 47/00 Traffic control in data switching networks
    • H04L 47/10 Flow control; Congestion control
    • H04L 47/28 Flow control; Congestion control in relation to timing considerations
    • H04L 47/283 Flow control; Congestion control in relation to timing considerations in response to processing delays, e.g. caused by jitter or round trip time [RTT]
    • H04L 47/35 Flow control; Congestion control by embedding flow control information in regular packets, e.g. piggybacking
    • H04L 47/50 Queue scheduling
    • H04L 47/56 Queue scheduling implementing delay-aware scheduling
    • H04L 47/62 Queue scheduling characterised by scheduling criteria
    • H04L 47/625 Queue scheduling characterised by scheduling criteria for service slots or service orders
    • H04L 47/6275 Queue scheduling characterised by scheduling criteria for service slots or service orders based on priority

Definitions

  • the disclosure generally relates to communication networks, and more particularly, to the transmission of packets over networks.
  • Packets are formatted units of data carried by computing nodes, such as servers, routers or switches, in a computer network. Packets can be received at a node in different traffic streams and can have different priorities. Based on the node’s capabilities and the different priorities, the packets are scheduled for transmission to a next node. Typically, the end-to-end latency, or the time it takes for a packet to travel from a sending node to a receiving node, can vary. However, this can be problematic for some applications which require packets to be delivered at precise times.
  • an apparatus for use with a node in a network.
  • the apparatus can comprise circuitry which makes up part of a node, for example.
  • the apparatus comprises: a non-transitory memory storage comprising instructions; one or more input network interfaces configured to receive data packets; an output network interface configured to transmit data packets; and one or more processors in communication with the non-transitory memory storage and coupled to the one or more input network interfaces and the output network interface.
  • the one or more processors execute the instructions to: receive a data packet from the one or more input network interfaces; enforce an earliest allowable time for transmittal of the data packet to a next node in the network based on data extracted from an information field in the data packet; determine a transmittal time for the data packet which is no sooner than the earliest allowable time for transmittal; store an indication of a minimum delay for the data packet at the next node, based on the determined transmittal time, in the information field of the data packet; and transmit the data packet to the next node via the output network interface.
  • the one or more processors further execute the instructions to calculate: LATENCY - (Ttrans - Tarr - MinDelay), where LATENCY is a value known to the node, Ttrans is the determined transmittal time of the data packet, Tarr is an arrival time of the data packet, and MinDelay is a minimum delay identified by the data extracted from the information field in the data packet.
  • the one or more processors further execute the instructions to calculate: LATENCY - (Ttrans - Tarr - MinDelay) - SerDelay, where LATENCY is a value known to the node, Ttrans is the determined transmittal time of the data packet, Tarr is an arrival time of the data packet, MinDelay is a minimum delay identified by the data extracted from the information field in the data packet, and SerDelay, a serialization delay of the packet, is a size of the packet in bits divided by a transmittal rate in bits/sec of the output network interface.
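The calculation in the two claims above can be made concrete with a minimal Python sketch; the function name next_min_delay and the concrete time units are illustrative assumptions, not part of the claims:

```python
def next_min_delay(latency, t_trans, t_arr, min_delay, ser_delay=0.0):
    """Minimum delay to store in the packet for the next node.

    latency   -- per-hop LATENCY value known to the node
    t_trans   -- determined transmittal time at this node
    t_arr     -- arrival time at this node
    min_delay -- MinDelay extracted from the packet's information field
    ser_delay -- optional serialization delay (packet bits / link rate)
    """
    return latency - (t_trans - t_arr - min_delay) - ser_delay

# A packet arrives at t = 100 carrying MinDelay = 20, so its earliest
# allowable transmittal time is 120.  If scheduling lets it depart at 130,
# the 10 units of extra delay are subtracted from the 50-unit LATENCY:
d = next_min_delay(latency=50.0, t_trans=130.0, t_arr=100.0, min_delay=20.0)
# d == 40.0: the next node must hold the packet at least 40 time units.
```

Note how a longer wait at the current node (larger Ttrans) directly shrinks the delay forced at the next node, which is what keeps the per-hop latency constant.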
  • the node is preconfigured with the LATENCY before the arrival time of the data packet.
  • the one or more processors further execute the instructions to extract a PRIORITY value from an information field in the data packet and determine the LATENCY based on the PRIORITY value.
  • the LATENCY can be determined by reading it from a pre-configured table of per-PRIORITY latency values at an index of the PRIORITY value.
  • the one or more processors further execute the instructions to determine the PRIORITY value based on a sequence of per-hop priority information fields and a hop count index in the data packet; a value of the hop count index after receiving the data packet determines an index of an item in the sequence which is used as the PRIORITY value; and the hop count index is incremented in the data packet before the data packet is transmitted to the next hop.
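The per-hop priority lookup just described can be sketched as follows; the dict-based packet layout, the field names, and the pre-configured latency table are illustrative assumptions, not the patent's encoding:

```python
# Pre-configured table of per-PRIORITY latency values, indexed by PRIORITY.
LATENCY_BY_PRIORITY = [100.0, 50.0, 20.0]

def priority_and_latency(packet):
    """Read the PRIORITY for this hop and look up its LATENCY."""
    priority = packet["per_hop_priorities"][packet["hop_index"]]
    latency = LATENCY_BY_PRIORITY[priority]
    packet["hop_index"] += 1   # incremented before forwarding to the next hop
    return priority, latency

pkt = {"per_hop_priorities": [2, 1, 0], "hop_index": 0}
p, lat = priority_and_latency(pkt)   # first hop uses priority 2
```

Carrying one priority per hop lets each node on the path apply a different latency budget to the same packet.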
  • the one or more processors further execute the instructions to determine the minimum delay according to a clock which is asynchronous with a clock of the next node.
  • the one or more processors further execute the instructions to determine the transmittal time and the arrival time based on a clock whose timestamps are of only local significance and start to count time, e.g., from 0, from a time the node is powered on.
  • the data packet is in a traffic flow comprising a sequence of related packets; the data packet contains one or more fields that together constitute a traffic flow identifier, whose value is unique to packets of the traffic flow; and the one or more processors further execute the instructions to determine the minimum delay independently of the traffic flow identifier.
  • the one or more input network interfaces are configured to receive a plurality of traffic flows; and the LATENCY is no greater than a sum of burst sizes per packet and a maximum data packet size across the plurality of traffic flows divided by a rate at which data packets are serialized onto the output network interface.
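One plausible reading of the bound above, with all numbers and names as illustrative assumptions: LATENCY need be no greater than the burst backlog that can compete for the output interface (sum of per-flow burst sizes plus one maximum-size packet), serialized at the interface rate.

```python
burst_bits_per_flow = [12_000, 12_000, 24_000]   # per-flow bursts, in bits
max_packet_bits = 12_000                          # largest packet in any flow
rate_bps = 1_000_000_000                          # 1 Gb/s output interface

latency_bound_s = (sum(burst_bits_per_flow) + max_packet_bits) / rate_bps
# (48_000 + 12_000) bits / 1e9 bit/s = 60 microseconds
```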
  • the one or more processors execute the instructions to enqueue the data packet into a pre-existing scheduler, and the scheduler determines the transmittal time.
  • the one or more processors further execute the instructions to delay the data packet through a timed Push-In, First-Out (PIFO) queue, where the PIFO rank of the data packet is the earliest allowable time of transmittal, corresponding to a sum of the arrival time and a minimum delay identified by the data extracted from the information field in the data packet, and the head of the PIFO queue is served no earlier than its rank.
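A timed PIFO of this kind can be approximated in software with a priority heap; the sketch below is illustrative only (heapq standing in for PIFO hardware), not the patented implementation:

```python
import heapq
import itertools

class TimedPIFO:
    """Timed push-in, first-out queue: packets are inserted by rank (the
    earliest allowable transmittal time) and the head is never served
    before its rank."""

    def __init__(self):
        self._heap = []
        self._seq = itertools.count()   # tie-breaker for equal ranks

    def push(self, packet, t_arr, min_delay):
        rank = t_arr + min_delay        # earliest allowable transmittal time
        heapq.heappush(self._heap, (rank, next(self._seq), packet))

    def serve(self, now):
        """Return the head packet once its rank is due, else None."""
        if self._heap and self._heap[0][0] <= now:
            return heapq.heappop(self._heap)[2]
        return None

q = TimedPIFO()
q.push("A", t_arr=100, min_delay=30)    # rank 130
q.push("B", t_arr=100, min_delay=10)    # rank 110, so it is served first
first = q.serve(now=115)                # "B" is due at 110; "A" must wait
```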
  • the one or more processors further execute the instructions to: a) utilize a separate timed Push-In, First-Out (PIFO) queue for each value of PRIORITY, b) select the PIFO into which to insert the packet based on the PRIORITY, where c) a PIFO rank of the packet is the earliest allowable time of transmittal, corresponding to a sum of the arrival time and a minimum delay identified by the data extracted from the information field in the data packet, and d) a head of each timed PIFO is served no earlier than its rank, and only when there is no timed PIFO of higher priority whose head can be served.
  • the one or more processors further execute the instructions to: a) utilize a separate timed FIFO for each combination (PRIORITY, INTERFACE) of the possible values of PRIORITY and a valid incoming INTERFACE index, b) select the FIFO into which to insert the packet from the PRIORITY and the interface from which the packet was received, where c) the timed FIFO rank of the packet is the earliest time of transmittal, corresponding to a sum of the arrival time and a minimum delay identified by the data extracted from the information field in the data packet, and d) the head of a timed FIFO is served when: i) a rank of the head of the FIFO is equal to or earlier than a current time, ii) there is no FIFO whose PRIORITY is higher and that can be served, and iii) there is no FIFO of equal PRIORITY whose head’s rank is earlier.
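The serving rule for the per-(PRIORITY, INTERFACE) timed FIFOs can be sketched as below; the class name and the lower-number-is-higher-priority convention are assumptions for illustration:

```python
from collections import deque

class TimedFIFOSet:
    """One timed FIFO per (PRIORITY, INTERFACE) pair.  Lower numbers mean
    higher priority in this sketch."""

    def __init__(self):
        self._fifos = {}   # (priority, interface) -> deque of (rank, packet)

    def push(self, packet, priority, interface, t_arr, min_delay):
        # Rank = earliest allowable transmittal time = arrival + minimum delay.
        key = (priority, interface)
        self._fifos.setdefault(key, deque()).append((t_arr + min_delay, packet))

    def serve(self, now):
        """Serve a head only if its rank is due, no higher-priority FIFO can
        be served, and no equal-priority FIFO has an earlier due head."""
        due = []
        for (prio, iface), q in self._fifos.items():
            if q and q[0][0] <= now:
                due.append((prio, q[0][0], (prio, iface)))
        if not due:
            return None
        _, _, key = min(due)   # highest priority first, then earliest rank
        return self._fifos[key].popleft()[1]

s = TimedFIFOSet()
s.push("low", priority=1, interface=0, t_arr=0, min_delay=5)    # rank 5
s.push("high", priority=0, interface=1, t_arr=0, min_delay=10)  # rank 10
```

Per-interface FIFOs keep packets from the same input in order while still letting the scheduler pick the most urgent due head across interfaces.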
  • the data extracted from the information field comprises an extra delay experienced at the node.
  • the data extracted from the information field comprises a minimum delay value.
  • a method for use with a node in a network. The method comprises: receiving a data packet from one or more input network interfaces; enforcing an earliest allowable time for transmittal of the data packet to a next node in the network based on data extracted from an information field in the data packet; determining a transmittal time for the data packet which is no sooner than the earliest allowable time for transmittal; storing an indication of a minimum delay in the information field of the data packet; and transmitting the data packet to the next node via an output network interface.
  • the method further comprises calculating the indication of the minimum delay, where the calculating comprises calculating: LATENCY - (Ttrans - Tarr - MinDelay), where LATENCY is a value known to the node, Ttrans is the determined transmittal time of the data packet, Tarr is an arrival time of the data packet, and MinDelay is a minimum delay identified by the data extracted from the information field in the data packet.
  • the method further comprises calculating the indication of the minimum delay, where the calculating comprises calculating: LATENCY - (Ttrans - Tarr - MinDelay) - SerDelay, where LATENCY is a value known to the node, Ttrans is the determined transmittal time of the data packet, Tarr is an arrival time of the data packet, MinDelay is a minimum delay identified by the data extracted from the information field in the data packet, and SerDelay is a serialization delay of the packet.
  • the method further comprises pre-configuring the node with the LATENCY before the arrival time of the data packet.
  • the method further comprises extracting a PRIORITY value from an information field in the data packet and determining the LATENCY based on the PRIORITY value.
  • the method further comprises determining the PRIORITY value based on a sequence of per-hop priority information fields and a hop count index in the data packet, wherein a value of the hop count index after receiving the data packet determines an index of an item in the sequence which is used as the PRIORITY value, and the hop count index is incremented in the data packet before the data packet is transmitted to the next hop.
  • the method further comprises determining the minimum delay according to a clock which is asynchronous with a clock of the next node.
  • the method further comprises determining the transmittal time and an arrival time based on a clock whose timestamps are of only local significance and start to count time from a time the node is powered on.
  • the method further comprises determining the minimum delay independently of a traffic flow identifier, wherein the data packet is in a traffic flow comprising a sequence of related packets, and the data packet contains one or more fields that together constitute the traffic flow identifier, whose value is unique to packets of the traffic flow.
  • the one or more input network interfaces are configured to receive a plurality of traffic flows, and the method further comprises determining LATENCY as a value which is no greater than a sum of burst sizes per packet and a maximum data packet size across the plurality of traffic flows divided by a rate at which data packets are serialized onto the output network interface.
  • the method further comprises, after the enforcing of the earliest allowable time for transmittal of the data packet, enqueuing the data packet into a pre-existing scheduler, wherein the scheduler determines the transmittal time.
  • the enforcing of the earliest allowable time for transmittal comprises delaying the data packet through a timed Push-In, First-Out (PIFO) queue, where the PIFO rank of the data packet is the earliest allowable time of transmittal, corresponding to a sum of an arrival time and a minimum delay identified by the data extracted from the information field in the data packet, and the head of the PIFO queue is served no earlier than its rank.
  • the method further comprises: utilizing a separate timed Push-In, First-Out (PIFO) queue for each value of PRIORITY; selecting the PIFO into which to insert the packet based on the PRIORITY, where a PIFO rank of the packet is the earliest allowable time of transmittal, corresponding to a sum of an arrival time and a minimum delay identified by the data extracted from the information field in the data packet; and serving a head of each timed PIFO no earlier than its rank, when there is no timed PIFO of higher priority whose head can be served.
  • the method further comprises: utilizing a separate timed FIFO for each combination (PRIORITY, INTERFACE) of possible values of PRIORITY and a valid incoming INTERFACE index; selecting the FIFO into which to insert the packet from the PRIORITY and the interface from which the packet was received, where the timed FIFO rank of the packet is the earliest time of transmittal, corresponding to a sum of an arrival time and a minimum delay identified by the data extracted from the information field in the data packet; and serving a head of a timed FIFO when a rank of the head of the FIFO is equal to or earlier than a current time, there is no FIFO whose PRIORITY is higher and that can be served, and there is no FIFO of equal PRIORITY whose head’s rank is earlier.
  • the data extracted from the information field comprises an extra delay experienced at the node.
  • an apparatus comprises: one or more input network interfaces at a current node in a network, the one or more input network interfaces are configured to receive a data packet, the data packet comprising an indication of a first minimum delay to be enforced at the current node before transmitting the data packet to a next node in the network; and an output network interface at the current node, the output network interface is configured to transmit the data packet to the next node at a time which is no earlier than an earliest allowable time for transmitting the data packet at the current node based on the first minimum delay, the transmitted data packet comprises an indication of a second minimum delay to be enforced at the next node before transmitting the data packet in the network.
  • the second minimum delay provides a specified latency between the earliest allowable time for transmitting the data packet at the current node and an earliest allowable time for transmitting the data packet at the next node.
  • the data packet comprises a priority value; and the specified latency is based on the priority value.
  • the data packet comprises a hop count index; and the priority value is based on the hop count index.
  • the second minimum delay is a decreasing function of a difference between a transmittal time for the data packet at the current node and the earliest allowable time for transmitting the data packet at the current node.
  • the indication of the first minimum delay comprises an information field in the data packet storing the first minimum delay.
  • the indication of the first minimum delay comprises an information field in the data packet storing an extra delay.
  • a method comprises: receiving a data packet at one or more input network interfaces of a current node in a network, the data packet comprising an indication of a first minimum delay to be enforced at the current node before transmitting the data packet to a next node in the network; and at an output network interface at the current node, transmitting the data packet to the next node at a time which is no earlier than an earliest allowable time for transmitting the data packet at the current node based on the first minimum delay, the transmitted data packet comprises an indication of a second minimum delay to be enforced at the next node before transmitting the data packet in the network.
  • a non-transitory memory storage comprises instructions for causing a computer to perform the steps of: receiving a data packet from an input network interface; enforcing an earliest allowable time for transmittal of the data packet to a next node in a network based on data extracted from an information field in the data packet; determining a transmittal time for the data packet which is no sooner than the earliest allowable time for transmittal; storing an indication of a minimum delay for the data packet at the next node, based on the determined transmittal time, in the information field of the data packet; and transmitting the data packet to the next node via an output network interface.
  • FIG. 1A illustrates a range of delays between a lower bound LB and an upper bound UB.
  • FIG. 1C illustrates an example of a minimum latency guarantee, which has a lower bound LB but no upper bound UB on latency.
  • FIG. 1D illustrates a best effort model, where there is no guarantee on the upper bound UB.
  • FIG. 1E illustrates an example of an on-time guarantee, where the lower bound LB is equal to the upper bound, or, generalizing, there is a relatively narrow range of time between LB and UB.
  • FIG. 2 illustrates an example communication system for communicating data.
  • FIG. 3A illustrates an example network 300 comprising a series of nodes.
  • FIG. 3B is a schematic diagram illustrating example details of a node 400, such as any of the nodes in the network of FIG. 3A.
  • FIG. 4A depicts an example of delays experienced by a packet at Node(i) and Node(i+1), including scheduling delays SchedDelay(i) and SchedDelay(i+1), respectively, and a corresponding hop latency, Hop_Latency(i) ≤ MAX + non-scheduling latency.
  • FIG. 4B1 depicts the time period MAX consistent with FIG. 4B.
  • FIG. 4B2 depicts example process steps for determining MinDelay(i+1) in FIG. 4B.
  • FIG. 4C depicts an example of delays experienced by a packet at Node(i) and Node(i+1), including forced minimum delays, MinDelay(i) and MinDelay(i+1), respectively, additional delays, AddDelay(i) and AddDelay(i+1), respectively, scheduling delays, SchedDelay(i) and SchedDelay(i+1), respectively, and a corresponding hop latency, Hop_Latency(i).
  • FIG. 4C1 depicts the time period LATENCY(i) consistent with FIG. 4C.
  • FIG. 4C2 depicts example process steps for determining MinDelay(i+1) in FIG. 4C.
  • FIG. 4D depicts an example of delays experienced by a packet at Node(i) and Node(i+1), including forced minimum delays, MinDelay(i) and MinDelay(i+1), respectively, additional delays, AddDelay(i) and AddDelay(i+1), respectively, scheduling delays, SchedDelay(i) and SchedDelay(i+1), respectively, serialization delays, SerDelay(i) and SerDelay(i+1), respectively, and a corresponding hop latency, Hop_Latency(i).
  • FIG. 4D1 depicts the time period LATENCY(i) consistent with FIG. 4D.
  • FIG. 4D2 depicts example process steps for determining MinDelay(i+1) in FIG. 4D.
  • FIG. 5A is a flowchart of an example process for latency-based forwarding of packets, consistent with FIGS. 4B to 4C2.
  • FIG. 5B is a flowchart of an example process for performing steps 503 and 504 of FIG. 5A using a Push-In, First-Out (PIFO) buffer.
  • FIG. 5C is a flowchart of an example process for performing steps 503 and 504 of FIG. 5A using a First-In, First-Out (FIFO) buffer.
  • FIG. 6A depicts an example format of a packet consistent with the processes of FIGS. 5A to 5C.
  • FIG. 6B depicts an example format of the table field 618 of the packet of FIG. 6A.
  • FIG. 7 depicts an example of processes performed by Node(i) and Node(i+1) of FIG. 3A, consistent with FIG. 5A.
  • FIG. 8 depicts an example of processes performed by Node(i) of FIG. 3A, consistent with FIGS. 5A, 5B and 6A, where PIFO buffers are used.
  • FIG. 9 depicts an example of processes performed by Node(i) of FIG. 3A, consistent with FIGS. 5A, 5C and 6A, where FIFO buffers are used.
  • FIG. 10 depicts an example of PIFO buffers, consistent with FIG. 8.
  • FIG. 11 depicts an example of FIFO buffers, consistent with FIG. 9.
  • Networks which provide deterministic, guaranteed end-to-end latencies with minimal latency fluctuations (jitter) and packet loss are useful in many applications.
  • Example applications include Virtual Reality/Augmented Reality (VR/AR), which has stringent limits on the maximum motion-to-photon time, since longer delays can cause dizziness, reduce the quality of experience, and severely reduce user acceptance.
  • Other examples include the Tactile Internet, which has stringent delay limits for haptic feedback, as sluggish control or a lack of the sensation of being in control would make many applications infeasible; vehicle-to-everything (V2X) communications; the Internet of Things (IoT); robot collaboration (e.g., robots collaborating to lift an object); and high-precision measurements (e.g., remote polling at exact intervals).
  • FIGS. 1A-1E illustrate the concept of latency-based forwarding of packets for various SLOs. Each figure depicts an allowable delay for the delivery of a packet over a network from a sending network device (or node) to a receiving network device. This can refer to a delay between adjacent nodes in a succession of nodes or to a delay between a sending node, which originates the packet, and an end node, which is the end user of the packet.
  • FIG. 1A illustrates a range of delays between a lower bound (“LB”) and an upper bound (“UB”).
  • LB and UB are the earliest and latest times, respectively, at which a packet is allowed to reach its destination.
  • a non-zero LB can be useful to avoid overloading a receiving node with too many packets at once from a sending node.
  • the UB ensures that packets are received while their data is still timely for the receiving node and that an application using the packets is sufficiently responsive.
  • if the range between LB and UB is too large, the packet delay can vary widely, resulting in jitter or other inconsistent performance.
  • FIG. 1C illustrates an example of a minimum latency guarantee, which has a lower bound LB but no upper bound UB on latency. In this case, there is a non-zero minimum latency but no maximum latency.
  • a disadvantage is that the packets may not be received while their data is still timely for the receiving node.
  • FIG. 1D illustrates a best effort model, where there is no guarantee on the upper bound UB. There is also no minimum LB. This approach can suffer from the drawbacks mentioned above.
  • FIG. 1E illustrates an example of an on-time guarantee, where the lower bound LB is equal to the upper bound, or, generalizing, there is a relatively narrow range of time between LB and UB.
  • the relatively narrow range of time can be, e.g., up to 1%, 5% or 10% of a baseline value.
  • This range of time represents an allowable packet delay variation or jitter, and can be set as a quality of service parameter.
  • the allowable jitter can vary based on the application. For instance, in an example medical application, tele-surgery with haptic feedback may require a latency of 3-10 msec with a jitter of less than 2 msec.
  • a power grid system may require a latency of approximately 8 msec with a jitter of a few μsec.
  • a few μsec may refer to up to 5 or 10 μsec, for example.
  • a high-frequency trading system may require a latency of less than 1 msec and a jitter of a few μsec.
  • an Avionics Full-Duplex Switched Ethernet (AFDX) network may require a latency of 1-128 msec with a jitter of a few μsec.
  • an advanced driver assistance system may require a latency of 100-250 μsec.
  • a power train chassis control system may require a latency of less than 10 μsec, both with a jitter of a few μsec.
  • an augmented reality system may require a latency of 7-20 msec.
  • a professional audio/video system may require a latency of 2-50 msec, both with a jitter of a few μsec.
  • a desired latency can be implemented for the transfer of a data packet between nodes.
  • the desired latency can be, e.g., a specified or predetermined latency.
  • a minimum delay and a corresponding earliest allowable time for transmission, can be enforced at each node when the packet is received.
  • the minimum delay can be carried in an information field in the packet.
  • a new value for the minimum delay is calculated and stored in the field for use by the next node.
  • the minimum delay can be considered to be a forced delay since the packet is forced to wait at least for this time period before it can be transmitted.
  • the minimum delay need not be expressly carried in the packet.
  • the packet could carry an indication of an extra delay already experienced by the packet (in addition to the forced minimum delay), which can then be used to determine the minimum delay at the next node, as further described below.
  • the extra delay could include an additional delay, AddDelay, a scheduling delay, SchedDelay, and a serialization delay, SerDelay, such as depicted in FIG. 4D, for example.
  • the extra delay could be Ttrans(i) - Tearliest(i), or Ttrans(i) - Tearliest(i) + SerDelay(i) in FIG. 4D, for example.
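Under this alternative encoding, the receiving node can recover the minimum delay to enforce locally from the carried extra delay; a one-line sketch (the function name is illustrative, not from the patent text):

```python
def min_delay_from_extra(latency, extra_delay):
    """Derive the local forced minimum delay from the extra delay the
    packet reports having already experienced upstream."""
    return latency - extra_delay

# A 50-unit hop LATENCY minus 10 units of upstream extra delay leaves a
# forced minimum delay of 40 units at this node.
```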
  • the latency can represent a time period between corresponding processing points in successive nodes, such as Node(i) and Node(i+1) in FIG. 3A.
  • the latency, LATENCY(i), can encompass an additional delay, AddDelay(i), and a time used in scheduling a packet for transmission, SchedDelay(i), at Node(i) and a minimum delay, MinDelay(i+1), which is enforced at Node(i+1) when the packet is received.
  • the additional delay can involve a time used in transferring packets within the Node(i), for example. For example, this can be the time used in transferring the packet from the delay circuit 425 to the queue 440 in FIG. 3B.
  • the scheduling delay can vary based on factors such as the number of other packets competing for limited transmission bandwidth at the node.
  • when the packet experiences a longer delay at the current node, the minimum delay is reduced correspondingly for the next node.
  • when the packet experiences a shorter delay at the current node, the minimum delay is increased correspondingly for the next node.
  • the latency can be set at each node based on a respective pre-configured value or based on information carried in the packet. In some cases, the latency can be based on a priority field in the packet, a hop count index field in the packet, and/or an input network interface at which the packet is received at a node.
  • the minimum delay can be enforced in various ways. In one approach, an earliest allowable transmittal time is calculated for a packet when it is received. In another approach, a countdown timer is used.
  • the packet is delayed for the minimum delay period and then scheduled for transmission in a process which competes with other packets for access to an output network interface.
  • the packet can be stored in a buffer with a dequeue time corresponding to the earliest allowable time for transmission. Once the dequeue time is reached, the packet is scheduled for transmission on the output network interface.
  • FIG. 2 illustrates an example communication system 100 for communicating data.
  • the communication system 100 includes, for example, user equipment 110A, 110B, and 110C, radio access networks (RANs) 120A and 120B, a core network 130, a public switched telephone network (PSTN) 140, the Internet 150, and other networks 160.
  • Additional or alternative networks include private and public data-packet networks, including corporate intranets. While certain numbers of these components or elements are shown in the figure, any number of these components or elements may be included in the system 100.
  • the communication system 100 can include a wireless network, which may be a fifth generation (5G) network including at least one 5G base station which employs orthogonal frequency-division multiplexing (OFDM) and/or non-OFDM and a transmittal time interval (TTI) shorter than 1 millisecond (e.g., 100 or 200 microseconds), to communicate with the user equipment.
  • a base station may be an evolved Node B (eNB) in a fourth generation (4G) Long Term Evolution (LTE) network, or a next generation Node B (gNB) in a fifth generation (5G) New Radio (NR) network.
  • the network may further include a network server for processing information received from the communication devices via the BS.
  • System 100 enables multiple users to transmit and receive data and other content.
  • the system 100 may implement one or more channel access methods, such as but not limited to code division multiple access (CDMA), time division multiple access (TDMA), frequency division multiple access (FDMA), orthogonal FDMA (OFDMA), or single-carrier FDMA (SC-FDMA).
  • the user equipment (UE) 110A, 110B, and 110C is configured to operate and/or communicate in the system 100.
  • the UE can be configured to transmit and/or receive wireless or wired signals.
  • Each UE represents any suitable end user device and may include such devices (or may be referred to) as a user equipment/device, wireless transmit/receive unit, mobile station, fixed or mobile subscriber unit, pager, cellular telephone, personal digital assistant (PDA), smartphone, laptop, computer, touchpad, wireless sensor, wearable device, consumer electronics device, device-to-device (D2D) user equipment, machine type user equipment or user equipment capable of machine-to-machine (M2M) communication, iPad, tablet, mobile terminal, laptop embedded equipment (LEE), laptop mounted equipment (LME), USB dongle, or other non-limiting examples of user equipment.
  • the RANs 120A and 120B include one or more base stations (BSs) 170A and 170B, respectively.
  • Each of the BSs is configured to wirelessly interface with one or more of the UEs to enable access to the core network 130, the PSTN 140, the Internet 150, and/or the other networks 160.
  • the BSs may include one or more of a base transceiver station (BTS), a Node-B (NodeB), an evolved NodeB (eNB), a next (fifth) generation (5G) NodeB (gNB), a Home NodeB, a Home eNodeB, a site controller, an access point (AP), or a wireless router, or a server, router, switch, or other processing entity within a wired or wireless network.
  • the BS 170B forms part of the RAN 120B, which may include one or more other BSs, elements, and/or devices.
  • Each of the BSs operates to transmit and/or receive wireless signals within a particular geographic region or area, sometimes referred to as a “cell.”
  • multiple-input multiple-output (MIMO) technology may be employed having multiple transceivers for each cell.
  • the BSs 170A and 170B communicate with one or more of the UEs 110A- 110C over one or more air interfaces using wireless communication links.
  • the air interfaces may utilize any suitable radio access technology.
  • the BSs and UEs are configured to implement the LTE communication standard (LTE), LTE Advanced (LTE-A), and/or LTE Multimedia Broadcast Multicast Service (MBMS).
  • the base stations 170A and 170B and user equipment 110A-110C are configured to implement Universal Mobile Telecommunications Service (UMTS), High Speed Packet Access (HSPA), or HSPA+ standards and protocols. Of course, other multiple access schemes and wireless protocols may be utilized.
  • the RANs are in communication with the core network 130 to provide the UEs 110A-110C with voice, data, application, Voice over Internet Protocol (VoIP), or other services.
  • the RANs and/or the core network 130 may be in direct or indirect communication with one or more other RANs (not shown).
  • the core network 130 may also serve as a gateway access for other networks (such as PSTN 140, Internet 150, and other networks 160).
  • some or all of the UEs 110 may include functionality for communicating with different wireless networks over different wireless links using different wireless technologies and/or protocols.
  • the RANs may also include millimeter and/or microwave access points (APs).
  • the APs may be part of the BSs or may be located remote from the BSs.
  • the APs may include, but are not limited to, a connection point (an mmW or millimeter wave CP) or a BS capable of mmW communication (e.g., a mmW base station).
  • the mmW APs may transmit and receive signals in a frequency range, for example, from 24 GHz to 100 GHz, but are not required to operate throughout this range.
  • the term base station is used to refer to a base station and/or a wireless access point.
  • FIG. 2 illustrates one example of a communication system
  • the communication system 100 could include any number of user equipment, base stations, networks, or other components in any suitable configuration.
  • the term user equipment may refer to any type of wireless device communicating with a radio network node in a cellular or mobile communication system.
  • the networks 130, 140, 150 and/or 160 can be packet-switched networks which transfer formatted units of data in packets between nodes, as described further below.
  • the embodiments presented below involve the transmission of such packets over networks and the management of latencies of such transmissions.
  • FIG. 3A illustrates an exemplary network 300 comprising a series of nodes.
  • the nodes include Node(i-1) through Node(i+3).
  • a packet is transmitted from Node(i-1) as a sending node and received at Node(i+3) as a receiving node.
  • the packet passes through Node(i), Node(i+1) and Node(i+2) in turn before reaching Node(i+3).
  • the sending and receiving nodes can be servers or other end computing devices, for example.
  • the intermediate nodes, Node(i) through Node(i+2), can be routers, networking switches, servers, or other networking devices which forward the packet from the sending node to the receiving node in turn, one node at a time.
  • a data link connects each successive pair of nodes.
  • a data link L1 connects Node(i-1) and Node(i)
  • a data link L2 connects Node(i) and Node(i+1)
  • a data link L3 connects Node(i+1) and Node(i+2)
  • a data link L4 connects Node(i+2) and Node(i+3).
  • Each data link can comprise a wired or wireless communication path.
  • each link has a respective known latency, although the link latency does not have to be known for the techniques disclosed herein to be used.
  • the propagation time on a link for consecutive packets can vary by a small, known amount, such as ±1% relative to a baseline delay. The propagation time is depicted in FIG. 4D as Prop., for example.
  • a network control/management plane 310 can be communicatively coupled to each of the nodes, as represented in dashed lines.
  • the control/management plane can be used to perform a variety of control path and/or control plane functions, such as implementing routing and label distribution protocols used to forward packets.
  • a current node represents a node which is currently processing the packet.
  • the prior node is the node from which the current node receives the packet.
  • the next node is the node to which the current node transmits the packet.
  • FIG. 3B is a schematic diagram illustrating exemplary details of a node 400, such as any of the nodes in the network of FIG. 3A.
  • the node can be configured to implement embodiments of the present technology disclosed herein.
  • the node 400 may comprise a number of input network interfaces 410a, 410b and 410c and an output network interface 430. It is possible to have multiple output network interfaces as well. In this configuration, multiple traffic flows can be received concurrently on the different input network interfaces, resulting in contention for access to the output network interface.
  • Each input network interface can be identified by a respective INTERFACE index, e.g., 1 , 2 or 3 in this case for input network interfaces 410a, 410b and 410c, respectively.
  • a traffic flow in a packet switching network can refer to a sequence of related packets which are sent from a sending node to one or more receiving nodes.
  • a traffic flow could include packets in a transport connection such as Transmission Control Protocol (TCP) or User Datagram Protocol (UDP).
  • a traffic flow could also refer to a sequence of related packets in a media stream such as a video or audio stream.
  • the traffic flow to which a packet belongs can be identified from one or more information fields in the packet, typically in the header of the packet. For example, see the Traffic flow id field 619 in FIG. 6A.
  • Although the input and output network interfaces are shown as being separated into an input section and an output section, each interface could be used for input or output operations. Similarly, although a separate receiver 412 and transmitter 450 are depicted, these functions could be combined in a transceiver.
  • the packets received at the input network interfaces are provided to a receiver 412 and then to a processor 420.
  • the receiver can perform tasks such as deserializing the packet from a series of bits into a data structure which can be stored in the storage component 422.
  • the processor 420 can comprise one or more processing circuits 421 and a storage component 422. In one approach, one processing circuit is provided on an ingress line card, and another processing circuit is provided on an egress line card.
  • the storage component 422 can be based on available memory technologies, and may comprise a cache 422a, e.g., a volatile RAM such as SRAM or DRAM, and a long-term storage component 422b, e.g., a non-volatile memory such as flash NAND memory or other memory technology.
  • the storage component 422 can be used for storing both data and instructions for implementing the packet forwarding techniques described here.
  • the storage component may be an example of a non-transitory storage device or a non-transitory memory storage.
  • the processing circuit may be configured to execute the instructions to perform the processes described herein.
  • the delay circuit 425 can enforce a minimum delay for a packet before the packet can be transmitted to the next node. In one approach, the delay circuit enforces a minimum delay by storing the packet until an earliest allowable time for transmittal, Tearliest.
  • the delay circuit may comprise a PIFO buffer, for example.
  • a Push-In, First-Out (PIFO) buffer or queue is a priority queue data structure that allows packets to be "pushed" or enqueued into an arbitrary location in the queue, but only dequeued from the head of the queue. For a delay circuit, the packet can be scheduled to be dequeued at a time corresponding to the desired delay.
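The PIFO behavior described above can be sketched with a rank-ordered queue. The following is a minimal illustration, not taken from the patent itself; the class and variable names are invented, and the scheduled dequeue time is used as the rank:

```python
import heapq
import itertools

class PIFO:
    """Push-In, First-Out queue: packets are enqueued at an arbitrary
    rank (here, a scheduled dequeue time) but only dequeued from the head."""

    def __init__(self):
        self._heap = []
        self._seq = itertools.count()  # tie-breaker keeps insertion order stable

    def push(self, rank, packet):
        # "Push in": the packet lands wherever its rank places it.
        heapq.heappush(self._heap, (rank, next(self._seq), packet))

    def pop(self, now):
        # "First out": only the head may leave, and only once its
        # scheduled dequeue time (rank) has been reached.
        if self._heap and self._heap[0][0] <= now:
            return heapq.heappop(self._heap)[2]
        return None

pifo = PIFO()
pifo.push(rank=30, packet="late")
pifo.push(rank=10, packet="early")
assert pifo.pop(now=5) is None      # head not yet eligible for dequeue
assert pifo.pop(now=10) == "early"  # lowest rank leaves first
```

A delay circuit built this way holds each packet until its rank (the desired earliest transmittal time) has passed, regardless of arrival order.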
  • the packet can then be transferred to one of the queues 440 to be scheduled for output on the output I/F 430.
  • the packet is stored in the queue until the output I/F is free and the packet has priority for transmittal.
  • This is an example of a two-stage queuing process.
  • alternatively, the delay circuit 425 is not used and the packet is stored in one of the queues 440, where it waits for Tearliest and may then wait further until the output I/F is free and the packet has priority for transmittal. This is an example of a single-stage queuing process.
  • the queues can comprise one or more queues, also referred to as buffers, such as Push-In, First-Out (PIFO) or First-In, First-Out (FIFO) queues. See also FIGS. 10 and 11.
  • the queues generally store packets from the multiple input traffic streams until the packets can be transmitted based on scheduling techniques implemented by the scheduler 426.
  • Various scheduling techniques can be used, such as round robin if the traffic flows have equal priority, or weighted round-robin if the traffic flows have different priorities. Other, more advanced scheduling techniques are possible as well.
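The weighted round-robin option mentioned above can be sketched as follows; the flow names, packet labels, and weights are illustrative assumptions, not taken from the patent:

```python
from collections import deque

def weighted_round_robin(flows, weights):
    """Serve packets from competing flows in proportion to their weights:
    a weight of 2 means the flow may send two packets per round."""
    queues = {name: deque(pkts) for name, pkts in flows.items()}
    order = []
    while any(queues.values()):
        for name, weight in weights.items():
            for _ in range(weight):
                if queues[name]:
                    order.append(queues[name].popleft())
    return order

# flow A has twice the priority (weight) of flow B
out = weighted_round_robin(
    flows={"A": ["a1", "a2", "a3"], "B": ["b1", "b2"]},
    weights={"A": 2, "B": 1},
)
# flow A gets two transmission slots per round, flow B one
```

With equal weights this degenerates to plain round-robin, matching the equal-priority case in the text.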
  • the scheduler can be a real-time per-packet scheduler that releases a data packet no earlier than the earliest allowable time for transmittal.
  • the transmitter 450 can perform tasks such as serializing the packet into a series of bits for transmission on a data link.
  • the storage component 422 can comprise a non-transitory memory storage storing computer readable instructions that are executed by the processor 420, or generally, by one or more processors, to implement embodiments of the present technology. It is also possible for embodiments of the present technology described below to be implemented, at least partially, using hardware logic components, such as, but not limited to, Field-programmable Gate Arrays (FPGAs), Application-specific Integrated Circuits (ASICs), Application-Specific Standard Products (ASSPs), System-on-a-chip systems (SOCs), Complex Programmable Logic Devices (CPLDs), special purpose computers, and so forth.
  • FIG. 4A depicts an example of delays experienced by a packet at Node(i) and Node(i+1), including scheduling delays SchedDelay(i) and SchedDelay(i+1), respectively, and a corresponding hop latency, Hop_Latency(i) ≤ MAX + non-scheduling latency.
  • at Node(i), the packet is received at an arrival time, Tarr(i), and transmitted at a transmittal time, Ttrans(i). Subsequently, at Node(i+1), the packet is received at an arrival time, Tarr(i+1), and transmitted at a transmittal time, Ttrans(i+1), and so forth.
  • the scheduling delays are known or are assumed to be less than or equal to a maximum value, MAX. In the examples below, where MinDelay is used, if the scheduling delays at Node(i) are too large, the MinDelay(i+1) at Node(i+1) can be set to 0, but this is not enough to compensate for the scheduling delays, so that the desired node-to-node latency is not achieved.
  • SchedDelay(i) represents a time in which the packet waits to be transmitted on the output network interface, e.g., due to competition with other packets.
  • when there is more competing traffic, SchedDelay(i) becomes longer. This would result in a smaller MinDelay(i+1), as described later with respect to FIG. 4B.
  • SchedDelay(i+1) is typically independent of SchedDelay(i). For example, in one possible situation, the competing traffic flows causing a high SchedDelay(i) at Node(i) do not affect SchedDelay(i+1). Instead, other competing traffic arriving at Node(i+1) affects SchedDelay(i+1).
  • the hop latency is less than or equal to the sum of MAX and non-scheduling latencies such as a serialization delay and a propagation time of the packet on the link between the nodes. This is an example of an in-time guarantee as in FIG. 1B.
  • the non-scheduling latency refers to the latency or delay components of Hop_Latency(i) other than SchedDelay(i) and can include SerDelay and the propagation time of the packet between the nodes, among other possible delays.
  • a minimum delay, MinDelay(i), is enforced before the packet is allowed to be transmitted.
  • the minimum delay can be set by the prior node from which the packet is received, and can be read or extracted from an information field in the header of the packet. See the field 613 in FIG. 6A, for example.
  • To delay the packet it can be stored in the delay circuit 425 and/or one of the queues 440, for example.
  • the packet is stored for a period of time corresponding to the minimum delay without undergoing further processing. However, it is possible for some processing to be performed on the packet during this minimum delay, such as updating headers used for routing the packet. The packet is not scheduled for transmission until after the minimum delay has elapsed.
  • Tearliest(i) denotes an earliest allowable transmittal time of the packet, and is obtained by summing an arrival time, Tarr(i), with MinDelay(i).
  • Tearliest(i+1) denotes an earliest allowable transmittal time of the packet, and is obtained by summing Tarr(i+1) with MinDelay(i+1), and so forth.
  • the scheduling delays represent a time in which the packet waits to be transmitted on the output network interface, e.g., due to competition with other packets. This delay may be initially unknown, but the transmittal time, Ttrans(i) in Node(i) or Ttrans(i+1) in Node(i+1), can be determined as the current time when the packet has its turn for transmittal.
  • a packet has its turn for transmittal, e.g., when it is ready to be transmitted and there is no other higher priority packet waiting to be transmitted.
  • a packet may be ready to be transmitted when the earliest allowable time for transmission has been reached, or sometime after.
  • MinDelay(i+1) is less than MinDelay(i).
  • in another example, MinDelay(i+1) is greater than MinDelay(i). This demonstrates how the minimum delay can change in successive nodes that are traversed by a packet.
  • the minimum delay at a given node is adjusted to account for the time period of Ttrans-Tearliest at the prior node from which the packet is received.
  • the minimum delay at the given node is smaller when the time period of Ttrans-Tearliest at the prior node is larger. That is, the minimum delay at the given node is a decreasing function of the time period of Ttrans-Tearliest at the prior node.
  • FIG. 4B1 depicts the time period MAX consistent with FIG. 4B.
  • MAX = SchedDelay(i) + MinDelay(i+1) + NSL (non-scheduling latency).
  • this can be considered the desired delay for each hop, with the SchedDelay at an earlier node being unpredictable and the MinDelay at a later node adjusted to counteract the variation in the SchedDelay.
  • FIG. 4B2 depicts example process steps for determining MinDelay(i+1) in FIG. 4B.
  • the steps include measuring, directly or indirectly, SchedDelay(i) on Node(i).
  • a further step includes forcing a delay of MinDelay(i+1) on Node(i+1).
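The two steps above can be sketched as a calculation: rearranging MAX = SchedDelay(i) + MinDelay(i+1) + NSL gives the delay to force on the next node. The function and parameter names below are illustrative, as is the clamping of negative results to zero:

```python
def min_delay_next(max_per_hop, sched_delay_i, non_sched_latency):
    """Rearranging MAX = SchedDelay(i) + MinDelay(i+1) + NSL:
    the next node's forced delay absorbs whatever scheduling delay
    the current node measured. Negative results are clamped to zero."""
    return max(0.0, max_per_hop - sched_delay_i - non_sched_latency)

# illustrative numbers: a larger SchedDelay(i) yields a smaller MinDelay(i+1)
assert min_delay_next(100.0, 30.0, 10.0) == 60.0
assert min_delay_next(100.0, 70.0, 10.0) == 20.0
```

The per-hop total thus stays at MAX regardless of how the scheduling delay varies, which is the in-time guarantee the figure illustrates.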
  • FIG. 4C depicts an example of delays experienced by a packet at Node(i) and Node(i+1), including forced minimum delays, MinDelay(i) and MinDelay(i+1), respectively, additional delays, AddDelay(i) and AddDelay(i+1), respectively, scheduling delays, SchedDelay(i) and SchedDelay(i+1), respectively, and a corresponding hop latency, Hop_Latency(i).
  • the additional delays are optional and can represent a time used in transferring the packet from one location to another in the node. For example, this can be the time used in transferring the packet from the delay circuit 425 to the queue 440 in FIG. 3B. This delay may be unknown. In this case, the delay circuit may itself implement a queue so that the packet is stored in two consecutive queues: the delay circuit 425 and then the queue 440.
  • the scheduling can occur immediately afterwards without transferring the packet to another location and incurring an additional delay.
  • a time period t1 is the time between Tearliest(i) and Ttrans(i).
  • the time period t1 represents AddDelay(i)+SchedDelay(i), or Ttrans(i)-Tearliest(i).
  • t1 is an example of an extra delay, which is a delay after MinDelay.
  • a time period t2 is the time between Tarr(i+1) and Tearliest(i+1).
  • Hop_Latency(i) = AddDelay(i) + SchedDelay(i) + MinDelay(i+1) + non-scheduling latency.
  • FIG. 4C1 depicts the time period LATENCY(i) consistent with FIG. 4C.
  • LATENCY(i) = AddDelay(i)+SchedDelay(i)+MinDelay(i+1). Like MAX in FIG. 4B1, this can conceptually be considered the desired latency (or delay) for each hop.
  • FIG. 4C2 depicts example process steps for determining MinDelay(i+1) in FIG. 4C.
  • the steps provide a calculation on Node(i) once Ttrans(i) is known.
  • MinDelay(i+1) is therefore equal to LATENCY(i) minus an extra delay.
  • SchedDelay(i) and AddDelay(i) are initially unknown, but a (likely) maximum for LATENCY(i) may be calculated.
  • LATENCY(i) is large enough so that LATENCY(i) > (Ttrans(i)-Tarr(i)-MinDelay(i)).
  • the formula for t1 calculates AddDelay(i)+SchedDelay(i) from the measured values of Tarr(i) and Ttrans(i). Then, MinDelay(i+1) can be calculated and sent to Node(i+1).
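The calculation in FIG. 4C2 can be sketched as follows; the function name and the sample time values are illustrative, and negative results (a packet that has already overspent its budget) are returned as-is, since the text discusses both clamping and carrying negative values forward:

```python
def min_delay_next(latency_i, t_trans, t_arr, min_delay_i):
    """FIG. 4C2 sketch: t1 = Ttrans(i) - Tearliest(i) is the extra delay
    (AddDelay + SchedDelay) actually incurred at Node(i); the next node
    is asked to wait for whatever is left of the per-hop LATENCY budget."""
    t_earliest = t_arr + min_delay_i
    t1 = t_trans - t_earliest            # extra delay beyond MinDelay(i)
    return latency_i - t1                # MinDelay(i+1); may be negative

# illustrative values: budget 50, extra delay 20 -> next node waits 30
assert min_delay_next(50, t_trans=120, t_arr=80, min_delay_i=20) == 30
```

The larger the extra delay at Node(i), the smaller the forced delay at Node(i+1), keeping the two-hop total at LATENCY(i).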
  • FIG. 4D depicts an example of delays experienced by a packet at Node(i) and Node(i+1), including forced minimum delays, MinDelay(i) and MinDelay(i+1), respectively, additional delays, AddDelay(i) and AddDelay(i+1), respectively, scheduling delays, SchedDelay(i) and SchedDelay(i+1), respectively, serialization delays, SerDelay(i) and SerDelay(i+1), respectively, and a corresponding hop latency, Hop_Latency(i).
  • the serialization delay, SerDelay(i) accounts for the time to transmit the packet on a link between nodes.
  • the node can determine SerDelay(i) based on a packet size (length), e.g., in bits, divided by a transmittal rate of the output network interface in bits per second.
  • SerDelay(i) can be determined for each packet based on its packet size. The size can be read from a field in the packet. For example, IP packets have a total length field. This is a 16-bit field which indicates the entire packet size in bytes, including the header and the payload data.
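The per-packet serialization delay can be sketched as follows; the byte offsets follow the standard IPv4 header layout (the 16-bit total length field occupies header bytes 2-3), while the sample packet and link rate are illustrative assumptions:

```python
def serialization_delay(packet: bytes, link_rate_bps: float) -> float:
    """SerDelay = packet size in bits / link rate in bits per second.
    For an IPv4 packet, the 16-bit total length field (header bytes 2-3)
    gives the full packet size in bytes, header and payload included."""
    total_length_bytes = int.from_bytes(packet[2:4], "big")
    return (total_length_bytes * 8) / link_rate_bps

# minimal IPv4 header with total length = 1500 bytes, on a 1 Gbit/s link
header = bytes([0x45, 0x00, 0x05, 0xDC]) + bytes(16)
delay = serialization_delay(header, link_rate_bps=1e9)
assert abs(delay - 12e-6) < 1e-12   # 1500 B * 8 / 1e9 = 12 microseconds
```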
  • a time period t1 is the time between Tearliest(i) and Ttrans(i).
  • the time period t1 represents AddDelay(i)+SchedDelay(i), or Ttrans(i)-Tearliest(i).
  • a time period t2 is the time between Tarr(i+1) and Tearliest(i+1).
  • FIG. 4D1 depicts the time period LATENCY(i) consistent with FIG. 4D.
  • LATENCY(i) = AddDelay(i)+SchedDelay(i)+SerDelay(i)+MinDelay(i+1).
  • FIG. 4D2 depicts example process steps for determining MinDelay(i+1) in FIG. 4D.
  • the steps provide a calculation on Node(i) once Ttrans(i) is known.
  • Ttrans(i) is the time when the packet starts to be serialized onto the output network interface, i.e., the first bit of the packet is transmitted.
  • Tarr(i+1) is the time when the packet has completely arrived on Node(i+1), i.e., the last bit of the packet has arrived.
  • FIG. 5A is a flowchart of an example process for latency-based forwarding of packets, consistent with FIGS. 4B to 4C2. The processing is discussed in connection with an example node, Node(i).
  • Step 500 includes, at Node(i), receiving a packet from a previous node, Node(i-1), on an input network interface (I/F) (e.g., 410a in FIG. 3B) and storing the corresponding arrival time, Tarr(i).
  • Step 501 includes extracting, e.g., reading, a minimum delay, MinDelay(i), from an information field in the packet. For example, see the field 613 in FIG. 6A. If MinDelay(i) < 0, the packet may be discarded, in one approach. This may occur when Ttrans-Tearliest is excessively long at the prior node. A discarded packet may be detected by an error detection process and re-sent by the sending node.
  • alternatively, if MinDelay(i) would be less than 0, it can optionally be set equal to zero in the packet. Similarly, a negative MinDelay(i) can lead to the next node reducing MinDelay(i+1) accordingly such that the packet can potentially make up for the previous excessive delay.
  • Tearliest(i) is a time of a clock. This could be a time of day or an amount of time since the node was activated, for example. The node then ensures that the packet is held at least until this time is reached.
  • Step 503 includes delaying further processing, such as scheduling, of the packet until at least Tearliest(i).
  • one or more processors execute instructions to enqueue a packet into a pre-existing scheduler after step 503, where this pre-existing scheduler is responsible for performing step 504.
  • the pre-existing scheduler can be a simple queue, for example.
  • instead of waiting until Tearliest(i), it is possible to start a countdown timer which will count down from MinDelay(i) to zero, at which time the transmission of the packet is allowed. Steps 502 and 503 therefore more generally involve enforcing an earliest allowable transmittal time for the packet.
  • Step 504 involves scheduling transmission of the packet to the output network I/F (e.g., 430 in FIG. 3B) and determining the transmittal time, Ttrans(i), as a determined transmittal time.
  • the scheduling can involve performing a process in which the packet is allowed to access the output network I/F.
  • the scheduling process can arbitrate access to the output network I/F from among competing packets in respective competing traffic flows. That is, the scheduling process schedules access to the output network interface by a data packet among competing data packets.
  • the scheduling can identify a specific time at which a packet is transmitted as a time at which the packet has its turn for transmission on the output network I/F.
  • for example, the packet has its turn when the output network I/F is available since no other packets are being transmitted, and the packet’s priority is higher than the priorities of other competing packets.
  • the scheduling occurs whenever the outgoing interface has just finished serializing the previous packet and the scheduler now needs to look for the next packet. At this point, it looks for, e.g., the PIFO or FIFO that best meets one or more conditions as described herein, such as earliest rank and highest priority. However, it is also possible to make this decision earlier than when the prior packet finishes serialization. In this approach, the scheduler picks a packet which is best for transmission at a given time, but this packet may not be best later when the prior packet has finished serializing, because another, higher priority packet could have arrived in the meantime.
  • Step 504 may be responsive to a packet priority as discussed further below.
  • Step 505 includes, when Ttrans(i) is known, calculating a new minimum delay value, MinDelay(i+1), from LATENCY(i)-(Ttrans(i)-Tarr(i)-MinDelay(i)), as discussed in connection with FIG. 4C2.
  • Ttrans(i) is known at the moment the packet is ready to be transmitted, e.g., at the time the packet starts to be serialized.
  • Ttrans(i) can be the current time at which the packet starts to be serialized.
  • Ttrans(i) is known before the time the packet is ready to be transmitted. This can occur, e.g., when the scheduler sets a specific time in advance for transmitting the packet.
  • Step 505 can include determining a transmittal time for a data packet which is no sooner than the earliest allowable time for transmittal. This can involve, e.g., noting the time of a clock at the moment the packet is ready to be transmitted, or determining the moment at which the packet is ready to be transmitted without necessarily noting the corresponding time on the clock.
  • the LATENCY is no greater than a sum of burst sizes per packet and a maximum data packet size across the plurality of traffic flows divided by a rate at which data packets are serialized onto the output network interface.
  • the burst size refers to an amount of data that is allowed to be transmitted at the peak bandwidth rate. For example, assume each ith traffic flow has a packet size with bi bits and that m is the maximum packet size. At a given time, a packet from each traffic flow is buffered at the node while another packet is being serialized for transmission.
  • the LATENCY is the time needed to transmit a packet from each traffic flow and the packet being serialized.
  • the serialization rate (SR) is applied to sum_i(bi)+m bits, so that LATENCY ≤ (sum_i(bi)+m)/SR.
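The bound above can be worked through numerically; the flow count, burst sizes, and link rate below are illustrative assumptions:

```python
def latency_bound(burst_sizes_bits, max_packet_bits, serialization_rate_bps):
    """Worst case: one burst from each competing flow is ahead of ours,
    plus one maximum-size packet already being serialized:
    LATENCY <= (sum_i(b_i) + m) / SR."""
    return (sum(burst_sizes_bits) + max_packet_bits) / serialization_rate_bps

# three flows bursting 12 kbit each, a 12 kbit max packet, a 1 Gbit/s link
bound = latency_bound([12_000, 12_000, 12_000], 12_000, 1e9)
assert abs(bound - 48e-6) < 1e-12   # 48 microseconds
```

Choosing LATENCY(i) at or above this bound guarantees the budget is large enough to cover any scheduling delay the competing flows can cause.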
  • MinDelay(i+1) is calculated from LATENCY(i)-(Ttrans(i)-Tarr(i)-MinDelay(i))-SerDelay(i).
  • Step 506 includes storing MinDelay(i+1) (or the extra delay, as described in the previous paragraph) in the information field of the packet, and transmitting the packet to the next node, Node(i+1), on the output network I/F. The information field is therefore updated with the new value of the minimum delay.
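Steps 500 through 506 can be condensed into a single per-node sketch. The dictionary field name below stands in for field 613, and all names and sample times are invented for illustration, not an actual packet encoding:

```python
def forward(packet, now_arrival, now_transmit, latency_i):
    """Sketch of steps 500-506 for one node. `packet` is a dict whose
    'min_delay' entry stands in for information field 613.
    Returns (t_earliest, updated_packet), or None if the packet is dropped."""
    t_arr = now_arrival                               # step 500: store Tarr(i)
    min_delay = packet["min_delay"]                   # step 501: extract MinDelay(i)
    if min_delay < 0:
        return None                                   # optionally discard
    t_earliest = t_arr + min_delay                    # step 502: Tearliest(i)
    # steps 503-504: the scheduler must not release the packet before
    # t_earliest; now_transmit is the transmittal time it determines
    assert now_transmit >= t_earliest
    t_trans = now_transmit
    # step 505: the unused part of the per-hop budget becomes MinDelay(i+1)
    packet["min_delay"] = latency_i - (t_trans - t_arr - min_delay)
    return t_earliest, packet                         # step 506: transmit onward

t_e, pkt = forward({"min_delay": 20}, now_arrival=80, now_transmit=120, latency_i=50)
assert t_e == 100 and pkt["min_delay"] == 30
```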
  • FIG. 5B is a flowchart of an example process for performing steps 503 and 504 of FIG. 5A using a Push-In, First-Out (PIFO) buffer. These steps involve enforcing the earliest allowable transmittal time for the packet. See also the example implementation of FIG. 8 and the example buffer in FIG. 10.
  • Step 510 includes determining a priority of the packet based on a priority field (PRIORITY) and/or hop count index field of the packet. For example, see the fields 614, 616 and 618 in FIG. 6A. A default priority can also be used, for example when no packet fields indicate a different priority.
  • Step 511 includes selecting a PIFO buffer having a priority based on the packet’s priority. For example, the buffers in FIG. 10 have priorities K through 1, where 1 is the highest.
  • the buffer can be selected from among a plurality of available buffers having different respective priorities and corresponding optionally with different values of LATENCY. In this way, different traffic flows and/or packets can be processed with different priorities. A packet with a relatively high priority will be transmitted from the node on the output network interface sooner than a packet with a relatively low priority.
  • the scheduled dequeue time is the time at which the packet can be removed from the buffer and transmitted, if the output network interface is available.
  • the scheduled dequeue time can be stored as a time stamp associated with the packet, for example.
  • Step 513 includes, at the scheduled dequeue time, determining if the packet has priority for transmission, e.g., over other competing packets.
  • FIG. 5C is a flowchart of an example process for performing steps 503 and 504 of FIG. 5A using a First-In, First-Out (FIFO) buffer. These steps involve enforcing the earliest allowable transmittal time for the packet. See also the example buffer in FIG. 11.
  • Step 520 includes determining a priority (PRIORITY) of the packet based on a priority field of the packet. For example, see the field 614 in FIG. 6A.
  • Step 521 includes selecting a FIFO buffer having a priority based on a combination of the packet’s priority and/or an input network I/F of the packet.
  • the buffers in FIG. 11 have priorities K through 1 for each of the different input network interfaces 1 through Ex.
  • the identity of the input network interface can be noted by the processor, for example.
  • Step 523 includes, at the scheduled dequeue time, determining if the packet has priority for transmission.
  • FIG. 6A depicts an example format of a data packet consistent with the processes of FIG. 5A to 5C.
  • the packet 600 includes a header 610 and a payload 630.
  • the packet can follow the Internet Protocol (IP).
  • the header can include various fields, including one or more of the fields 611-619. Each field can be provided by allocating a specified number of bits. Some of the fields may not need to be used in some embodiments, while additional fields which are not shown may also be included within the packet and used.
  • the field 611 provides a source address, which is the network address of the sending node.
  • the field 612 provides a destination address, which is the network address of the destination node.
  • a field 613 provides the minimum delay which is to be implemented at the node. As discussed herein, in some embodiments the field 613 can alternatively provide a delay already experienced by the packet at the previous node, which can then be used to compute the minimum delay at the next node.
  • a field 614 provides a priority of the packet, which can be used in selecting a buffer in which the minimum delay is enforced, where the priority corresponds to a latency.
  • a field 616 provides a hop count index (hc), which is a number of hops made so far by the packet since it was transmitted by the sending node. A hop refers to the packet travelling to a next node in a succession of nodes.
  • the field 617 provides a hop maximum (maxhop) which is the maximum number of hops which the packet is allowed to make.
  • a field 618 provides a table of priority vs. hop count index. For example, see FIG. 6B. This allows the priority to be customized for each node and each hop of the packet, allowing fine tuning of the effective priority over the path of the packet. For example, assume three priority levels are provided, P1, P2 and P3, where P1>P2>P3. The priority of a packet could alternate between P1 and P2 at successive nodes to provide an effective priority halfway between P1 and P2. For example: P1 on hop 1 and Node(i), P2 on hop 2 and Node(i+1), P1 on hop 3 and Node(i+2), and so forth, as depicted in FIG. 6B.
  • the table can optionally be truncated at each hop, removing information that is not necessary for subsequent hops.
  • the latency corresponds to the priority
  • the latency can also be customized for each node and each hop of the packet.
  • three latency levels Lat1, Lat2 and Lat3 are provided which correspond to P1, P2 and P3, respectively, where Lat1>Lat2>Lat3.
  • the latency of a packet could alternate between Lat1 and Lat2 at successive nodes to provide an effective latency halfway between Lat1 and Lat2. For example: Lat1 on hop 1 and Node(i), Lat2 on hop 2 and Node(i+1), Lat1 on hop 3 and Node(i+2), and so forth.
  • the priority and latency of a packet are fixed at each node in a series of nodes traversed by a packet.
  • when a new packet is created by a sending node, the sending node can determine the time budget for transmitting the packet to the end node, and determine the latency and/or priority at each node accordingly.
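The budgeting step above can be sketched in a few lines. This is an illustrative Python sketch, not part of the disclosed embodiments; the function name and the even-split policy are assumptions (other split policies are equally possible).

```python
def per_hop_delay_budget(total_budget_ms: float, num_hops: int) -> list:
    """Split an end-to-end time budget evenly across the hops of the path,
    yielding a per-hop latency target for each node."""
    if num_hops <= 0:
        raise ValueError("path must have at least one hop")
    return [total_budget_ms / num_hops] * num_hops
```

The sending node would then map each per-hop latency target to a priority level (or a minimum-delay field value) before transmitting the packet.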
  • one of the buffers can be selected based on the latency and/or priority of a packet.
  • a traffic flow id field 619 can be used by the sending node to identify a traffic flow to which a packet belongs. As discussed, the latency guarantee does not need to rely on this field in embodiments of the present technology.
  • the data packet can contain one or more fields that together constitute a traffic flow identifier, whose value is unique to packets of the traffic flow.
  • the payload 630 comprises data 631. The data is used by an application and can have a fixed or varying length in different packets.
  • FIG. 6B depicts an example format of the table field 618 of the packet of
  • the table includes values for the hop count index (hc), e.g., 1, 2 and 3, and corresponding values for the priority, P1, P2 and P1, respectively, as discussed previously.
  • one or more processors execute instructions to determine the PRIORITY based on a sequence of per-hop priority information fields, e.g., P1-P3, and a hop count index (field 616) in the packet.
  • the value of the index after receiving the packet determines the index of the item in the sequence of priorities which is then used as the PRIORITY value.
  • the hop count index is incremented in the packet before sending the packet to the next hop.
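The lookup and increment described above can be sketched as follows. This is an illustrative Python sketch, not part of the disclosed embodiments; the dict-based packet and the key names (hc, maxhop, hopprio) are assumptions mirroring the fields of FIG. 6A.

```python
def priority_for_this_hop(pak: dict) -> str:
    """Select the priority from the per-hop table (field 618) using the hop
    count index (field 616) as received, then increment the index for the
    next hop, capped at maxhop (field 617)."""
    prio = pak["hopprio"][pak["hc"]]
    if pak["hc"] < pak["maxhop"]:
        pak["hc"] += 1
    return prio
```

With the FIG. 6B example table, successive nodes would apply P1, P2 and P1, yielding the alternating effective priority discussed previously.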
  • the node can alternatively be pre-configured, e.g., before the packet is received, with some of the information of the fields of the packet, such as the table
  • FIG. 7 depicts an example of processes performed by Node(i) and Node(i+1) of FIG. 3A, consistent with FIG. 5A.
  • the packet is provided on the input I/F 410a (FIG. 3A).
  • the process 702 receives and forwards the packet.
  • the receiving can include deserializing the packet, as discussed.
  • the forwarding can involve routing functions such as looking up the address of a next node to which the packet is to be transmitted and updating a corresponding field in the packet.
  • Step 703 is performed after step 702 in this example. However, this could be in a different order.
  • step 702 can be performed on a separate processor, such as on an ingress line card, whereas the remainder of the processing at the node at steps 703 and the following steps is performed on an egress line card.
  • By performing step 703 after step 702, only the egress line card has to be modified to implement the techniques discussed herein.
  • the ingress and egress line cards may have separate clocks, so that using the clock on the egress line card to obtain both Tarr(i) in step 703 and Ttrans(i) in step 706 provides more consistency.
  • if step 702 introduces a significant variable delay, such as an unknown but bounded latency of transmitting the packet from the ingress to the egress line card, then it can be beneficial to perform step 703 in the ingress line card before step 702 instead.
  • the variable delay is compensated for by step 704, which is performed relative to the step 703 timestamp. This assumes the same clock is used for steps 703 and 706.
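Because the hold time is always computed relative to the step 703 timestamp on the same clock, the enforcement of MinDelay(i) can be sketched as follows. This is an illustrative Python sketch, not part of the disclosed embodiments; names are assumptions.

```python
def remaining_hold_time(t_arr: float, min_delay: float, t_now: float) -> float:
    """Time the packet must still be held before its earliest allowable
    transmittal time, Tearliest = Tarr + MinDelay. Any variable delay
    between timestamping and this check is absorbed, since the hold is
    measured against Tarr rather than against the current time alone."""
    return max(0.0, (t_arr + min_delay) - t_now)
```

Note that only differences between timestamps on the same clock matter here, which is why timestamps of only local significance suffice.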
  • An apparatus which is provided for use with a node in a network as described herein can comprise an egress card alone or in combination with an ingress card, in one approach.
  • the process 704 extracts and enforces the minimum delay, MinDelay(i), represented by a parameter pak.delay which can be included in or determined from fields in the received packet.
  • the process 705 provides a possible additional delay, as discussed.
  • the transmitting can include serializing the packet on the output I/F 430, as discussed.
  • the packet 710, including pak.delay, is thus provided on the link L2 (see also FIG. 3A) between the nodes.
  • Node(i+1) performs similar processing as Node(i) but using the new value of pak.delay.
  • the packet is provided on the input I/F 721.
  • the process 722 receives and forwards the packet.
  • the clocks can be unsynchronized in terms of time, phase and frequency. This avoids the overhead costs involved when such synchronization is necessary, such as in the Large-Scale Deterministic IP Network of the Internet Engineering Task Force (IETF).
  • Clocks in a network can be synchronized using various techniques.
  • One technique is the Precision Time Protocol (PTP) in which clocks synchronize with a master clock through the exchange of network messages.
  • each node can forward packets in a time slot based on cycle identifiers carried in the packets.
  • one or more processors execute instructions to determine Ttrans and Tarr based on a clock whose timestamps are of only local significance, e.g., within the node.
  • the clock can start to count, e.g., from 0, from the time the node is powered on.
  • the process 724 extracts and enforces the minimum delay, MinDelay(i+1 ), represented by a parameter pak.delay.
  • the process 725 provides a possible additional delay, such as depicted in FIGS. 4A-4D.
  • the transmitting can include serializing the packet on the output I/F 728, as discussed.
  • LATENCY(i) can be equal to the sum of: (a) the processing time of the packet at Node(i) after the process 704 up until the packet is transmitted at step 707, and (b) the processing time of the packet at Node(i+1) at steps 722-724. These are corresponding processing points of the nodes at the respective earliest allowable times for transmittal of the packet, e.g., Tearliest(i) and Tearliest(i+1).
  • LATENCY(i) can include the serialization time.
  • FIG. 8 depicts an example of processes performed by Node(i) of FIG. 3A, consistent with FIG. 5A, 5B and 6A, where PIFO buffers are used. See also FIG. 10.
  • the packet is provided on the input I/F 410a.
  • the process 702 receives and forwards the packet.
  • the process 801 extracts the minimum delay, MinDelay(i), represented by pak.delay (field 613 in FIG. 6A), the hop maximum field (maxhop, field 617 in FIG. 6A), the hop count index (hc, field 616 in FIG. 6A) and the table of hc vs. priority (hopprio, table field 618 in FIG. 6A).
  • the process 802 delays and schedules the packet with a set of per-priority PIFO queues.
  • the process 803 provides further details of the processes 802 and 707.
  • the transmitting can include serializing the packet on the output I/F 430.
  • a next command (If (pak.hc < pak.maxhop) pak.hc++) increments the hop count index by one hop if it has not reached the maximum allowable number of hops, maxhop.
  • the packet is then enqueued into pifo[k], denoting a PIFO which corresponds to the priority k, with a rank denoted by pak.rank.
  • a particular kth PIFO buffer is selected from among a plurality of available PIFO buffers based on the priority.
  • a number K of PIFO buffers are depicted ranging from PIFO K having a lowest priority and a highest maximum latency, pifo[K].max, to PIFO 1 having a highest priority and a lowest maximum latency, pifo[1].max.
  • the packets are dequeued in the order of their rank.
  • the rank can represent a timestamp at which the packet can be dequeued.
  • the packet at the head of the PIFO always has the earliest timestamp and is the next that can be dequeued.
  • a packet is dequeued from an nth PIFO buffer, pifo[n], when three conditions are met. First, the output interface is free to take another packet, such that there is no other packet currently being sent (“output I/F free”). Second, the rank of the head of the PIFO is equal to or smaller than the current timestamp (“pifo[n].head.rank ≤ tnow”), which means the timestamp, Tearliest, of the first or head packet in the PIFO is equal to or older than the current timestamp. In other words, the earliest transmission time has been reached for the packet at the head of the PIFO.
  • This head packet of a PIFO is the packet which is next to be released or dequeued from the PIFO.
  • the third condition is that there is no other higher priority PIFO in which the earliest transmittal time has been reached for the head packet. This is denoted by the command: “for all j < n: pifo[j].head.rank > tnow OR empty(pifo[j])”.
  • packets are taken from pifo[1], which has the highest priority, and from lower priority PIFOs, such as pifo[2], pifo[3], ..., only when pifo[1] does not have a packet ready to be transmitted.
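The three dequeue conditions can be sketched as a selection function. This is an illustrative Python sketch, not part of the disclosed embodiments; here each queue is simplified to a head-first list of ranks, with index 0 of the outer list standing for pifo[1], the highest priority.

```python
def select_pifo_to_serve(pifos, t_now, output_free):
    """Return the index of the PIFO to dequeue from, or None.
    Scans from highest to lowest priority, so a lower-priority PIFO is
    served only when no higher-priority PIFO has a due head packet."""
    if not output_free:              # condition 1: output I/F is free
        return None
    for n, q in enumerate(pifos):    # condition 3: highest priority first
        if q and q[0] <= t_now:      # condition 2: head rank reached
            return n
    return None
```

When no PIFO has a due head packet, the output interface stays idle even if lower-ranked packets are queued, which is exactly the intentional-delay behavior described herein.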
  • the term “pifo[i].max” can be a pre-configured maximum intentional delay time through pifo[i]. Specifically, for the highest priority PIFO buffer, PIFO[1], each traffic flow has a burst size in bits. The maximum latency for these flows is LATENCY(i).
  • SchedDelay(i) is the sum of these burst sizes divided by the link rate, that is, the time it would take if all flows would burst at the same time to then send all these bursts consecutively. This can be expressed by:
  • MAX(1) = sum(bursts of flows priority[1]) / I/F bit rate
  • MAX(j) = MAX(j-1) + sum(bursts of flows priority[j]) / I/F bit rate.
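The recurrence above accumulates worst-case burst time from the highest priority class down, and can be sketched as follows. This is an illustrative Python sketch, not part of the disclosed embodiments; burst sizes in bits and a bit rate in bit/s are assumptions.

```python
def max_sched_delays(bursts_by_prio, bit_rate):
    """Cumulative worst-case scheduling delay per priority class, following
    MAX(j) = MAX(j-1) + sum(bursts of flows priority[j]) / I/F bit rate.
    Index 0 of the input corresponds to priority[1], the highest."""
    delays, acc = [], 0.0
    for bursts in bursts_by_prio:
        acc += sum(bursts) / bit_rate
        delays.append(acc)
    return delays
```

For example, one 8000-bit burst at priority 1 and two 8000-bit bursts at priority 2 on a 1 Mbit/s link give MAX(1) = 8 ms and MAX(2) = 24 ms.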
  • one or more processors execute instructions to delay the data packet through a timed Push In, First Out (PIFO) queue, where: a) a PIFO rank of the packet is the earliest time of transmittal corresponding to a sum of the arrival time and the minimum delay extracted from the information field in the data packet, and b) the head of the PIFO buffer is served no earlier than its rank.
  • the one or more processors execute instructions to: a) utilize a separate timed PIFO for each value of PRIORITY as defined in FIG. 6A, and b) select the PIFO into which to insert the packet from the PRIORITY, where c) a PIFO rank of the packet is the earliest time of transmittal, corresponding to a sum of the arrival time and the minimum delay extracted from the information field in the data packet, and d) the head of each timed PIFO is served no earlier than its rank, when there is no timed PIFO of higher priority whose head can be served.
  • FIG. 9 depicts an example of processes performed by Node(i) of FIG. 3A, consistent with FIG. 5A, 5C and 6A, where FIFO buffers are used. See also FIG. 11.
  • the packet is provided on the input I/F 410a.
  • the process 702 receives and forwards the packet.
  • the process 901 extracts the minimum delay, MinDelay(i), represented by pak.delay (field 613 in FIG. 6A) and the priority field (field 614 in FIG. 6A).
  • the process 902 delays and schedules the packet with a set of per-priority FIFO queues.
  • the process 903 provides further details of the processes 902 and 707.
  • the transmitting can include serializing the packet on the output I/F 430.
  • the packet 710b is thus provided on the link L2 with fields including 1) pak.delay, and 2) priority data.
  • the packet is then enqueued into fifo[k,e], denoting a FIFO which corresponds to the priority k and the input I/F e, with a rank denoted by pak.rank.
  • a number Ex of FIFO buffers are available.
  • a number K of FIFO buffers are depicted.
  • a FIFO (K,1) has a lowest priority, priority[K], and a highest maximum latency.
  • a FIFO (1,1) has a highest priority, priority[1], and a lowest maximum latency.
  • a FIFO (K,Ex) has a lowest priority, priority[K], and a highest maximum latency.
  • a FIFO (1,Ex) has a highest priority, priority[1], and a lowest maximum latency.
  • the packets are dequeued in the order of their rank, as discussed.
  • a packet is dequeued from a fifo[k,e] when three conditions are met.
  • the output interface is free to take another packet (“output l/F free”).
  • the rank of the head of the FIFO is equal to or smaller than the current timestamp (“fifo[k,e].head.rank ≤ tnow”), which means the timestamp, Tearliest, of the first or head packet in the FIFO is equal to or older than the current timestamp. In other words, the earliest transmittal time has been reached for the packet at the head of the FIFO.
  • the third condition is that there is no other higher priority FIFO in which the earliest transmittal time has been reached for the head packet. This is denoted by the command: “for all j < k: fifo[j,*].head.rank > tnow”.
  • the term “priority[pak.priority].max” can be a pre-configured maximum intentional delay time through fifo[k,e]. Specifically, for the highest priority FIFO, FIFO[1,Ex], each traffic flow has a burst size in bits. The maximum latency for these flows is LATENCY(i).
  • the scheduling delay, SchedDelay(i) is the sum of these burst sizes divided by the link rate.
  • one or more processors execute instructions to enqueue the data packet into a first in, first out (FIFO) buffer with a scheduled dequeue time corresponding to a sum of the arrival time and the minimum delay extracted from the information field in the data packet.
  • the one or more processors further execute the instructions to: a) utilize a separate timed FIFO for each combination (PRIORITY, INTERFACE) of the possible values of PRIORITY and a valid incoming INTERFACE index, and b) select the FIFO into which to insert the packet from the PRIORITY and the interface from which the packet was received, where c) the FIFO rank of the packet is the earliest time of transmittal, corresponding to a sum of the arrival time and the minimum delay extracted from the information field in the data packet, and d) a head of a timed FIFO is served when: i) the current time is equal to or later than the rank of the head of the FIFO, ii) there is no FIFO whose PRIORITY is higher and that can be served, and iii) there is no FIFO of equal PRIORITY whose head’s rank is earlier.
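The per-(PRIORITY, INTERFACE) serve rule can be sketched as a selection function. This is an illustrative Python sketch, not part of the disclosed embodiments; it assumes a mapping from (priority, interface) keys to head-first lists of ranks, with a lower priority index meaning higher priority.

```python
def select_fifo_to_serve(fifos, t_now):
    """Return the (priority, interface) key of the FIFO to serve, or None.
    Among FIFOs whose head packet is due, the highest priority wins;
    ties among equal-priority FIFOs go to the earliest head rank."""
    due = [(key[0], q[0], key) for key, q in fifos.items()
           if q and q[0] <= t_now]
    if not due:
        return None
    prio, rank, key = min(due)  # orders by priority index, then head rank
    return key
```

Because each incoming interface gets its own FIFO per priority, ranks within any single FIFO arrive in non-decreasing order, which is what allows a plain FIFO to stand in for a PIFO here.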
  • FIG. 10 depicts an example of PIFO buffers, consistent with FIG. 8.
  • a Push-In, First-Out (PIFO) buffer or queue is a priority queue data structure that allows packets to be "pushed" or enqueued into an arbitrary location in the queue, but only dequeued from the head of the queue.
  • a packet is required to be enqueued at the bottom of a First-In, First-Out (FIFO) queue, and dequeued from the head of the queue.
  • PIFOs provide greater flexibility in packet scheduling processes, while FIFOs can be more easily implemented using existing hardware.
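A minimal timed PIFO of the kind described above can be sketched with a binary heap ordered by rank. This is an illustrative Python sketch, not part of the disclosed embodiments; class and method names are assumptions.

```python
import heapq

class TimedPifo:
    """Minimal timed PIFO: packets are pushed into an arbitrary position
    determined by their rank (earliest transmittal time), but only the
    head packet can be dequeued, and no earlier than its rank."""
    def __init__(self):
        self._heap = []
        self._seq = 0  # tie-breaker preserves insertion order for equal ranks

    def push(self, rank, pak):
        heapq.heappush(self._heap, (rank, self._seq, pak))
        self._seq += 1

    def pop_if_due(self, t_now):
        """Dequeue the head packet if its rank has been reached, else None."""
        if self._heap and self._heap[0][0] <= t_now:
            return heapq.heappop(self._heap)[2]
        return None
```

A hardware FIFO cannot reorder like this, which is why the FIFO variant of FIG. 9 relies on per-interface queues to keep ranks in order instead.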
  • a set of K buffers 1000 including PIFO[K] through PIFO[1], are depicted.
  • Each buffer has a respective input path 1003_K to 1003_1 connected to a multiplexer 1002 and a respective output path 1020_K through 1020_1 connected to a demultiplexer 1030.
  • the multiplexer can route a packet on an input path 1001 to one of the buffers. For example, as discussed in connection with FIG. 8, for a given input packet, one queue among K PIFO queues can be selected based on the packet’s priority field.
  • Each buffer can store a number of packets.
  • PIFO[1] stores packets 1010-1017.
  • a new packet received on the input path of a buffer can be stored in any desired location relative to the current packets in the buffer, as represented by the arrow 1004 for PIFO[1].
  • a head packet 1017 is output on the output path 1020_1 to the demultiplexer 1030 for transmission on an output path 1031.
  • the demultiplexer can select one of the PIFO buffers at a given time.
  • FIG. 11 depicts an example of FIFO buffers, consistent with FIG. 9.
  • a packet is required to be added to the bottom or tail of a FIFO queue, and dequeued from the head of the queue.
  • the multiplexer can route a packet on an input path 1101 to one of the buffers. For example, as discussed in connection with FIG. 9, for a given input packet, one queue among the K x Ex FIFO queues can be selected based on the packet’s priority field and the identity of the input I/F.
  • Each buffer can store a number of packets. For example, FIFO[K,Ex] stores packets 1110-1117.
  • a new packet received on the input path of a buffer is stored at the tail location of the buffer, as represented by the arrow 1103_[K,Ex] for FIFO[K,Ex].
  • a head packet 1117 is output on the output path 1120_[K,Ex] to the demultiplexer 1130 for transmission on the output path 1131.
  • the demultiplexer can select one of the FIFO buffers at a given time.
  • processor readable storage devices can include computer readable media such as volatile and non-volatile media, removable and non-removable media.
  • computer readable media may comprise computer readable storage media and communication media.
  • Computer readable storage media may be implemented in any method or technology for storage of information such as computer readable instructions, data structures, program modules or other data.
  • Examples of computer readable storage media include RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital versatile disks (DVD) or other optical disk storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to store the desired information and which can be accessed by a computer.
  • a computer readable medium or media does not include propagated, modulated, or transitory signals.
  • Communication media typically embodies computer readable instructions, data structures, program modules or other data in a propagated, modulated or transitory data signal such as a carrier wave or other transport mechanism and includes any information delivery media.
  • modulated data signal means a signal that has one or more of its characteristics set or changed in such a manner as to encode information in the signal.
  • communication media includes wired media such as a wired network or direct-wired connection, and wireless media such as RF and other wireless media. Combinations of any of the above are also included within the scope of computer readable media.
  • some or all of the software can be replaced by dedicated hardware logic components.
  • illustrative types of hardware logic components include Field-programmable Gate Arrays (FPGAs), Application-specific Integrated Circuits (ASICs), Application-specific Standard Products (ASSPs), System-on-a-chip systems (SOCs), Complex Programmable Logic Devices (CPLDs), special purpose computers, etc.
  • software stored on a storage device
  • the one or more processors can be in communication with one or more computer readable media/ storage devices, peripherals and/or communication interfaces.
  • a connection may be a direct connection or an indirect connection (e.g., via one or more other parts).
  • the element when an element is referred to as being connected or coupled to another element, the element may be directly connected to the other element or indirectly connected to the other element via intervening elements.
  • the element When an element is referred to as being directly connected to another element, then there are no intervening elements between the element and the other element.
  • Two devices are “in communication” if they are directly or indirectly connected so that they can communicate electronic signals between them.
  • the term “based on” may be read as “based at least in part on.”
EP21713836.1A 2021-01-19 2021-03-01 Auf garantierter latenz basierende weiterleitung Pending EP4264892A2 (de)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US202163139118P 2021-01-19 2021-01-19
PCT/US2021/020246 WO2021119675A2 (en) 2021-01-19 2021-03-01 Guaranteed latency based forwarding

Publications (1)

Publication Number Publication Date
EP4264892A2 true EP4264892A2 (de) 2023-10-25

Family

ID=76330845

Family Applications (1)

Application Number Title Priority Date Filing Date
EP21713836.1A Pending EP4264892A2 (de) 2021-01-19 2021-03-01 Auf garantierter latenz basierende weiterleitung

Country Status (3)

Country Link
EP (1) EP4264892A2 (de)
CN (1) CN116783876A (de)
WO (1) WO2021119675A2 (de)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE102023104025A1 (de) * 2022-02-18 2023-08-24 Hirschmann Automation And Control Gmbh Paketverzögerungen für zeitdeterministische Firewalls
DE102022118267B3 (de) * 2022-07-21 2024-01-25 Rockwell Collins Deutschland Gmbh Synchronisiertes Datennetzwerk-System und Verfahren zum Initialisieren sowie zum Synchronisieren desselben
CN117955928A (zh) * 2022-10-20 2024-04-30 中兴通讯股份有限公司 报文调度方法、网络设备、存储介质及计算机程序产品

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10560383B2 (en) * 2016-11-10 2020-02-11 Futurewei Technologies, Inc. Network latency scheduling
US10798012B2 (en) * 2017-10-30 2020-10-06 Cisco Technology, Inc. Jitter elimination and latency compensation at DetNet transport egress

Also Published As

Publication number Publication date
CN116783876A (zh) 2023-09-19
WO2021119675A3 (en) 2021-07-22
WO2021119675A2 (en) 2021-06-17


Legal Events

Date Code Title Description
STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: UNKNOWN

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE INTERNATIONAL PUBLICATION HAS BEEN MADE

PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: REQUEST FOR EXAMINATION WAS MADE

17P Request for examination filed

Effective date: 20230720

AK Designated contracting states

Kind code of ref document: A2

Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR

DAV Request for validation of the european patent (deleted)
DAX Request for extension of the european patent (deleted)