WO2024007334A1 - A device and methodology for hybrid scheduling using strict priority and packet urgentness - Google Patents


Info

Publication number
WO2024007334A1
Authority
WO
WIPO (PCT)
Prior art keywords
classes
class
data packets
transmission
indication
Prior art date
Application number
PCT/CN2022/104736
Other languages
French (fr)
Inventor
Siyu Tang
Guanhua ZHUANG
Mohamed ELATTAR
Huajie Bao
Original Assignee
Huawei Technologies Co., Ltd.
Priority date
Filing date
Publication date
Application filed by Huawei Technologies Co., Ltd. filed Critical Huawei Technologies Co., Ltd.
Priority to PCT/CN2022/104736 priority Critical patent/WO2024007334A1/en
Publication of WO2024007334A1 publication Critical patent/WO2024007334A1/en

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L47/00 Traffic control in data switching networks
    • H04L47/50 Queue scheduling
    • H04L47/62 Queue scheduling characterised by scheduling criteria
    • H04L47/6215 Individual queue per QOS, rate or priority
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L47/00 Traffic control in data switching networks
    • H04L47/50 Queue scheduling
    • H04L47/56 Queue scheduling implementing delay-aware scheduling
    • H04L47/564 Attaching a deadline to packets, e.g. earliest due date first
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L47/00 Traffic control in data switching networks
    • H04L47/50 Queue scheduling
    • H04L47/62 Queue scheduling characterised by scheduling criteria
    • H04L47/625 Queue scheduling characterised by scheduling criteria for service slots or service orders
    • H04L47/6275 Queue scheduling characterised by scheduling criteria for service slots or service orders based on priority

Definitions

  • This invention relates to an apparatus and method for scheduling network packets using a hybrid scheduling approach that introduces an additional parameter, an urgentness or a weight/quantum value, which is considered alongside strict priority when scheduling data packets for transmission.
  • An alternate way to address the issue of scheduling currently known is to employ algorithms that are specifically designed for providing an end-to-end (E2E) latency guarantee.
  • Examples of such algorithms that are known include RSCP/WFQ/VC/Jitter-EDD/UBS.
  • the scheduling optimality of these schemes is usually achieved at the cost of increased implementation complexity (e.g., per-flow queuing, sorted queues, a large number of FIFO queues, maintenance of flow state, or time/frequency synchronization), and as such cannot be supported by existing commodity hardware.
  • the Paternoster scheduling scheme was proposed to the IEEE 802.1 community.
  • Paternoster is a simple real-time packet bandwidth reservation, policing, queuing, and transmission scheduling algorithm that provides bounded end-to-end delays without requiring clock synchronization between network nodes. This was further developed to include the concept of multiple cyclic queuing and forwarding and is a variant of the paternoster scheduling algorithm previously proposed.
  • the Paternoster scheduling algorithm is compatible with a Differentiated Services architecture where packets are classified into 8 priority classes. Unlike a traditional Differentiated Services architecture (where a single queue is associated with each priority class), the Paternoster scheduling algorithm defines 4 output queues that are associated with each priority class on each output port, see figure 1.
  • Each priority class is associated with a unique epoch time. Queuing of data packets to be transmitted is performed in accordance with the bandwidth reservation for each priority class. During each epoch of data transmission, only the prior queue can transmit data packets. The prior queue is used for transmission only and does not receive packets. Instead, received data packets that are associated with any given reservation are added to the “current” queue. This continues for additional received data until the addition of a data packet would exceed that reservation’s bandwidth allocation for an epoch. In such a case, the data packets that would exceed the reservation’s bandwidth are added to the “next” and “last” queues respectively until each of their reservation’s bandwidths is also full.
  • An example of the queue rotation utilised in Paternoster scheduling is illustrated in figure 2.
  • the “current” queue becomes the “prior” queue
  • the “next” and “last” queues (and their remaining allocation for each reservation) become “current” and “next” respectively.
  • the previous prior queue (which should now be empty because the data packets have been transmitted in the previous epoch) becomes the new “last” queue.
  • This Paternoster operation repeats at each epoch such that the four queues alternate during each epoch.
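The queue rotation described above can be sketched in a few lines. This is an illustrative model of the published Paternoster behaviour, not an implementation taken from the patent; the class and method names are hypothetical.

```python
from collections import deque

# Illustrative sketch of Paternoster queue rotation: each priority class
# keeps four queues whose roles shift at every epoch boundary.
class PaternosterClass:
    ROLES = ("prior", "current", "next", "last")

    def __init__(self):
        self.queues = {role: deque() for role in self.ROLES}

    def rotate(self):
        """At the start of an epoch: current -> prior, next -> current,
        last -> next, and the emptied prior queue becomes the new last."""
        emptied = self.queues["prior"]  # should be empty after transmission
        self.queues = {
            "prior": self.queues["current"],
            "current": self.queues["next"],
            "next": self.queues["last"],
            "last": emptied,
        }

    def transmit_epoch(self):
        """Only the prior queue transmits during an epoch."""
        sent = list(self.queues["prior"])
        self.queues["prior"].clear()
        return sent

pclass = PaternosterClass()
pclass.queues["current"].extend(["p1", "p2"])  # packets within reservation
pclass.queues["next"].append("p3")             # overflow spills to "next"
pclass.rotate()
print(pclass.transmit_epoch())  # ['p1', 'p2']
```

After another rotation, the packet that had spilled into "next" reaches the prior queue and is transmitted in the following epoch.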
  • the packets from the different priority classes within each queue are scheduled in accordance with a strict priority scheduling scheme that is intrinsic to each class.
  • Fig. 4 illustrates using multiple values for the epoch of each class τi on a single output port. This requires a common base of epoch times between classes in order to segment the total time in a uniform manner. In an example wherein the cycle times of two classes are 200 microseconds and 125 microseconds, they cannot both be accommodated using multiple epoch values as there is no common base.
  • Fig. 4 illustrates the case where four values of epoch values are run simultaneously as in figure 3. In the example of figure 4, the epoch time for class 6 runs at the highest priority, e.g., has the shortest epoch and can be called for transmission the fastest. The classes are shown on the right hand side of figure 4.
  • the epochs of class 5 in figure 4 are called four times slower than those of class 6, as demonstrated by one epoch of class 5 taking the same time to call as four epochs of class 6; the buffer of class 5 therefore runs at priority 5, which is less than the priority of class 6 (which is 6).
  • the epochs of class 4 are similarly slower than those of both classes 5 and 6, and are slower than those of class 6 by a factor of 8.
  • the epochs of class 3 are slower than those of class 6 by a factor of 24.
  • the letters present in figure 4 represent which buffer of the class is output during each cycle (epoch). There may be 9 buffers, “a” through to “i” (buffer i is not shown).
  • priority 6 uses three buffers due to the short epoch times and the other priorities use two each as they have longer epoch times.
  • the cycle times requirements of different vendors may be based on their technology, which cannot be changed without substantial effort. These requirements may be based on hardware dependencies that are independent of the capabilities of the communication part of the device.
  • Servo drives from different vendors (A and B) may be working as part of the same network and for specific reasons the vendors may be limited in the choice of the period for the control loop.
  • the cycle time from Servo A1 to Servo A2 is 31.25 μs.
  • a least common divisor does not exist between the cycle times from Vendors A and B. In other words, the cycle time of one vendor is not a multiple of the cycle time of the other, and therefore the total time cannot be uniformly broken down into epochs that suit all vendors.
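A quick way to express the "common base" requirement described above is to test whether every cycle time in a set is an integer multiple of the smallest one. This interpretation, and the helper below, are illustrative assumptions rather than the patented method.

```python
from fractions import Fraction

# Illustrative check of the "common base" requirement: epochs can be
# segmented uniformly only if every cycle time is an integer multiple of
# the smallest cycle time in the set.
def has_common_base(cycle_times_us):
    times = sorted(Fraction(str(t)) for t in cycle_times_us)
    base = times[0]
    # Exact rational division avoids floating-point error for values
    # like 31.25 microseconds.
    return all((t / base).denominator == 1 for t in times)

print(has_common_base([31.25, 62.5, 125]))  # True: multiples of 31.25 us
print(has_common_base([125, 200]))          # False: 200 / 125 = 1.6
print(has_common_base([31.25, 200]))        # False: no vendor-wide base
```

The last two calls mirror the 200/125 μs example and the Vendor A/B case above, where no uniform epoch segmentation exists.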
  • the present disclosure is therefore designed to solve the deadline violation issue described above.
  • the general model described in this disclosure for configuring network epochs is more flexible than the state of the art, supports a wider range of application cycle times and achieves lower implementation complexity.
  • a data processing apparatus for implementing a hybrid scheduling scheme for scheduling data packets in a network, the data processing apparatus comprising one or more processors configured to process data packets, each data packet belonging to a class and each class having a predetermined first transmission priority associated therewith, the one or more processors being configured to: receive, for one or more of a plurality of classes, data packets of data to be transmitted; calculate a second indication of the transmission priority of each of the plurality of classes; and schedule the data packets of the plurality of classes for transmission in dependence on the first transmission priority and the calculated second indication of the transmission priority.
  • a data processing apparatus as described above wherein the second indication of the transmission priority is a calculated urgentness value based on a maximum latency of each of the plurality of classes, a current transmission epoch time and a start time for a transmission epoch for each of the plurality of classes.
  • This provides a means to supersede the strict priority associated to each class using a secondary metric that is based on the need to meet the transmission deadline and avoid deadline violation.
  • a data processing apparatus as described above, wherein the one or more processors is further configured to calculate a maximum number of data packets that each of the plurality of classes transmits, given by the expression (C · ωi · τ1)/s, wherein C is a transmission link capacity, ωi is the weight or quantum value of class i, τ1 is the epoch of class 1 and s is an average data packet size. This provides a means for determining the correct number of data packets that can be transmitted in each epoch.
  • This provides a means for transmitting the maximum number of data packets in each class in each epoch thus allowing transmission deadlines for data packets in each class to be met.
  • a data processing apparatus as described above, wherein the maximum number of data packets transmittable for each of the plurality of classes is determined at a time at which a transmission epoch of the class with the highest first transmission priority begins. This provides a means for ensuring the maximum number of data packets that can be transmitted in each class in each epoch is updated based on the current bandwidth capacity and the class priority.
  • a data processing apparatus as described above, wherein the one or more processors is configured to assign an epoch time in which to transmit the data packets of a class to each of the plurality of classes. This ensures that each class has a designated time within which to have data packets scheduled and to transmit said data packets.
  • the second indication of the transmission priority is a calculated urgentness value based on a maximum latency of each of the plurality of classes, a current transmission epoch time and a start time for a transmission epoch for each of the plurality of classes.
  • the method step of determining a schedule for transmission of data packets of the plurality of classes comprises: comparing the second indication for a first class of the plurality of classes and the second indication of a second class of the plurality of classes to determine whether they are the same; then if the second indication for a first class of the plurality of classes and the second indication of a second class of the plurality of classes are the same, scheduling transmission of the data packets of the plurality of classes based on the first transmission priority of each of the plurality of classes; or if the second indication for a first class of the plurality of classes and the second indication of a second class of the plurality of classes are not the same, scheduling transmission of the data packets of the plurality of classes based on the second indication of the transmission priority of each of the plurality of classes.
  • This provides a means to supersede the strict priority associated to each class using a secondary metric that is based on the need to meet the transmission deadline and avoid deadline violation.
  • the second indication of the transmission priority is a weight or quantum value assigned to each of the plurality of classes. This provides an alternate means of avoiding the transmission violation of data packets using an alternate metric to the urgentness.
  • weight or quantum value assigned to each of the plurality of classes allocates a transmission bandwidth for each of the plurality of classes. This provides sufficient bandwidth for transmission of the data packets within each class such that transmission deadline violation does not occur.
  • C is a transmission link capacity
  • ωi is the weight or quantum value of class i
  • τ1 is the epoch of class 1
  • s is an average data packet size.
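As a worked example of the expression (C · ωi · τ1)/s, the sketch below uses the values given elsewhere in this disclosure (C = 1 Gbps, s = 1000 bits, a class-1 epoch of 10 microseconds); the per-class weights are hypothetical percentages, not values from the patent.

```python
# Worked example of the maximum-packets expression (C * omega_i * tau_1) / s.
C_BITS_PER_US = 1_000   # 1 Gbps expressed in bits per microsecond
TAU_1_US = 10           # epoch of class 1 in microseconds (from the example)
S_BITS = 1000           # average packet size in bits (from the example)

omega_pct = {1: 20, 2: 40, 3: 30, 4: 10}  # hypothetical weights, in percent

bits_per_tau1 = C_BITS_PER_US * TAU_1_US  # 10 000 bits fit in one tau_1
max_packets = {i: bits_per_tau1 * w // 100 // S_BITS
               for i, w in omega_pct.items()}
print(max_packets)  # {1: 2, 2: 4, 3: 3, 4: 1}
```

The per-class maxima sum to 10 packets per τ1, matching the 10-packets-per-epoch capacity assumed in the example later in the disclosure.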
  • the method step of determining a schedule for transmission of data packets of the plurality of classes comprises: comparing a number of data packets to be transmitted in each of the plurality of classes to the maximum number of data packets transmittable for each of the plurality of classes; and if the number of data packets to be transmitted in each of the plurality of classes is less than or equal to the maximum number of data packets that can be transmitted for a class, then scheduling all the data packets of that class for transmission, if the number of data packets to be transmitted in each of the plurality of classes is more than the maximum number of data packets that can be transmitted for a class, scheduling the maximum number of data packets that can be transmitted for each class for transmission.
  • This provides a means for transmitting the maximum number of data packets in each class in each epoch thus allowing transmission deadlines for data packets in each class to be met.
  • each of the plurality of classes is assigned an epoch time in which to transmit the data packets of that class. This ensures that each class has a designated time within which to have data packets scheduled and to transmit said data packets.
  • Fig. 1 illustrates an architecture of a Paternoster scheduling scheme.
  • Fig. 2 illustrates the timing and queuing in a Paternoster scheduling scheme.
  • Fig. 3 illustrates an example of class data packet scheduling in a Paternoster scheduling scheme.
  • Figs. 3a to 3d illustrate the scheduling priority of data packets and the occurrence of deadline violation in four classes in four epochs of a Paternoster scheduling scheme.
  • Fig. 4 illustrates how multiple different epoch values may be applied to a plurality of classes.
  • Fig. 5 illustrates a use-case where drives have applications that do not have common epoch times.
  • Fig. 6 illustrates a general way to configure the epochs of service classes for the use-case where application cycle times do not have a common base.
  • Fig. 7 illustrates an example of class data packet scheduling using a hybrid parameter scheduling scheme that includes urgentness.
  • Figs. 7a to 7d illustrate the scheduling priority of data packets in four classes in four epochs of a hybrid scheduling scheme of the first embodiment of this disclosure.
  • Fig. 8a illustrates a simulation result of the transmission delay for each class using Omnest for a scheduling scheme that relies on strict priority only.
  • Fig. 8b illustrates a simulation result of the transmission delay for each class using Omnest for a hybrid scheduling scheme that is based on strict priority and an urgentness parameter.
  • Fig. 8c illustrates a simulation result of the service time deviation for each class using Omnest for a scheduling scheme that is based on strict priority only.
  • Fig. 8d illustrates a simulation result of the service time deviation for each class using Omnest for a hybrid scheduling scheme that is based on strict priority and a second indication of the second embodiment of this disclosure.
  • Fig. 9 illustrates an example of class data packet scheduling using a hybrid parameter scheduling scheme that includes assigning weights to each class.
  • Figs. 9a to 9d illustrate the scheduling priority of data packets in four classes in four epochs of a hybrid scheduling scheme of the second embodiment of this disclosure.
  • the embodiments of this disclosure address the issues discussed in past scheduling methods.
  • the proposed HSSUW (Hybrid Scheduling using Strict priority and packet Urgentness or transmission Weight) scheme is a methodology that is able to support latency guarantee for any type of epoch configuration. It provides more flexibility for cycle time configuration at the end-station in an industrial networking environment without needing to use the previously demanded rule of “multiple integers” between network and application cycles as described above.
  • Fig. 3 of the present disclosure illustrates an example of scheduling data packets for transmission using only the strict priority associated with each class.
  • Each class is assigned an epoch time τi (where i is the priority class).
  • Lower priority classes use a larger epoch, such as those of class 4, while higher priority classes use a smaller epoch, i.e., τ1 ≤ ... ≤ τ8, where class 1 has the highest priority as defined in this disclosure.
  • Each priority class on each egress port is configured with four queues, prior, current, next, and last. During each epoch, only queue prior is transmitting packets, the other three queues are receiving packets. The queues are configured to rotate, and this occurs at the beginning of every epoch.
  • the link capacity C is 1 Gbps
  • the epoch is τi per class i and the packet size is s, which is assumed to be 1000 bits.
  • a further assumption in this example is that the number of data packets that can be transmitted per epoch is 10.
  • each class has an associated epoch, in this case, epochs of different lengths of time, for example class 1 data has an epoch of 10 microseconds.
  • the data processing apparatus may determine how many packets of data can be transmitted in each epoch. This is further demonstrated in figures 3a to 3d, which shows the four consecutive epochs used to transmit packets of data.
  • the maximum data packets transmittable by each class are transmitted in each epoch.
  • the maximum number of data packets of class 1 that can be transmitted is two, for class 2 the maximum number of data packets is four, for class 3 the maximum number of data packets transmittable is nine and for class 4 the maximum number of data packets that can be transmitted is twelve.
  • the maximum number of data packets per class are transmitted in each epoch as seen in figure 3a. As shown in this figure, this leads to two data packets from class 1 being scheduled, four data packets from class 2 being scheduled and four data packets from class 3 being scheduled for transmission. As only 10 data packets can be transmitted per epoch in this example and the data packets are scheduled based on strict priority, the data packets from class 4 are not scheduled in the first epoch. In figure 3b, two more data packets from class 1 are scheduled for transmission, but since all four data packets that can be scheduled from class 2 have been scheduled for transmission and transmitted in the previous epoch of figure 3a, no further class 2 data packets can be scheduled at this stage. As such there is more capacity for class 3 and 4 data packets that have not yet been scheduled. Therefore, since in the example there are five class 3 data packets remaining, they are scheduled for transmission and the remaining 3 slots are allocated to class 4 data packets.
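The strict-priority fill walked through above can be sketched as follows. The per-class maxima and the 10-packet epoch budget come from the example; the function itself is an illustrative reconstruction, not the patented implementation.

```python
# Illustrative reconstruction of the strict-priority fill in figures 3a/3b:
# classes are served from highest priority (class 1) down, each capped by
# its per-epoch maximum, until the 10-packet epoch budget is exhausted.
def schedule_epoch(backlog, per_epoch_max, link_budget=10):
    scheduled = {}
    remaining = link_budget
    for cls in sorted(backlog):  # lower class number = higher priority
        n = min(backlog[cls], per_epoch_max[cls], remaining)
        scheduled[cls] = n
        remaining -= n
    return scheduled

PER_EPOCH_MAX = {1: 2, 2: 4, 3: 9, 4: 12}  # maxima from the example above

# First epoch (figure 3a): class 4 gets nothing under strict priority.
print(schedule_epoch({1: 2, 2: 4, 3: 9, 4: 12}, PER_EPOCH_MAX))
# -> {1: 2, 2: 4, 3: 4, 4: 0}

# Second epoch (figure 3b): class 2 is drained, freeing slots for 3 and 4.
print(schedule_epoch({1: 2, 2: 0, 3: 5, 4: 12}, PER_EPOCH_MAX))
# -> {1: 2, 2: 0, 3: 5, 4: 3}
```

Repeating this fill epoch after epoch shows how class 4 keeps being pushed back under pure strict priority, which is the deadline-violation problem the hybrid scheme addresses.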
  • a further parameter has been added to specify the urgentness of data packets in addition to the strict priority already associated with the data packets of each class.
  • data packet scheduling between different classes is based on the dual parameters of urgentness and priority.
  • the urgentness associated with each class may be based on the maximum latency of each of the plurality of classes. In other words, the maximum time within which data packets from a class can be transmitted and meet the transmission deadline.
  • the urgentness parameter μi that is assigned to each class is defined in terms of the maximum latency of class i, the current transmission epoch time and the start time of the class’s transmission epoch.
  • This new parameter of urgentness μi is an example of a second indication of the transmission priority (second transmission priority) that is assigned to each class and used in addition to the strict priority pi of each class (traffic class) during scheduling.
  • the strict priority of each class is an example of a first transmission priority (first indication of the transmission priority) .
  • the parameters μi, pi form a dual parameter that can be used to schedule the data packets to be transmitted.
  • the apparatus may compare the second indications of each class and the first indications of each class.
  • the one or more processors of the apparatus may compare the second indication, e.g., the urgentness μi, of each class to each other and determine that if the urgentness μi of two classes is the same, the data packets of those classes will be scheduled based on the first transmission priorities, e.g., the strict priorities pi.
  • the one or more processors or a scheduler are configured to overwrite the first transmission priorities and schedule the data packets of the class with the lower second indication ahead of the data packets of the class with the higher second indication, regardless of the strict priorities of each class.
  • the urgentness parameter would overwrite the strict priority parameter, e.g., the second indication would overwrite the first indication of transmission priority.
  • the lower the urgentness parameter the more urgent it is to transmit the data packet as the epoch for the class that contains the data packet started the greatest time ago and thus normally has the shortest time remaining to be scheduled and transmitted in order to meet the transmission deadline.
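The dual-parameter comparison described above can be sketched as follows. The disclosure defines μi from a class's maximum latency, the current epoch time and the epoch start time, but its exact formula is not reproduced in this text; the form below (time remaining before the deadline, so that a smaller value means more urgent) is an assumption consistent with that description, and all numeric values are illustrative.

```python
# Assumed form of the urgentness parameter mu_i: time remaining before the
# class's deadline (smaller value = epoch started longer ago = more urgent).
def urgentness(max_latency_us, now_us, epoch_start_us):
    return max_latency_us - (now_us - epoch_start_us)

def schedule_key(cls):
    # Sort by urgentness first; ties fall back to strict priority
    # (lower class number = higher strict priority in this disclosure).
    return (cls["mu"], cls["priority"])

classes = [
    {"name": "class 3", "priority": 3,
     "mu": urgentness(max_latency_us=20, now_us=30, epoch_start_us=20)},
    {"name": "class 4", "priority": 4,
     "mu": urgentness(max_latency_us=35, now_us=30, epoch_start_us=0)},
]
order = sorted(classes, key=schedule_key)
print([c["name"] for c in order])  # ['class 4', 'class 3']

# Equal urgentness: strict priority decides, so class 1 goes first.
tie = sorted([{"name": "class 2", "priority": 2, "mu": 10},
              {"name": "class 1", "priority": 1, "mu": 10}],
             key=schedule_key)
print([c["name"] for c in tie])  # ['class 1', 'class 2']
```

Here class 4's epoch started longest ago, so its urgentness value is smaller and it overtakes class 3 despite its lower strict priority, which is the behaviour the fourth-epoch example below relies on.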
  • Fig. 7 illustrates an example of the hybrid scheduling scheme of this disclosure that introduces the second indication of urgentness in addition to the strict priority scheme that is the basis for Paternoster scheduling. It should be understood that the number of classes, epoch times etc of figure 7 are used by way of example only to demonstrate how the data processing apparatus determines which data packets to schedule for transmission. The configuration of the network epochs, bandwidth reservation and the data packet sizes are the same as those described above in relation to figure 3.
  • the first three epochs proceed in the same manner as those described above in relation to figure 3a to 3c as the urgentness values are the same, however the fourth network epoch differs from that described above in relation to figure 3d, in particular in how the class 4 data packets are scheduled in order to avoid deadline violation.
  • the epoch τ4 of the class 4 data packets started 30 μs ago.
  • the one or more processors will then schedule which of the class 4 or class 3 data packets will be scheduled first based on these calculations of the second indication of each and the strict priority of each class. In this case, the calculated second indication of class 4 is determined to be smaller than that of class 3 discussed above.
  • class 4 data packets are scheduled before the transmission of class 3 packets in the fourth epoch in figure 7d. This ensures that all the data packets are delivered before their deadlines and no deadline violation occurs.
  • without the second indication, the one or more processors of the data processing apparatus would schedule the class 3 data packets before the class 4 data packets based on the first indication (strict priority) of each class.
  • Fig. 8a to 8d illustrates a simulation produced using Omnest that demonstrates the results of employing a data processing apparatus that schedules data packets for transmission using only a first transmission priority (figures 8a and 8c) and with both a first transmission priority (strict priority) and a second transmission priority (urgentness) as in figures 8b and 8d.
  • the dotted line represents a threshold at which deadline violation occurs.
  • the bounds of each of the blocks seen in the figures represent service time deviation (jitter), which is defined as the requested latency of the network minus the actual delivery time.
  • a deadline violation will result in a negative jitter as the requested latency (period to transmit the data packet) is less than the actual time taken to transmit the data packet e.g., in class 4, and thus the data packets from this class will miss their transmission deadline because they were not scheduled appropriately.
  • the original Paternoster scheduling algorithm that uses only strict priority not only causes deadline violation for class 4 data packets, but incurs larger service time deviation (e.g., jitter) as class 4 data packets are not scheduled based on strict priority in a timely manner and must wait for additional epochs for them to be transmitted.
  • figure 8c demonstrates the negative service time deviation of class 4 data packets that arises because the data packets were not scheduled and transmitted within the requested latency of the network class, and thus the deadline for transmitting these data packets was missed.
  • the class 4 packets are not only delivered within their deadlines but experience less jitter.
  • in contrast, figures 8b and 8d do not demonstrate any deadline violation in the scheduling and transmission of the class 4 data packets, as evidenced by the reduced millisecond delay boundaries for each of the classes.
  • the service time deviation of each of the classes is positive, particularly that of class 4 data packets signifying that the data packets are scheduled and transmitted within the deadline and no transmission deadline violation occurs when scheduling is performed based on the urgentness parameter and the strict priority.
  • the one or more processors of the data processing apparatus is configured to implement a hybrid scheduling scheme in which a second indication of the transmission priority is a weight ωi assigned to each class i.
  • This scheduling implementation operates at either the data-packet level or the bit level.
  • a weight ωi (second indication of transmission priority) is assigned to each class i.
  • data packet scheduling is performed in accordance with τ1, the epoch of service class 1.
  • Figs. 9 and 9a to 9d illustrate the transmission order of data packets in each epoch using the approach of the second embodiment in which the second indication includes a calculation based on the weight assigned to each class.
  • the weight assigned to each class may allocate a transmission bandwidth to each class within which the data packets can be scheduled and/or transmitted.
  • WRR: Weighted Round Robin; DRR: Deficit Round Robin.
  • ωi may be a variable depending on the dynamic assignment of data packet priority on each hop, where the hop count is defined as the number of intermediate networking nodes between a source and destination pair.
  • the weight ωi (or quantum value) defines the fraction of bandwidth reservation allocated to class i data packet traffic.
  • the one or more processors is configured to calculate a maximum number of data packets that each of the plurality of classes transmits during τ1, given by the expression (C · ωi · τ1)/s, wherein C is a transmission link capacity, ωi is the weight or quantum value of class i, τ1 is the epoch of class 1 and s is an average data packet size.
  • the one or more processors then schedules at most (C · ωi · τ1)/s data packets or C · ωi · τ1 bits of data for transmission.
  • C · ωi · τ1 defines the maximal number of bits that a class is allowed to transmit during τ1.
  • scheduling is based on a per-τ1 interval. This means that the maximum number of packets needs to be calculated in accordance with τ1.
  • the scheduler schedules every data packet (or bits) from that class.
  • the one or more processors are configured to, in determining a schedule for transmission of data packets of the plurality of classes, compare a number of data packets to be transmitted in each of the plurality of classes to the maximum number of data packets transmittable for each of the plurality of classes.
  • This calculation of the maximum number of data packets to be transmitted and/or the comparison to the number of data packets that can be transmitted during each ⁇ 1 can be thought of as the second indication of the transmission priority.
  • the maximum number of data packets transmittable for each of the plurality of classes is determined at a time at which a transmission epoch of the class with the highest first transmission priority begins.
  • the highest first transmission priority represents class 1, e.g., the class to be scheduled first by strict priority.
  • the one or more processors will schedule the data packets based on their strict priority for ordering the data packets and in consideration of the second indication of transmission priority e.g., the number of data packets compared to the maximum data packets to be transmitted in each class.
  • the one or more processors of this embodiment are configured to determine if the number of data packets to be transmitted in each of the plurality of classes is less than or equal to the maximum number of data packets that can be transmitted for a class, then schedule all the data packets of that class for transmission. However, if the number of data packets to be transmitted in each of the plurality of classes is more than the maximum number of data packets that can be transmitted for a class, the one or more processors will schedule the maximum number of data packets that can be transmitted for each class for transmission. This is demonstrated in figures 9a to 9d wherein data packets from all classes are transmitted in each epoch with the maximum number of data packets transferred in each class in each epoch.
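The min-of-backlog-and-maximum rule just described can be sketched as below. The weights are hypothetical percentages; C, τ1 and s reuse the example values from earlier in the disclosure, and the function is an illustrative reconstruction of the second embodiment's logic.

```python
# Sketch of the weight-based variant: during each tau_1 interval a class may
# send at most (C * omega_i * tau_1) / s packets; a class whose backlog fits
# within its maximum sends everything, otherwise only its maximum.
def schedule_interval(backlog, weights_pct,
                      c_bits_per_us=1000, tau1_us=10, s_bits=1000):
    scheduled = {}
    for cls, n_waiting in backlog.items():
        max_pkts = c_bits_per_us * weights_pct[cls] * tau1_us // 100 // s_bits
        scheduled[cls] = min(n_waiting, max_pkts)
    return scheduled

WEIGHTS_PCT = {1: 20, 2: 40, 3: 30, 4: 10}  # hypothetical weights, percent

# Unlike strict priority, every class transmits in every tau_1 interval.
print(schedule_interval({1: 1, 2: 9, 3: 2, 4: 5}, WEIGHTS_PCT))
# -> {1: 1, 2: 4, 3: 2, 4: 1}
```

Because each class is guaranteed its weighted share of every τ1 interval, the lowest-priority class is never starved, which is how this embodiment avoids the deadline violations seen under strict priority alone.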
  • weights may increase the complexity of the system slightly as the one or more processors may need to update the weights whenever bandwidth reservation of service classes changes. Furthermore, the one or more processors are also required to compute how many data packets/bits from a class can be transmitted.
  • the scheduling schemes described in the above embodiments that are implemented using a data processing apparatus comprised of one or more processors may also be encapsulated by corresponding methods that could be implemented by the apparatus in order to provide said scheduling schemes.
  • this disclosure also includes a method of data processing for scheduling the transmission of data packets in a network, each data packet belonging to a class and each class having a predetermined first transmission priority associated therewith.
  • the method of this disclosure includes receiving, for one or more of a plurality of classes, data packets to be transmitted. These are part of each class of data packets to be transmitted.
  • the method then calculates a second indication of the transmission priority of each of the plurality of classes. As described in the above embodiments, this may be an urgentness parameter or be based on the weightings assigned to each class as in the second embodiment.
  • the method further comprises the step of determining a schedule for transmission of the data packets of the plurality of classes in dependence on the first transmission priority (strict priority) and the second indication of the transmission priority (as described in each embodiment). Further method steps are apparent from the functions of the one or more processors of the data processing apparatus described above.
  • the embodiments of the present disclosure provide latency and jitter control in industrial networks while achieving lower implementation complexity than known scheduling schemes.
  • the embodiments further solve the deadline violation issue in the pulsed queue model proposed by the IEEE 802.1Q WG and can flexibly support general configuration of a network cycle (or network epoch) rather than the previously used “multiple integer” relationship between a network cycle and an application cycle.
  • the embodiments described herein are demonstrated with specific examples, but the apparatus and method can be applied to support a wider range of application cycles as will be readily understood.
  • the embodiments of the present disclosure are able to simultaneously guarantee different latency requirements ranging from several hundreds of microseconds to ⁇ 100ms.

Abstract

The present disclosure provides a data processing apparatus for implementing a hybrid scheduling scheme for scheduling data packets in a network, the data processing apparatus comprising one or more processors configured to process data packets, each data packet belonging to a class and each class having a predetermined first transmission priority associated therewith, and to: receive, for one or more of a plurality of classes, data packets to be transmitted; calculate a second indication of the transmission priority of each of the plurality of classes; and schedule the data packets of the plurality of classes for transmission in dependence on the first transmission priority and the calculated second indication of the transmission priority.

Description

A DEVICE AND METHODOLOGY FOR HYBRID SCHEDULING USING STRICT PRIORITY AND PACKET URGENTNESS FIELD OF THE INVENTION
This invention relates to an apparatus and method for scheduling network packets by employing a hybrid scheduling approach that introduces an additional parameter of urgentness or weighting/quantum values that must be considered alongside strict priority when scheduling data packets for transmission.
BACKGROUND
In industrial networking, 10% to 20% of traffic is isochronous traffic with the tightest deadline and jitter requirements, and 80% to 90% of the traffic is real-time traffic where bounded delay/jitter is required. A general way of configuring epochs will cause deadline violation, i.e., packets cannot be delivered before their deadlines. As such, it is difficult to ensure that data transmission meets the required scheduling deadlines. Traditional scheduling schemes such as Generalised Processor Sharing, Fair Queuing, Weighted Fair Queuing, Round Robin, Deficit Round Robin or Weighted Round Robin are designed to improve fairness between different flows of data transmissions. In order to achieve this, these schemes normally rely on mathematical tools such as queuing theory or network calculus to compute the average waiting time or maximum latency bound of the scheduling scheme. It is well known that the external arrival process needs to be Poissonian in queueing theory (e.g., in the latency analysis of the Probability Priority scheduler), which cannot accurately describe real data traffic arrivals. On the other hand, latency bounds calculated by network calculus are either too loose (broad), compromise network utilization, or are too costly to compute.
An alternative way to address the scheduling issue is to employ algorithms that are specifically designed to provide an end-to-end (E2E) latency guarantee. Examples of such known algorithms include RSCP/WFQ/VC/Jitter-EDD/UBS. However, the scheduling optimality of these schemes is usually achieved at the cost of increased implementation complexity (e.g., per-flow queuing, sorted queues, a large number of FIFO queues, maintenance of flow state, or time/frequency synchronization), and as such they cannot be supported by existing commodity hardware.
In an attempt to solve the scheduling issues described above, the Paternoster scheduling scheme was proposed to the IEEE 802.1 community. Paternoster is a simple real-time packet bandwidth reservation, policing, queuing, and transmission scheduling algorithm that provides bounded end-to-end delays without requiring clock synchronization between network nodes. This was further developed to include the concept of multiple cyclic queuing and forwarding, a variant of the Paternoster scheduling algorithm previously proposed. The Paternoster scheduling algorithm is compatible with a Differentiated Services architecture where packets are classified into 8 priority classes. Unlike the traditional Differentiated Services architecture (where a single queue is associated with each priority class), the Paternoster scheduling algorithm defines 4 output queues that are associated with each priority class on each output port, see figure 1. These 4 queues are defined as prior, current, next, and last. Each priority class is associated with a unique epoch time. Queuing of data packets to be transmitted is performed in accordance with the bandwidth reservation for each priority class. During each epoch of data transmission, only the prior queue can transmit data packets. The prior queue is used for transmission only and it does not receive packets. Instead, received data packets that are associated with any given reservation are added to the "current" queue. This continues for additional received data until the addition of a data packet would exceed that reservation's bandwidth allocation for an epoch. In such a case, the data packets that would exceed the reservation's bandwidth are added to the next and last queues respectively until each of their reservation's bandwidths are also full.
Once each of the queues have been filled by data packets to be transmitted in future epochs, e.g., the “last” queue is completely reserved, any additional frames of data packets that are received are dropped until the next epoch begins and the queues rotate. Time synchronization is not required in Paternoster.
An example of queue rotation utilised in Paternoster scheduling is illustrated in figure 2. When a new epoch begins, the “current” queue becomes the “prior” queue, the “next” and “last” queues (and their remaining allocation for each reservation) become “current” , and “next” respectively. The previous prior queue (which should now be  empty because the data packets have been transmitted in the previous epoch) becomes the new “last” queue. This Paternoster operation repeats at each epoch such that the four queues alternate during each epoch. However, in Paternoster scheduling the packets from the different priority classes within each queue are scheduled in accordance with a strict priority scheduling scheme that is intrinsic to each class.
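The rotation just described can be sketched as a simple re-labelling of the four buffers; the function name and dict representation below are illustrative only:

```python
def rotate_queues(queues):
    """Rotate the four Paternoster queues at the start of a new epoch.

    queues maps the role names "prior", "current", "next" and "last" to
    the underlying packet buffers. After rotation, "current" becomes
    "prior", "next" becomes "current", "last" becomes "next", and the
    (now empty) old "prior" buffer becomes the new "last".
    """
    return {
        "prior": queues["current"],
        "current": queues["next"],
        "next": queues["last"],
        "last": queues["prior"],  # emptied by transmission in the past epoch
    }

# Buffers are labelled a-d so the rotation is visible.
queues = {"prior": "a", "current": "b", "next": "c", "last": "d"}
print(rotate_queues(queues))
# {'prior': 'b', 'current': 'c', 'next': 'd', 'last': 'a'}
```

Repeating the call once per epoch reproduces the four-way alternation of figure 2.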
The duration Δ of an epoch does not have to be the same for each service class. The disadvantage of this scheduling scheme, however, is that the original Paternoster algorithm still allows for deadline violation, as shown in figures 3, and 3a to 3d. In order to fix this issue, one option would be to ensure that the epochs of the priority classes are configured as integer multiples of Δ_1, where Δ_1 is the smallest epoch, associated with the highest priority class, as shown in figure 4.
Fig. 4 illustrates using multiple values for the epoch Δ_i of each class on a single output port. This requires a common base of epoch times between classes in order to segment the total time in a uniform manner. In an example wherein the cycle times of two classes are 200 microseconds and 125 microseconds, they cannot both be accommodated using multiple epoch values as there is no common base. Fig. 4 illustrates the case where four epoch values run simultaneously as in figure 3. In the example of figure 4, class 6 runs at the highest priority, e.g., has the shortest epoch and can be called for transmission the fastest. The classes are shown on the right hand side of figure 4. The epochs of class 5 in figure 4 are called four times slower than those of class 6, as demonstrated by one epoch of class 5 taking the same time as four epochs of class 6; the buffer of class 5 therefore runs at priority 5, which is less than the priority of class 6 (which is 6). The epochs of class 4 are similarly slower than those of both classes 5 and 6, and are slower than those of class 6 by a factor of 8. The same is true of class 3, which is a factor of 24 slower than class 6. The letters present in figure 4 represent which buffer of each class is output during each cycle (epoch). There may be 9 buffers, "a" through to "i" (buffer i is not shown). In the example of figure 4, priority 6 uses three buffers due to its short epoch times and the other priorities use two each, as they have longer epoch times.
The problem with this approach however is that such a configuration is not always possible when the application cycle time used by different vendors is not common to each vendor and does not have a common base.
An example of vendors with different application cycle times can be seen in figure 5. The cycle time requirements of different vendors may be based on their technology, which cannot be changed without substantial effort. These requirements may be based on hardware dependencies that are independent of the capabilities of the communication part of the device. Servo drives from different vendors (A and B) may be working as part of the same network, and for specific reasons the vendors may be limited in the choice of the period for the control loop. The cycle time from Servo A1 to Servo A2 is 31.25 μs. A least common divisor does not exist between the cycle times of Vendors A and B. In other words, the cycle times of one vendor are not multiples of the cycle times of the other vendor, and therefore the cycle time cannot be uniformly broken down into epochs that suit all vendors. As such, the configuration of the network cycles (or epochs) is not possible using the integer multiples of Δ_1 model of figure 4. Vendor A thus needs to modify the cycle time of its application, incurring longer configuration time and additional operational overhead, in order to be compatible with the cycle time of vendor B. From the configuration of figure 5, the following communication relations are expected to be possible:
· Servodrive A1 ←→ Servodrive A2 at 31.25 microseconds
· Servodrive A1 ←→ Servodrive A2 at 50 microseconds
· Controller ←→ Servodrive A1 at 125 microseconds
· Controller ←→ Servodrive B1 at 200 microseconds
· Servodrive ←→ B1 at 1 millisecond.
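The incompatibility can be checked mechanically: under the integer-multiple model of figure 4, every cycle time must be an integer multiple of the smallest epoch. A sketch using exact fractions to avoid floating-point error (the cycle times, in microseconds, are those listed above; the function name is illustrative):

```python
from fractions import Fraction

def integer_multiple_violations(cycle_times):
    """Return the cycle times that are NOT integer multiples of the smallest one."""
    base = min(cycle_times)
    return [c for c in cycle_times if (c / base).denominator != 1]

# 31.25 us is represented exactly as 125/4 us.
cycles = [Fraction(125, 4), Fraction(50), Fraction(125), Fraction(200), Fraction(1000)]
print([float(c) for c in integer_multiple_violations(cycles)])
# [50.0, 200.0] -- neither is an integer multiple of 31.25 us
```

Because at least one cycle time fails the check, the integer-multiple epoch configuration cannot accommodate this use-case.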
A general approach to configure the epochs for such a use-case is presented in figure 6. In this general approach, where the different epochs (cycles) for data exchange are not multiples of each other, e.g., do not have a common base but are instead fractions of a common base, it is not possible to use the integer epoch model discussed above. As such, even with such a general approach, deadline violation of data packets will still occur and data packets will be delayed.
The present disclosure is therefore designed to solve the deadline violation issue described above. The general model described in this disclosure to configure network epochs is more flexible than the state of the art, supports a wider range of application cycle times, and achieves lower implementation complexity.
SUMMARY
A data processing apparatus for implementing a hybrid scheduling scheme for scheduling data packets in a network, the data processing apparatus comprising one or more processors configured to process data packets, each data packet belonging to a class and each class having a predetermined first transmission priority associated therewith, and to: receive, for one or more of a plurality of classes, data packets to be transmitted; calculate a second indication of the transmission priority of each of the plurality of classes; and schedule the data packets of the plurality of classes for transmission in dependence on the first transmission priority and the calculated second indication of the transmission priority. This provides latency and jitter control in industrial networks with deterministic requirements by solving the issue of deadline violation.
A data processing apparatus as described above, wherein the first transmission priority is based on the strict priority of each of the plurality of classes. This provides a first metric by which to classify the classes and schedule the data packets for transmission.
A data processing apparatus as described above, wherein the second indication of the transmission priority is a calculated urgentness value based on a maximum latency of each of the plurality of classes, a current transmission epoch time and a start time for a transmission epoch for each of the plurality of classes. This provides a secondary means of prioritising the classes and data packets in order to overwrite the initial prioritisation to meet the transmission deadlines.
A data processing apparatus as described above, wherein the one or more processors is configured to calculate the second indication using the expression

γ_i = (L_i − (t_now − t_i^j)) / Δ_1,

where L_i is the maximum latency of class i, t_now is t_1^k, which represents a time at which a transmission epoch k starts in class 1, t_i^j is a time at which a transmission epoch j in a class i starts, and Δ_1 is the epoch of class 1. This provides a secondary means of prioritising the classes and data packets in order to overwrite the initial prioritisation to meet the transmission deadlines.
A data processing apparatus as described above, wherein the one or more processors is further configured to, in determining a schedule for transmission of data packets of the plurality of classes, compare the second indication for a first class of the plurality of classes and the second indication of a second class of the plurality of classes to determine whether they are the same; then if the second indication for a first class of the plurality of classes and the second indication of a second class of the plurality of classes are the same, schedule transmission of the data packets of the plurality of classes based on the first transmission priority of each of the plurality of classes; or if the second indication for a first class of the plurality of classes and the second indication of a second class of the plurality of classes are not the same, schedule transmission of the data packets of the plurality of classes based on the second indication of the transmission priority of each of the plurality of classes. This provides a means to supersede the strict priority associated to each class using a secondary metric that is based on the need to meet the transmission deadline and avoid deadline violation.
A data processing apparatus as described above, wherein the second indication of the transmission priority is a weight or quantum value assigned to each of the plurality of classes. This provides an alternate means of avoiding the transmission violation of data packets using an alternate metric to the urgentness.
A data processing apparatus as described above, wherein the weight or quantum value assigned to each of the plurality of classes allocates a transmission bandwidth for each of the plurality of classes. This provides sufficient bandwidth for transmission of the data packets within each class such that transmission deadline violation does not occur.
A data processing apparatus as described above, wherein the one or more processors is further configured to calculate a maximum number of data packets that each of the plurality of classes transmits, given by the expression (C · ω_i · Δ_1)/s, wherein C is a transmission link capacity, ω_i is the weight or quantum value of class i, Δ_1 is the epoch of class 1 and s is an average data packet size. This provides a means for determining the correct number of data packets that can be transmitted in each epoch.
A data processing apparatus as described above, wherein the one or more processors is further configured to, in determining a schedule for transmission of data packets of the plurality of classes, compare a number of data packets to be transmitted in each of the plurality of classes to the maximum number of data packets transmittable for each of the plurality of classes; and if the number of data packets to be transmitted in each of the plurality of classes is less than or equal to the maximum number of data packets that can be transmitted for a class, then schedule all the data packets of that class for transmission, if the number of data packets to be transmitted in each of the plurality of classes is more than the maximum number of data packets that can be transmitted for a class, schedule the maximum number of data packets that can be transmitted for each class for transmission. This provides a means for transmitting the maximum number of data packets in each class in each epoch thus allowing transmission deadlines for data packets in each class to be met.
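The two determinations above, the per-class maximum (C · ω_i · Δ_1)/s and the min-rule for scheduling, can be sketched as follows; the function names and example figures are illustrative only:

```python
def max_packets(C, w_i, delta_1, s):
    """Maximum packets class i may send per class-1 epoch: (C * w_i * delta_1) / s."""
    return int(C * w_i * delta_1 / s)

def schedule_count(queued, limit):
    """Min-rule: send the whole backlog if it fits, otherwise send up to the limit."""
    return min(queued, limit)

# Example: 1 Gbps link, 20% weight, 10-microsecond epoch, 1000-bit average packet.
limit = max_packets(C=1e9, w_i=0.2, delta_1=10e-6, s=1000)
print(limit)                     # 2
print(schedule_count(1, limit))  # 1 -- backlog fits, schedule everything
print(schedule_count(5, limit))  # 2 -- backlog exceeds the per-class maximum
```

Applying this rule to every class each epoch yields the behaviour of figures 9a to 9d, where every class transmits in every epoch.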
A data processing apparatus as described above, wherein the maximum number of data packets transmittable for each of the plurality of classes is determined at a time at which a transmission epoch of the class with a highest first transmission priority begins. This provides a means for ensuring that the maximum number of data packets that can be transmitted in each class in each epoch is updated based on the current bandwidth capacity and the class priority.
A data processing apparatus as described above, wherein the one or more processors is configured to assign an epoch time in which to transmit the data packets of a class to each of the plurality of classes. This ensures that each class has a designated time within which to have data packets scheduled and to transmit said data packets.
A method of data processing for scheduling the transmission of data packets in a network, each data packet belonging to a class and each class having a predetermined first transmission priority associated therewith, the method comprising: receiving, for one or more of a plurality of classes, data packets of data to be transmitted; calculating a second indication of the transmission priority of each of the  plurality of classes; determining a schedule for transmission of the data packets of the plurality of classes in dependence on the first transmission priority and the second indication of the transmission priority. This provides latency and jitter control in industrial networks with deterministic requirement by solving the issue of deadline violation.
A method as described above, wherein the first transmission priority is based on the strict priority of each of the plurality of classes. This provides a first metric by which to classify the classes and schedule the data packets for transmission.
A method as described above, wherein the second indication of the transmission priority is a calculated urgentness value based on a maximum latency of each of the plurality of classes, a current transmission epoch time and a start time for a transmission epoch for each of the plurality of classes. This provides a secondary means of prioritising the classes and data packets in order to overwrite the initial prioritisation to meet the transmission deadlines.
A method as described above, wherein the second indication is calculated using the expression

γ_i = (L_i − (t_now − t_i^j)) / Δ_1,

where L_i is the maximum latency of class i, t_now is t_1^k, which represents a time at which a transmission epoch k starts in class 1, t_i^j is a time at which a transmission epoch j in a class i starts, and Δ_1 is the epoch of class 1. This provides a secondary means of prioritising the classes and data packets in order to overwrite the initial prioritisation to meet the transmission deadlines.
A method as described above, wherein the method step of determining a schedule for transmission of data packets of the plurality of classes comprises: comparing the second indication for a first class of the plurality of classes and the second indication of a second class of the plurality of classes to determine whether they are the same; then if the second indication for a first class of the plurality of classes and the second indication of a second class of the plurality of classes are the same, scheduling transmission of the data packets of the plurality of classes based on the first transmission priority of each of the plurality of classes; or if the second indication for a first class of the plurality of classes and the second indication of a second class of the plurality of classes are not the same, scheduling transmission of the data packets of the plurality of classes based on the second indication of the transmission priority  of each of the plurality of classes. This provides a means to supersede the strict priority associated to each class using a secondary metric that is based on the need to meet the transmission deadline and avoid deadline violation.
A method as described above, wherein the second indication of the transmission priority is a weight or quantum value assigned to each of the plurality of classes. This provides an alternate means of avoiding the transmission violation of data packets using an alternate metric to the urgentness.
A method as described above, wherein the weight or quantum value assigned to each of the plurality of classes allocates a transmission bandwidth for each of the plurality of classes. This provides sufficient bandwidth for transmission of the data packets within each class such that transmission deadline violation does not occur.
A method as described above, wherein a maximum number of data packets that each of the plurality of classes transmits in Δ_1 is given by the expression (C · ω_i · Δ_1)/s, wherein C is a transmission link capacity, ω_i is the weight or quantum value of class i, Δ_1 is the epoch of class 1 and s is an average data packet size. This provides a means for determining the correct number of data packets that can be transmitted in each epoch.
A method as described above, wherein the method step of determining a schedule for transmission of data packets of the plurality of classes comprises: comparing a number of data packets to be transmitted in each of the plurality of classes to the maximum number of data packets transmittable for each of the plurality of classes; and if the number of data packets to be transmitted in each of the plurality of classes is less than or equal to the maximum number of data packets that can be transmitted for a class, then scheduling all the data packets of that class for transmission, if the number of data packets to be transmitted in each of the plurality of classes is more than the maximum number of data packets that can be transmitted for a class, scheduling the maximum number of data packets that can be transmitted for each class for transmission. This provides a means for transmitting the maximum number of data packets in each class in each epoch, thus allowing transmission deadlines for data packets in each class to be met.
A method as described above, wherein the maximum number of data packets transmittable for each of the plurality of classes is determined at a time at which a transmission epoch of the class with a highest first transmission priority begins. This provides a means for ensuring that the maximum number of data packets that can be transmitted in each class in each epoch is updated based on the current bandwidth capacity and the class priority.
A method as described above, wherein each of the plurality of classes is assigned an epoch time in which to transmit the data packets of that class. This ensures that each class has a designated time within which to have data packets scheduled and to transmit said data packets.
BRIEF DESCRIPTION OF THE FIGURES
The present invention will now be described by way of example with reference to the accompanying drawings. In the drawings:
Fig. 1 illustrates an architecture of a Paternoster scheduling scheme.
Fig. 2 illustrates the timing and queuing in a Paternoster scheduling scheme.
Fig. 3 illustrates an example of class data packet scheduling in a Paternoster scheduling scheme;
Figs. 3a to 3d illustrate the scheduling priority of data packets and the occurrence of deadline violation in four classes in four epochs of a Paternoster scheduling scheme;
Fig. 4 illustrates how multiple different epoch values may be applied to a plurality of classes;
Fig. 5 illustrates a use-case where drivers have applications that do not have common epoch times;
Fig. 6 illustrates a general way to configure the epochs of service classes for the use-case where application cycle times do not have a common base;
Fig. 7 illustrates an example of class data packet scheduling using a hybrid parameter scheduling scheme that includes urgentness;
Figs. 7a to 7d illustrate the scheduling priority of data packets in four classes in four epochs of a hybrid scheduling scheme of the first embodiment of this disclosure;
Fig. 8a illustrates a simulation result of the transmission delay for each class using Omnest for a scheduling scheme that relies on strict priority only;
Fig. 8b illustrates a simulation result of the transmission delay for each class using Omnest for a hybrid scheduling scheme that is based on strict priority and an urgentness parameter;
Fig. 8c illustrates a simulation result of the service time deviation for each class using Omnest for a scheduling scheme that is based on strict priority only;
Fig. 8d illustrates a simulation result of the service time deviation for each class using Omnest for a hybrid scheduling scheme that is based on strict priority and a second indication of the second embodiment of this disclosure;
Fig. 9 illustrates an example of class data packet scheduling using a hybrid parameter scheduling scheme that includes assigning weights to each class;
Figs. 9a to 9d illustrate the scheduling priority of data packets in four classes in four epochs of a hybrid scheduling scheme of the second embodiment of this disclosure.
DETAILED DESCRIPTION
The embodiments of this disclosure address the issues discussed in past scheduling methods. The proposed HSSUW (Hybrid Scheduling using Strict priority and packet Urgentness or transmission Weight) scheme is a methodology that is able to support latency guarantee for any type of epoch configuration. It provides more flexibility for cycle time configuration at the end-station in an industrial networking environment without needing to use the previously demanded rule of “multiple integers” between network and application cycles as described above.
The embodiments of the present disclosure will now be described in detail in relation to the figures.
Fig. 3 of the present disclosure illustrates an example of scheduling data packets for transmission using only the strict priority associated with each class, in the Differentiated Services model with 8 priority classes. Each class is assigned an epoch time Δ_i (where i is the priority class). Lower priority classes use a larger epoch, such as class 4, while higher priority classes use a smaller epoch, i.e., Δ_1 < … < Δ_8, where class 1 has the highest priority as defined in this disclosure. Each priority class on each egress port is configured with four queues: prior, current, next, and last. During each epoch, only the prior queue transmits packets; the other three queues receive packets. The queues are configured to rotate, and this occurs at the beginning of every epoch.
For the purpose of the example of figure 3, it has been assumed that the link capacity C is 1 Gbps, each class i has an epoch Δ_i, and the packet size s is 1000 bits. A further assumption in this example is that the number of data packets that can be transmitted per epoch is 10. As can be seen in figure 3 and table 1, each class has an associated epoch, in this case epochs of different lengths of time; for example, class 1 data has an epoch of 10 microseconds. As seen in table 1, the data processing apparatus may determine how many packets of data can be transmitted in each epoch. This is further demonstrated in figures 3a to 3d, which show the four consecutive epochs used to transmit packets of data. As can be seen in figures 3a to 3d, when data packets are scheduled to be transmitted using only strict priority, the maximum number of data packets transmittable by each class is transmitted in each epoch. For example, the maximum number of data packets of class 1 that can be transmitted is two, for class 2 the maximum number is four, for class 3 the maximum number is nine, and for class 4 the maximum number is twelve.
When strict priority is used, the maximum number of data packets per class is transmitted in each epoch, as seen in figure 3a. As shown in this figure, this leads to two data packets from class 1, four data packets from class 2, and four data packets from class 3 being scheduled for transmission. As only 10 data packets can be transmitted per epoch in this example and the data packets are scheduled based on strict priority, the data packets from class 4 are not scheduled in the first epoch. In figure 3b, two more data packets from class 1 are scheduled for transmission, but since all four data packets that can be scheduled from class 2 have been scheduled for transmission and transmitted in the previous epoch of figure 3a, no further class 2 data packets can be scheduled at this stage. As such, there is more capacity for class 3 and class 4 data packets that have not yet been scheduled. Therefore, since in the example there are five class 3 data packets remaining, they are scheduled for transmission and the remaining three slots are allocated to class 4 data packets.
In the epoch of figure 3c, there are again a maximum of two class 1 data packets to be scheduled for transmission, as well as four more class 2 data packets, as class 2 has entered a new epoch. These are therefore scheduled in the first six slots for transmission. There are no more class 3 data packets to be transmitted at this point, as the nine data packets have been transmitted in the first and second epochs. As such, the remaining four scheduled slots can be allocated to class 4 data packets, of which there are nine remaining. However, since there are only four slots remaining, only four data packets from class 4 can be allocated, and thus there will be five data packets that do not meet the transmission deadline based on their strict priority. This is because, in the fourth epoch, there will be additional data packets from classes 1 and 3 that will take priority over those of class 4, and new data packets from class 4 will also be available for transmission. The use of strict priority alone does not prevent a deadline violation.
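The strict-priority behaviour of figures 3a to 3d amounts to filling the per-epoch transmission budget in class order; a minimal sketch, with the backlog numbers of the first epoch of the example (the function name is illustrative):

```python
def strict_priority_fill(backlog, capacity):
    """Schedule packets purely by class priority (class 1 highest).

    backlog maps class -> packets waiting; returns class -> packets scheduled.
    """
    scheduled = {}
    for cls in sorted(backlog):          # lower class number = higher priority
        take = min(backlog[cls], capacity)
        scheduled[cls] = take
        capacity -= take
    return scheduled

# First epoch of the example: 10 packets fit per epoch.
print(strict_priority_fill({1: 2, 2: 4, 3: 9, 4: 12}, capacity=10))
# {1: 2, 2: 4, 3: 4, 4: 0} -- class 4 is starved, as in figure 3a
```

Iterating this rule epoch after epoch is what eventually produces the class 4 deadline violation described above.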
In a first embodiment of this disclosure, a further parameter has been added to specify the urgentness of data packets, in addition to the strict priority already associated with the data packets of each class. There is therefore provided in this disclosure a novel method and apparatus for hybrid strict priority and urgentness scheduling. That is, data packet scheduling between different classes is based on the dual parameters of urgentness and priority. The urgentness associated with each class may be based on the maximum latency of each of the plurality of classes, in other words, the maximum time within which data packets from a class can be transmitted and still meet the transmission deadline.
The urgentness parameter γ i that is assigned to each class is defined as

γ i = L i − (t now − t i (j) )

where L i is the maximum latency/jitter bound promised by priority class i, t now is the current time (i.e., t now = t 1 (k) , where t 1 (k) is the start time of the kth epoch of class 1), and t i (j) is the time at which the jth epoch of class i starts. In this scenario both t now and t i (j) are local times, and Δ 1 is the epoch of the highest priority class 1. This value of urgentness is calculated as a second indication of the transmission priority.
This new parameter of urgentness γ i is an example of a second indication of the transmission priority (second transmission priority) that is assigned to each class and used in addition to the strict priority p i of each class (traffic class) during scheduling. The strict priority of each class is an example of a first transmission priority (first indication of the transmission priority) .
In other words, the parameters γ i, p i form a dual parameter that can be used to schedule the data packets to be transmitted. In scheduling which data packets take priority over others, the apparatus may compare the second indications of each class and the first indications of each class. For example, the one or more processors of the apparatus may compare the second indication, e.g., the urgentness γ i, of each class to each other and determine that, if the urgentness γ i of two classes is the same, the data packets of those classes will be scheduled based on the first transmission priorities, e.g., the strict priorities p i. If on the other hand a first class and a second class have second indications that are different, the one or more processors or a scheduler are configured to overwrite the first transmission priorities and schedule the data packets of the class with the lower second indication ahead of the data packets of the class with the higher second indication, regardless of the strict priorities of each class. In other words, the urgentness parameter overwrites the strict priority parameter, e.g., the second indication overwrites the first indication of transmission priority. The lower the urgentness parameter, the more urgent it is to transmit the data packet, as the epoch for the class that contains the data packet started the greatest time ago and thus normally has the shortest time remaining in which to be scheduled and transmitted in order to meet the transmission deadline.
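The dual-parameter comparison can be sketched as a sort key: classes are ordered primarily by the second indication (lower urgentness first) and, on ties, by the first indication (strict priority). The tuple layout and function name are assumptions for illustration.

```python
def class_order(classes):
    """classes: list of (class_id, strict_priority, urgentness) tuples.
    A lower urgentness is scheduled first regardless of strict priority;
    equal urgentness values fall back to the strict priority ordering."""
    return sorted(classes, key=lambda c: (c[2], c[1]))

# Different urgentness: class 4 (gamma=10) overtakes class 3 (gamma=30).
print([c[0] for c in class_order([(3, 3, 30), (4, 4, 10)])])   # [4, 3]
# Equal urgentness: strict priority decides, so class 3 goes first.
print([c[0] for c in class_order([(3, 3, 20), (4, 4, 20)])])   # [3, 4]
```

Because Python's sort is stable and the key compares urgentness before priority, this single key reproduces both the overwrite rule and the strict priority tie-break.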
Fig. 7 illustrates an example of the hybrid scheduling scheme of this disclosure that introduces the second indication of urgentness in addition to the strict priority scheme that is the basis for Paternoster scheduling. It should be understood that the number of classes, epoch times etc. of figure 7 are used by way of example only, to demonstrate how the data processing apparatus determines which data packets to schedule for transmission. The configuration of the network epochs, bandwidth reservation and the data packet sizes are the same as those described above in relation to figure 3. In this embodiment the first three epochs proceed in the same manner as those described above in relation to figures 3a to 3c, as the urgentness values are the same; however, the fourth network epoch Δ 1 (4) differs from that described above in relation to figure 3d, in particular in how the class 4 data packets are scheduled in order to avoid deadline violation. At the beginning of epoch Δ 1 (4) , the epoch Δ 4 of the class 4 data packets started 30μs ago. The one or more processors of the data processing apparatus are therefore configured to calculate the second indication (urgentness) of the class 4 data packets, which is computed as γ 4 = 40μs − 30μs = 10μs at the beginning of epoch Δ 1 (4) in this example. The one or more processors also calculate the urgentness of the class 3 data packets in the same way, which gives an urgentness value of γ 3 = 30μs at the same time the urgentness of the class 4 data packets is calculated. The one or more processors then determine whether the class 4 or the class 3 data packets will be scheduled first based on these calculations of the second indication of each class and the strict priority of each class. In this case, the calculated second indication of class 4 is determined to be smaller than that of class 3. Hence, class 4 data packets are scheduled before the transmission of class 3 packets in the fourth epoch of figure 7. This ensures that all the data packets are delivered before their deadlines and no deadline violation occurs.
If, however, the urgentness of the class 3 and the class 4 data packets did not have the above values and were instead the same, then the one or more processors of the data processing apparatus would schedule the class 3 data packets before the class 4 data packets based on the first indication (strict priority) of each class.
Figs. 8a to 8d illustrate a simulation produced using Omnest that demonstrates the results of employing a data processing apparatus that schedules data packets for transmission using only a first transmission priority (figures 8a and 8c) and using both a first transmission priority (strict priority) and a second transmission priority (urgentness) (figures 8b and 8d). In figures 8a to 8d, the dotted line represents the threshold at which deadline violation occurs. The bounds of each of the blocks seen in the figures represent service time deviation (jitter), which is defined as the requested latency of the network minus the actual delivery time. For example, a deadline violation will result in a negative jitter, as the requested latency (the period in which to transmit the data packet) is less than the actual time taken to transmit the data packet, e.g., in class 4, and thus the data packets from this class will miss their transmission deadline because they were not scheduled appropriately. As can be seen in Fig. 8a, the original Paternoster scheduling algorithm that uses only strict priority not only causes deadline violation for class 4 data packets, but also incurs larger service time deviation (e.g., jitter), as class 4 data packets are not scheduled in a timely manner under strict priority and must wait additional epochs to be transmitted. This is shown in figure 8c, which demonstrates the negative service time deviation of class 4 data packets that arises because the data packets were not scheduled and transmitted within the requested latency of the network class and thus the deadline for transmitting these data packets was missed.
By introducing the urgentness parameter as the second indication as in the present disclosure, the class 4 packets are not only delivered within their deadlines but also experience less jitter. This can be seen in figures 8b and 8d, which do not demonstrate any deadline violation in the scheduling and transmission of the class 4 data packets, as evidenced by the reduced millisecond delay boundaries for each of the classes. As can be seen in figure 8d, the service time deviation of each of the classes is positive, particularly that of the class 4 data packets, signifying that the data packets are scheduled and transmitted within the deadline and no transmission deadline violation occurs when scheduling is performed based on the urgentness parameter and the strict priority.
In a second embodiment of the present disclosure the one or more processors of the data processing apparatus are configured to implement a hybrid scheduling scheme in which the second indication of the transmission priority is a weight ω i assigned to each class i. This scheduling implementation operates at either the data packet level or the bit level. In this alternative implementation of the hybrid Paternoster scheduling, a weight ω i (the second indication of transmission priority) is assigned to each class i. In this embodiment data packet scheduling is performed in accordance with Δ 1, the epoch of service class 1. Figs. 9 and 9a to 9d illustrate the transmission order of data packets in each epoch Δ 1 (k) using the approach of the second embodiment, in which the second indication includes a calculation based on the weight assigned to each class. The weight assigned to each class may allocate a transmission bandwidth to each class within which the data packets can be scheduled and/or transmitted.
The one or more processors of the data processing apparatus of this embodiment are configured to implement a scheduling scheme that cycles over the 8 classes after every epoch Δ 1 based on their strict priorities. As it does this, the one or more processors are configured to assign a weight (as in Weighted Round Robin (WRR)) or a quantum value (as in Deficit Round Robin (DRR)) to each class, where 0 < ω i ≤ 1 and the weights sum to one (e.g., ω 1 + ω 2 + … + ω N = 1). ω i may be a variable depending on the dynamic assignment of data packet priority on each hop, where the hop count is defined as the number of intermediate networking nodes between a source and destination pair. The weight ω i (or quantum value) defines the fraction of bandwidth reservation allocated to class i data packet traffic.
Once a class i is chosen during scheduling by the one or more processors, the one or more processors are configured to calculate the maximum number of data packets that each of the plurality of classes transmits during Δ 1, which is given by the expression (C . ω i . Δ 1) /s, wherein C is a transmission link capacity, ω i is the weight or quantum value of class i, Δ 1 is the epoch of class 1 and s is an average data packet size. The one or more processors then schedule at most (C . ω i . Δ 1) /s data packets, or C . ω i . Δ 1 bits of data, for transmission. C . ω i . Δ 1 defines the maximum number of bits that a class is allowed to transmit during Δ 1. In the WRR and Paternoster schemes, scheduling is performed per Δ 1 interval. This means that the maximum number of packets needs to be calculated in accordance with Δ 1.
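The per-epoch limit can be sketched as follows, with illustrative link parameters that are not taken from the disclosure; units are bits and seconds, and the function name is an assumption.

```python
def max_packets(link_capacity, weight, epoch_1, avg_packet_size):
    """Whole data packets class i may send during one class 1 epoch:
    floor((C * w_i * delta_1) / s)."""
    return int(link_capacity * weight * epoch_1 // avg_packet_size)

# A 1 Gbit/s link, a 25% weight, a 10us class 1 epoch and 1250-bit
# average packets reserve 2500 bits, i.e. two packets, per epoch.
print(max_packets(1e9, 0.25, 10e-6, 1250))   # 2
```

The product C · ω i · Δ 1 (here 2500 bits) is the bit-level budget; dividing by the average packet size converts it to the packet-level budget.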
If there are fewer than (C . ω i . Δ 1) /s data packets, or C . ω i . Δ 1 bits, in the queue of class i, the scheduler schedules every data packet (or bit) from that class. Each class that follows the first class, e.g., i > 1, must maintain both its own epoch Δ i and the epoch of the first class Δ 1. As such, data packets from each priority class are scheduled during time period Δ 1. In other words, the one or more processors are configured to, in determining a schedule for transmission of data packets of the plurality of classes, compare a number of data packets to be transmitted in each of the plurality of classes to the maximum number of data packets transmittable for each of the plurality of classes. This calculation of the maximum number of data packets to be transmitted and/or the comparison to the number of data packets that can be transmitted during each Δ 1 can be thought of as the second indication of the transmission priority. The maximum number of data packets transmittable for each of the plurality of classes is determined at the time at which a transmission epoch of the class with the highest first transmission priority begins. In this regard the highest first transmission priority represents class 1, e.g., the class to be scheduled first by strict priority.
Generally, the one or more processors will schedule the data packets based on their strict priority for ordering the data packets, and in consideration of the second indication of transmission priority, e.g., the number of data packets compared to the maximum number of data packets to be transmitted in each class. This can be seen in figures 9a to 9d, wherein the data packets from each class are transmitted in each epoch ordered in accordance with the strict priority but in consideration of the weightings, and thus of the number of data packets to be transmitted in each class.
Specifically, the one or more processors of this embodiment are configured to determine whether the number of data packets to be transmitted in each of the plurality of classes is less than or equal to the maximum number of data packets that can be transmitted for a class and, if so, schedule all the data packets of that class for transmission. However, if the number of data packets to be transmitted in a class is more than the maximum number of data packets that can be transmitted for that class, the one or more processors will schedule the maximum number of data packets that can be transmitted for that class. This is demonstrated in figures 9a to 9d, wherein data packets from all classes are transmitted in each epoch, with the maximum number of data packets transferred for each class in each epoch. This ensures that data packets from all classes are scheduled and transmitted by the apparatus within the requested latency and therefore no deadline violation of data packet transmission occurs. The addition of weights may increase the complexity of the system slightly, as the one or more processors may need to update the weights whenever the bandwidth reservation of service classes changes. Furthermore, the one or more processors are also required to compute how many data packets/bits from a class can be transmitted.
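Under the stated rule, one class 1 epoch of the weight-based scheme can be sketched as follows; the backlog and limit values are assumptions for illustration, and the function name is hypothetical.

```python
def weighted_schedule(backlogs, limits):
    """Grant each class, in strict priority order, min(backlog, limit)
    packets, where limit is the per-epoch maximum (C * w_i * delta_1)/s
    computed for that class."""
    return {cls: min(backlogs[cls], limits[cls]) for cls in sorted(backlogs)}

# Every class receives up to its reserved share, so the lowest priority
# class is not starved as it is under pure strict priority.
print(weighted_schedule({1: 2, 2: 5, 3: 1, 4: 6}, {1: 2, 2: 4, 3: 3, 4: 3}))
# {1: 2, 2: 4, 3: 1, 4: 3}
```

Class 3's full backlog of one packet is below its limit and is scheduled entirely, while classes 2 and 4 are capped at their per-epoch limits.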
In figures 9a to 9d, the transmission order in each epoch Δ 1 (k) is plotted using the approach of weight assignment. Note that the implementation complexity of this approach might increase compared to the first approach (which introduces the urgentness parameter), as the scheduler may need to update the weights whenever the bandwidth reservation of service classes changes. The scheduler also needs to compute how many packets/bits from a class can be transmitted.
The scheduling schemes described in the above embodiments that are implemented using a data processing apparatus comprised of one or more processors may also be encapsulated by corresponding methods that could be implemented by the apparatus in order to provide said scheduling schemes.
Specifically, this disclosure also includes a method of data processing for scheduling the transmission of data packets in a network, each data packet belonging to a class and each class having a predetermined first transmission priority associated therewith. The method of this disclosure includes receiving, for one or more of a plurality of classes, data packets to be transmitted, these being part of each class of data packets to be transmitted. The method then calculates a second indication of the transmission priority of each of the plurality of classes. As described in the above embodiments, this may be an urgentness parameter or may be based on the weightings assigned to each class as in the second embodiment. The method further comprises the step of determining a schedule for transmission of the data packets of the plurality of classes in dependence on the first transmission priority (strict priority) and the second indication of the transmission priority (as described in each embodiment). Further method steps are apparent from the functions of the one or more processors of the data processing apparatus described above.
These embodiments of the present disclosure provide latency and jitter control in industrial networks while achieving lower implementation complexity than known scheduling schemes. The embodiments further solve the deadline violation issue in the pulsed queue model proposed by the IEEE 802.1Q WG and can flexibly support general configuration of a network cycle (or network epoch) rather than the previously used “multiple integer” relationship between a network cycle and an application cycle. Furthermore, the embodiments described herein are demonstrated with specific examples, but the apparatus and method can be applied to support a wider range of application cycles as will be readily understood. In addition, the embodiments of the present disclosure are able to simultaneously guarantee different latency requirements ranging from several hundreds of microseconds to ~100ms.
The applicant hereby discloses in isolation each individual feature described herein and any combination of two or more such features, to the extent that such features or combinations are capable of being carried out based on the present specification as a whole in the light of the common general knowledge of a person skilled in the art,  irrespective of whether such features or combinations of features solve any problems disclosed herein, and without limitation to the scope of the claims. The applicant indicates that aspects of the present invention may consist of any such individual feature or combination of features. In view of the foregoing description it will be evident to a person skilled in the art that various modifications may be made within the scope of the invention.

Claims (22)

  1. A data processing apparatus for implementing a hybrid scheduling scheme for scheduling data packets in a network, each data packet belonging to a class and each class having a predetermined first transmission priority associated therewith, the data processing apparatus comprising one or more processors configured to:
    receive, for one or more of a plurality of classes, data packets of data to be transmitted;
    calculate a second indication of the transmission priority of each of the plurality of classes; and
    schedule the data packets of the plurality of classes for transmission in dependence on the first transmission priority and the calculated second indication of the transmission priority.
  2. A data processing apparatus according to claim 1, wherein the first transmission priority is based on the strict priority of each of the plurality of classes.
  3. A data processing apparatus according to claims 1 or 2, wherein the second indication of the transmission priority is a calculated urgentness value based on a maximum latency of each of the plurality of classes, a current transmission epoch time and a start time for a transmission epoch for each of the plurality of classes.
  4. A data processing apparatus according to any of the preceding claims, wherein the one or more processors is configured to calculate the second indication using the expression
    γ i = L i − (t now − t i (j) )
    where L i is the maximum latency of class i, t now is t 1 (k) and represents a time at which a transmission epoch k starts in class 1, and t i (j) is a time at which a transmission epoch j in a class i starts.
  5. A data processing apparatus according to any of the preceding claims, wherein the one or more processors is further configured to, in determining a  schedule for transmission of data packets of the plurality of classes, compare the second indication for a first class of the plurality of classes and the second indication of a second class of the plurality of classes to determine whether they are the same; then
    if the second indication for a first class of the plurality of classes and the second indication of a second class of the plurality of classes are the same, schedule transmission of the data packets of the plurality of classes based on the first transmission priority of each of the plurality of classes; or
    if the second indication for a first class of the plurality of classes and the second indication of a second class of the plurality of classes are not the same, schedule transmission of the data packets of the plurality of classes based on the second indication of the transmission priority of each of the plurality of classes.
  6. A data processing apparatus according to claim 1 or claim 2, wherein the second indication of the transmission priority is a weight or quantum value assigned to each of the plurality of classes.
  7. A data processing apparatus according to claim 6, wherein the weight or quantum value assigned to each of the plurality of classes allocates a transmission bandwidth for each of the plurality of classes.
  8. A data processing apparatus according to claim 6 or 7, wherein the one or more processors is further configured to calculate a maximum number of data packets that each of the plurality of classes transmits, given by the expression (C . ω i . Δ 1) /s, wherein C is a transmission link capacity, ω i is the weight or quantum value of class i, Δ 1 is the epoch of class 1 and s is an average data packet size.
  9. A data processing apparatus according to claim 8, wherein the one or more processors is further configured to, in determining a schedule for transmission of data packets of the plurality of classes, compare a number of data packets to be transmitted in each of the plurality of classes to the maximum number of data packets transmittable for each of the plurality of classes; and
    if the number of data packets to be transmitted in each of the plurality of classes is less than or equal to the maximum number of data packets that can be transmitted for a class, then schedule all the data packets of that class for transmission,
    if the number of data packets to be transmitted in each of the plurality of classes is more than the maximum number of data packets that can be transmitted for a class,
    schedule the maximum number of data packets that can be transmitted for each class for transmission.
  10. A data processing apparatus according to claim 9, wherein the maximum number of data packets transmittable for each of the plurality of classes is determined at a time at which a transmission epoch of the class with a highest first transmission priority begins.
  11. A data processing apparatus according to any of the preceding claims, wherein the one or more processors is configured to assign an epoch time in which to transmit the data packets of a class to each of the plurality of classes.
  12. A method of data processing for scheduling the transmission of data packets in a network, each data packet belonging to a class and each class having a predetermined first transmission priority associated therewith, the method comprising:
    receiving, for one or more of a plurality of classes, data packets of data to be transmitted;
    calculating a second indication of the transmission priority of each of the plurality of classes;
    determining a schedule for transmission of the data packets of the plurality of classes in dependence on the first transmission priority and the second indication of the transmission priority.
  13. A method according to claim 12, wherein the first transmission priority is based on the strict priority of each of the plurality of classes.
  14. A method according to claim 12 or 13, wherein the second indication of the transmission priority is a calculated urgentness value based on a maximum latency of each of the plurality of classes, a current transmission epoch time and a start time for a transmission epoch for each of the plurality of classes.
  15. A method according to any one of claims 12 to 14, wherein the second indication is calculated using the expression
    γ i = L i − (t now − t i (j) )
    where L i is the maximum latency of class i, t now is t 1 (k) and represents a time at which a transmission epoch k starts in class 1, and t i (j) is a time at which a transmission epoch j in a class i starts.
  16. A method according to any one of claims 12 to 15, wherein the method step of determining a schedule for transmission of data packets of the plurality of classes comprises:
    comparing the second indication for a first class of the plurality of classes and the second indication of a second class of the plurality of classes to determine whether they are the same; then
    if the second indication for a first class of the plurality of classes and the second indication of a second class of the plurality of classes are the same, scheduling transmission of the data packets of the plurality of classes based on the first transmission priority of each of the plurality of classes; or
    if the second indication for a first class of the plurality of classes and the second indication of a second class of the plurality of classes are not the same, scheduling transmission of the data packets of the plurality of classes based on the second indication of the transmission priority of each of the plurality of classes.
  17. A method according to claim 12 or claim 13, wherein the second indication of the transmission priority is a weight or quantum value assigned to each of the plurality of classes.
  18. A method according to claim 17, wherein the weight or quantum value assigned to each of the plurality of classes allocates a transmission bandwidth for each of the plurality of classes.
  19. A method according to claim 17 or 18, wherein a maximum number of data packets that each of the plurality of classes transmits is given by the expression (C . ω i . Δ 1) /s, wherein C is a transmission link capacity, ω i is the weight or quantum value of class i, Δ 1 is the epoch of class 1 and s is an average data packet size.
  20. A method according to claim 19, wherein the method step of determining a schedule for transmission of data packets of the plurality of classes comprises:
    comparing a number of data packets to be transmitted in each of the plurality of classes to the maximum number of data packets transmittable for each of the plurality of classes; and
    if the number of data packets to be transmitted in each of the plurality of classes is less than or equal to the maximum number of data packets that can be transmitted for a class, then scheduling all the data packets of that class for transmission,
    if the number of data packets to be transmitted in each of the plurality of classes is more than the maximum number of data packets that can be transmitted for a class,
    scheduling the maximum number of data packets that can be transmitted for each class for transmission.
  21. A method according to claim 20, wherein the maximum number of data packets transmittable for each of the plurality of classes is determined at a time at which a transmission epoch of the class with a highest first transmission priority begins.
  22. A method according to any one of claims 12 to 21, wherein each of the plurality of classes is assigned an epoch time in which to transmit the data packets of that class.
PCT/CN2022/104736 2022-07-08 2022-07-08 A device and methodology for hybrid scheduling using strict priority and packet urgentness WO2024007334A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/CN2022/104736 WO2024007334A1 (en) 2022-07-08 2022-07-08 A device and methodology for hybrid scheduling using strict priority and packet urgentness


Publications (1)

Publication Number Publication Date
WO2024007334A1 (en)

Family

ID=89454634

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/104736 WO2024007334A1 (en) 2022-07-08 2022-07-08 A device and methodology for hybrid scheduling using strict priority and packet urgentness

Country Status (1)

Country Link
WO (1) WO2024007334A1 (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140078985A1 (en) * 2012-09-20 2014-03-20 Qualcomm Incorporated Apparatus and method for prioritizing non-scheduled data in a wireless communications network
WO2018033197A1 (en) * 2016-08-15 2018-02-22 Telefonaktiebolaget Lm Ericsson (Publ) Dynamic priority setting to improve peak rates in a wireless communication network
WO2018149102A1 (en) * 2017-02-20 2018-08-23 深圳市中兴微电子技术有限公司 Method and device for reducing transmission latency of high-priority data, and storage medium
WO2020220954A1 (en) * 2019-04-28 2020-11-05 华为技术有限公司 Scheduling priority determination method and apparatus
CN112600878A (en) * 2020-11-30 2021-04-02 新华三大数据技术有限公司 Data transmission method and device
WO2022142374A1 (en) * 2020-12-28 2022-07-07 大唐移动通信设备有限公司 Method and apparatus for determining queuing priority, and communication device and storage medium



Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22949915

Country of ref document: EP

Kind code of ref document: A1