WO2022022224A1 - Method and related apparatus for scheduling data packets - Google Patents

Method and related apparatus for scheduling data packets

Info

Publication number
WO2022022224A1
Authority
WO
WIPO (PCT)
Prior art keywords
time
network device
data packet
queue
upper limit
Prior art date
Application number
PCT/CN2021/104218
Other languages
English (en)
French (fr)
Inventor
王闯
任首首
孟锐
刘冰洋
Original Assignee
Huawei Technologies Co., Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huawei Technologies Co., Ltd.
Priority to EP21850966.9A priority Critical patent/EP4181480A4/en
Publication of WO2022022224A1 publication Critical patent/WO2022022224A1/zh
Priority to US18/162,542 priority patent/US20230179534A1/en


Classifications

    • H: ELECTRICITY; H04: ELECTRIC COMMUNICATION TECHNIQUE; H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 47/00: Traffic control in data switching networks
    • H04L 47/10: Flow control; Congestion control
    • H04L 47/24: Traffic characterised by specific attributes, e.g. priority or QoS
    • H04L 47/2416: Real-time traffic
    • H04L 47/28: Flow control; Congestion control in relation to timing considerations
    • H04L 47/283: Flow control; Congestion control in relation to timing considerations in response to processing delays, e.g. caused by jitter or round trip time [RTT]
    • H04L 47/50: Queue scheduling
    • H04L 47/56: Queue scheduling implementing delay-aware scheduling
    • H04L 47/564: Attaching a deadline to packets, e.g. earliest due date first
    • H04L 47/568: Calendar queues or timing rings
    • H04L 47/62: Queue scheduling characterised by scheduling criteria
    • H04L 47/622: Queue service order
    • H04L 47/6225: Fixed service order, e.g. Round Robin
    • H04L 47/70: Admission control; Resource allocation
    • H04L 47/82: Miscellaneous aspects
    • H04L 47/826: Involving periods of time

Definitions

  • the present application relates to the field of communication technologies, and more particularly, to a method and related apparatus for scheduling data packets.
  • Deterministic delay means that the delay and jitter experienced by the data packet transmission meet the upper bound on the premise that the data packet obeys certain burstiness requirements.
  • scalable data plane deterministic packet scheduling needs to be implemented.
  • the present application provides a method and related apparatus for scheduling data packets, which can make the end-to-end delay controllable.
  • an embodiment of the present application provides a method for scheduling data packets, including: a first network device receives a data packet from a second network device in the network at a first moment; the first network device determines a first reference moment according to time information carried in the data packet and the first moment, where the first reference moment is the reference moment at which the data packet enters a queue in a first queue system of the first network device; and the first network device determines a target queue from a plurality of queues included in the first queue system according to the first reference moment and adds the data packet to the target queue. The time information is used to indicate a first remaining processing time, and the first remaining processing time is the difference between a first theoretical upper limit of time and a first actual time that the data packet has experienced inside N network devices.
  • the N network devices include the network devices that the data packet passes through after entering the network and before reaching the first network device, and N is a positive integer greater than or equal to 1. The first theoretical upper limit of time is the upper limit of the theoretical time that the data packet experiences inside those network devices from the initial reference moment to the first reference moment.
  • the upper bound of the end-to-end delay of any flow cannot exceed the sum of the theoretical time upper limits of all outgoing interfaces on the flow's path. In this way, the end-to-end delay is controllable, and deterministic end-to-end latency can therefore be provided for the data flow.
  • the time information includes first time indication information, where the first time indication information is used to indicate the time from the second reference moment to the second output moment, the second reference moment is the reference moment of the data packet at the second network device, and the second output moment is the moment when the data packet is output from the second network device.
  • the time information further includes second time indication information, where the second time indication information is used to indicate the second remaining processing time. The second remaining processing time is the difference between the second theoretical upper limit of time and the second actual time; the second theoretical upper limit of time is the theoretical upper limit of the time that the data packet experiences inside the network devices from the initial reference time to the second reference time, and the second actual time is the actual time that the data packet has experienced inside the network devices from the initial reference time to the time when the data packet enters the queue system of the second network device.
  • the time information further includes third time indication information, where the third time indication information is used to indicate a third theoretical upper limit of time associated with the second network device. The third theoretical upper limit of time is the theoretical upper limit of the time that the data packet experiences inside the network device from the second reference time to the first reference time.
  • the plurality of queues are in one-to-one correspondence with a plurality of preset times.
  • the first network device determines a target queue from the plurality of queues included in the first queue system according to the first reference time, including: the first network device determines, according to the first reference time, that the queue corresponding to the target time is the target queue, where the first reference time is not later than the target time, no preset time of the plurality of preset times lies between the first reference time and the target time, and the target time is one of the plurality of preset times.
  • the first reference time is a reference time when the data packet enters a queue in the first queue system of the first network device.
  • the second remaining processing time is the difference between the second theoretical upper limit of time and the second actual time; the second theoretical upper limit of time is the theoretical upper limit of the time that the data packet experiences inside the network devices from the initial reference time to the second reference time, and the second actual time is the actual time that the data packet has experienced inside the network devices from the initial reference time to the time when the data packet enters the queue system of the second network device.
  • the second output time is the time when the data packet is output from the second network device.
  • the second reference time is the reference time of the data packet at the second network device.
  • the first moment is the moment when the first network device receives the data packet.
  • the first network device determines the first reference time according to the time information and the first time, including: the first network device determines the first reference moment according to the second time, the time information, and the first moment.
  • the second time is the time when the data packet enters the first queuing system.
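The claims above leave the exact arithmetic implicit. One plausible reading (an assumption for illustration, not stated verbatim in the text) is that the first reference moment pushes the queue-entry moment (the "second time") forward by the remaining processing time carried in the packet, so that a packet processed faster than its theoretical upper bound upstream does not run ahead of schedule. The function name and the additive rule below are illustrative assumptions:

```python
def first_reference_moment(second_moment: float, remaining_processing_time: float) -> float:
    """Hypothetical sketch: derive the first reference moment from the moment
    the packet enters the first queue system plus the remaining processing
    time indicated by the carried time information. The additive rule is an
    assumption made for illustration."""
    return second_moment + remaining_processing_time
```

Under this reading, a packet that spent less actual time upstream carries a larger remaining processing time and is assigned a correspondingly later reference moment, which compensates jitter.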
  • an embodiment of the present application provides a network device, where the network device includes a unit for implementing the first aspect or any possible design of the first aspect.
  • an embodiment of the present application provides a network device, including: a processor configured to execute a program stored in a memory, where, when the program is executed, the network device is caused to execute the method of the first aspect or any possible design of the first aspect.
  • the memory is located outside the network device.
  • embodiments of the present application provide a computer-readable storage medium including instructions which, when executed on a computer, cause the method according to the first aspect or any possible design of the first aspect to be performed.
  • an embodiment of the present application provides a network device, where the network device includes a processor, a memory, and instructions stored in the memory and executable on the processor; when the instructions are executed, the network device performs the method of the first aspect or any possible design of the first aspect.
  • Figure 1 is a schematic diagram of the reasons for the formation of burst accumulation.
  • FIG. 2 is a schematic diagram of a system to which embodiments of the present application can be applied.
  • FIG. 3 is a schematic structural block diagram of a router capable of implementing an embodiment of the present application.
  • FIG. 4 is a schematic flowchart of a method for scheduling data packets according to an embodiment of the present application.
  • FIG. 5 shows the correspondence between M queues and M moments.
  • FIG. 6 shows a sequence diagram of processing the packet by the ingress edge device 231 and the network device 232 .
  • FIG. 7 shows a sequence diagram of processing the packet by the ingress edge device 231 and the network device 232 .
  • FIG. 8 shows a sequence diagram of processing the packet by the network device 232 and the network device 233 .
  • FIG. 9 shows a sequence diagram of processing the packet by the network device 232 and the network device 233 .
  • FIG. 10 is a schematic diagram of the D_max corresponding to each network device in the core network 230 when the circular queue is implemented on the downstream board.
  • FIG. 11 is a schematic diagram of the D_max corresponding to each network device in the core network 230 when the circular queue is implemented on the upstream board.
  • FIG. 12 is a schematic flowchart of a method for scheduling data packets according to an embodiment of the present application.
  • FIG. 13 is a schematic structural block diagram of a network device provided according to an embodiment of the present application.
  • the network architecture and service scenarios described in the embodiments of the present application are for the purpose of illustrating the technical solutions of the embodiments of the present application more clearly, and do not constitute a limitation on the technical solutions provided by the embodiments of the present application.
  • the evolution of the architecture and the emergence of new business scenarios, the technical solutions provided in the embodiments of the present application are also applicable to similar technical problems.
  • references in this specification to "one embodiment” or “some embodiments” and the like mean that a particular feature, structure, or characteristic described in connection with the embodiment is included in one or more embodiments of the present application.
  • appearances of the phrases "in one embodiment," "in some embodiments," "in other embodiments," etc. in various places in this specification do not necessarily all refer to the same embodiment, but mean "one or more but not all embodiments" unless specifically emphasized otherwise.
  • the terms "including", "comprising", "having" and their variants mean "including but not limited to" unless specifically emphasized otherwise.
  • "At least one" means one or more, and "plurality" means two or more.
  • the character “/” generally indicates that the associated objects are an “or” relationship.
  • "At least one of the following items" or similar expressions refer to any combination of these items, including any combination of a single item or plural items.
  • for example, at least one of a, b, or c can represent: a; b; c; a and b; a and c; b and c; or a, b, and c, where each of a, b, and c can be singular or plural.
  • IP: Internet Protocol.
  • Burst accumulation is the root cause of delay uncertainty. Burst accumulation is caused by different data packets squeezing one another (contending for the same output).
  • Figure 1 is a schematic diagram of the reasons for the formation of burst accumulation.
  • FIG. 2 is a schematic diagram of a system to which embodiments of the present application can be applied.
  • the network 200 shown in FIG. 2 may be composed of an edge network 210 , an edge network 220 and a core network 230 .
  • Edge network 220 includes user equipment 221 .
  • Core network 230 includes ingress edge device 231 , network device 232 , network device 233 , network device 234 , and egress edge device 235 .
  • the user equipment 211 may communicate with the user equipment 221 through the core network.
  • the embodiments of the present application may be implemented by devices in the core network 230 .
  • it may be implemented by ingress edge device 231 , and may be implemented by network device 232 to network device 234 .
  • Devices capable of implementing the embodiments of the present application may be routers, switches, and the like.
  • FIG. 3 is a schematic structural block diagram of a router capable of implementing an embodiment of the present application.
  • the router 300 includes an upstream board 301 , a switch fabric 302 and a downstream board 303 .
  • the upstream board may also be called the upstream interface board.
  • Upstream board 301 may include multiple input ports.
  • the upstream board can decapsulate the data packets received by the input port, and use the forwarding table to find the output port. Once the output port is found (for ease of description, the found output port is hereinafter referred to as the target output port), the data packet is sent to the switch fabric 302 .
  • the data packet referred to in the embodiments of the present application may be a data packet (packet) in the network layer, or may be a frame (frame) in the data link layer.
  • Switch fabric 302 forwards received packets to one of the destination output ports. Specifically, the switch fabric 302 forwards the received data packet to the downstream board 303 including the target output port. Downstream boards may also be referred to as downstream interface boards.
  • the downstream board 303 includes a plurality of output ports. Downstream board 303 receives data packets from switch fabric 302 .
  • the downstream board can perform buffer management and encapsulation processing on the received data packet, and then send the data packet to the next node through the target output port.
  • a router may include multiple upstream boards and/or multiple downstream boards.
  • unless otherwise stated, the time referred to below is the time that the data packet spends inside a network device, and does not include the time spent transmitting the data packet between network devices.
  • FIG. 4 is a schematic flowchart of a method for scheduling data packets according to an embodiment of the present application.
  • The following describes, with reference to FIG. 2, the method for scheduling data packets shown in FIG. 4 and provided by an embodiment of the present application. It is assumed that the method for scheduling data packets in this embodiment is applied to the core network 230 in FIG. 2.
  • Ingress edge device 231 may receive multiple streams.
  • the ingress edge device 231 handles each of the multiple flows in the same manner. It is assumed that the paths of the multiple flows received by the ingress edge device 231 pass sequentially through: the ingress edge device 231 , the network device 232 , the network device 233 , the network device 234 , and the egress edge device 235 .
  • the ingress edge device 231 is the first network device through which the plurality of flows enter the core network 230 . Therefore, the ingress edge device 231 may also be referred to as the first-hop network device.
  • the network device 232 is the network device of the second hop
  • the network device 233 is the network device of the third hop
  • the network device 234 is the network device of the fourth hop
  • the egress edge device 235 is the network device of the fifth hop.
  • the average bandwidth reserved for the ith flow by the output port of each network device in the path is r i .
  • The multiple flows satisfy a flow model, which can be expressed by the following formula: A_i(t) ≤ r_i·t + B_i (Formula 4.1)
  • where t is the time; A_i(t) is the total amount of data of the i-th flow within time t; B_i is the maximum burst size of the i-th flow.
  • the target stream is any one of the streams received by the ingress edge device 231 .
  • the ingress edge device 231 may shape the target flow, so that the target flow entering the core network 230 meets the requirements of the traffic model shown in formula 4.1.
  • the manner in which the ingress edge device 231 shapes the target stream may utilize an existing shaping algorithm, such as a leaky bucket algorithm, a token bucket algorithm, and the like.
  • the unit responsible for shaping the target stream in the ingress edge device 231 may be referred to as a shaping unit.
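As a concrete illustration of the shaping step, the sketch below implements a generic token-bucket shaper that enforces the traffic envelope A_i(t) ≤ r_i·t + B_i. This is a textbook shaper, not the patent's own shaping unit; the class and parameter names are illustrative:

```python
class TokenBucketShaper:
    """Sketch of a token-bucket shaper enforcing A_i(t) <= r_i * t + B_i."""

    def __init__(self, rate: float, burst: float):
        self.rate = rate      # r_i: tokens (e.g., bytes) accrued per second
        self.burst = burst    # B_i: bucket depth, the maximum burst size
        self.tokens = burst   # bucket starts full
        self.last = 0.0       # time of the last token update

    def release_time(self, now: float, pkt_len: float) -> float:
        """Return the earliest time at which a packet of pkt_len may be sent."""
        # Refill tokens accrued since the last event, capped at the burst size.
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= pkt_len:
            self.tokens -= pkt_len
            return now                       # conformant: send immediately
        wait = (pkt_len - self.tokens) / self.rate
        self.tokens = 0.0
        self.last = now + wait
        return now + wait                    # delayed until enough tokens accrue
```

A leaky-bucket shaper, also mentioned above, would enforce the same envelope with a different release discipline.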
  • the ingress edge device 231 sends the data packet in the target flow to the next-hop network device (ie, the network device 232 ). Ingress edge device 231 also sends time information to network device 232 .
  • the shaped data packet will enter the output queue, and the data packet can be sent to the network device 232 by scheduling the output queue.
  • the unit in the ingress edge device 231 that loads the output queue may be referred to as an output queue unit, and the unit responsible for scheduling the output queue may be referred to as a scheduling unit.
  • For ease of description, the time information sent by the ingress edge device 231 to the network device 232 is referred to as time information 1 below.
  • network devices other than the ingress edge device 231 (e.g., network device 232, network device 233, network device 234, and egress edge device 235) each have a round-robin scheduled queue system that includes multiple queues.
  • the network device will determine which queue to add the received data packet to according to a moment. This moment may be referred to as the reference moment.
  • the ingress edge device 231 may also have a round-robin scheduled queue system. In this case, the ingress edge device 231 can also determine which queue to add the received data packet to according to a moment. This moment may also be referred to as the reference moment.
  • in some embodiments, the ingress edge device 231 may not have such a queue system. In this case, the ingress edge device 231 first inputs the data packet into the output queue system when outputting the data packet.
  • the moment when a packet enters the output queue system can also be referred to as the reference moment.
  • the reference time of the ingress edge device 231 may also be other time.
  • the reference time of the ingress edge device 231 may be the time when the data packet is received.
  • the reference time of the ingress edge device 231 may be the time when the data packet is output from the ingress edge device 231 .
  • the ingress edge device 231 does not have a round-robin scheduled queue system, and the reference time of the ingress edge device 231 is the time of entering the output queue system.
  • reference moment 1 is the reference moment when the data packet enters the (output) queue system of the ingress edge device; reference moment 2 is the reference moment when the data packet enters a queue in the queue system of the network device 232; reference moment 3 is the reference moment when the data packet enters a queue in the queue system of the network device 233 .
  • the sender device is a network device that sends data packets
  • the receiver device is a network device that receives data packets. For example, if ingress edge device 231 sends a packet to network device 232 , then the sender device is ingress edge device 231 and the receiver device is network device 232 . For another example, if the network device 232 sends the data packet to the network device 233, the network device 232 is the sending end device, and the network device 233 is the receiving end device.
  • the reference time of the sending end device is the reference time when the data packet enters the queue in the queue system of the sending end device.
  • the reference time of the receiving end device refers to the reference time when the data packet enters the queue in the queue system of the receiving end device. For example, if the sending end device is the ingress edge device 231, the reference time of the sending end device is reference time 1; if the sending end device is the network device 232, then the reference time of the sending end device is reference time 2. For another example, if the receiving end device is the network device 232, then the reference time of the receiving end device is reference time 2; if the receiving end device is the network device 233, then the reference time of the receiving end device is reference time 3.
  • the theoretical upper limit of time is calculated based on network calculus theory and is the maximum time required for a network device to process a data packet. In other words, the time a network device spends processing a data packet will not exceed the theoretical upper limit of time.
  • the theoretical upper limit of time does not include the transmission delay of transmitting data packets between network devices.
  • the theoretical upper limit of time from the sending end device to the receiving end device refers to the theoretical upper limit of time from the reference time of the sending end device to the reference time of the receiving end device.
  • This theoretical upper limit of time may also be referred to as the theoretical upper limit of time associated with the sending end device. If the sending end device is the ingress edge device 231, the theoretical upper limit of time associated with the sending end device may be referred to as the theoretical upper limit of time associated with the ingress edge device 231; if the sending end device is the network device 232, it may be referred to as the theoretical upper limit of time associated with the network device 232 .
  • the theoretical upper limit of time n refers to the theoretical upper limit of the time that the data packet experiences inside the network devices it passes through from reference time 1 to reference time n+1, where n is a positive integer greater than or equal to 1.
  • for example, the theoretical upper limit of time 1 is the theoretical upper limit of the time that the data packet experiences inside the network devices it passes through from reference time 1 to reference time 2 .
  • the theoretical upper limit of time 2 is the theoretical upper limit of the time that the data packet experiences inside the network devices it passes through from reference time 1 to reference time 3 .
  • the theoretical upper limit of time 1 is equal to the theoretical upper limit of time associated with the ingress edge device 231 ; the theoretical upper limit of time 2 is equal to the sum of the theoretical upper limit of time associated with the ingress edge device 231 and the theoretical upper limit of time associated with the network device 232 .
  • the actual time n refers to the actual time that the data packet experiences inside the network devices it passes through from reference time 1 to time n+1.
  • Time n+1 is the time when the data packet enters the queue system of the n+1th network device.
  • for example, time 2 is the moment when the data packet enters the queue system of the second network device (i.e., network device 232 ), and time 3 is the moment when the data packet enters the queue system of the third network device (i.e., network device 233 ).
  • Actual time 1 is the actual time that the data packet experiences within the network device from reference time 1 to time 2;
  • Actual time 2 is the actual time that the data packet experiences within the network device from reference time 1 to time 3.
  • the actual time associated with the sending end device refers to the actual time that the data packet experiences inside the network devices it passes through from the reference time of the sending end device to the time when the data packet enters the queue system of the receiving end device.
  • the sender device is the ingress edge device 231
  • the actual time associated with the ingress edge device 231 is the actual time that the data packet passes through the network device from reference time 1 to time 2.
  • the sender device is the network device 232
  • the actual time associated with the network device 232 is the actual time that the data packet experiences inside the network devices from reference time 2 to time 3.
  • the actual time 1 is equal to the actual time associated with the ingress edge device 231 .
  • Actual Time 2 is equal to the sum of Actual Time 1 and the actual time associated with network device 232.
  • the actual time 3 is equal to the sum of the actual time 2 and the actual time associated with the network device 233 .
  • the actual time does not include the transmission delay of transmitting packets between network devices.
  • the remaining processing time n is the difference between the theoretical upper limit of time n and the actual time n.
  • the remaining processing time 1 is the difference between the theoretical upper limit of time 1 and the actual time 1.
  • the remaining processing time associated with the sender device refers to the difference between the theoretical upper limit of time associated with the sender device and the actual time associated with the sender device.
  • the remaining processing time associated with the ingress edge device 231 is the difference between the theoretical upper time limit associated with the ingress edge device 231 and the actual time associated with the ingress edge device 231 .
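The running definitions above (theoretical upper limit of time n, actual time n, remaining processing time n) amount to two cumulative sums and a difference. A minimal sketch, with illustrative names for the per-hop quantities:

```python
def remaining_after_hop(per_hop_caps, per_hop_actuals):
    """Compute the remaining processing time after each hop.

    per_hop_caps[i]: theoretical upper limit of time associated with hop i
    per_hop_actuals[i]: actual time associated with hop i
    remaining processing time n = (theoretical upper limit of time n)
                                - (actual time n), both cumulative sums.
    """
    cap = 0.0
    actual = 0.0
    remaining = []
    for d_max, d_act in zip(per_hop_caps, per_hop_actuals):
        cap += d_max                     # theoretical upper limit of time n
        actual += d_act                  # actual time n
        remaining.append(cap - actual)   # remaining processing time n
    return remaining
```

For example, with per-hop upper limits of 10 each and actual per-hop times of 4 and 7, the remaining processing times after hops 1 and 2 are 6 and 9.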
  • the sending time n refers to the time when the data packet is output from the nth network device
  • the receiving time n refers to the time when the nth network device receives the data packet
  • sending time 1 refers to the time when the data packet is output from the first network device (ie, the ingress edge device 231 ), and sending time 2 refers to the time when the data packet is output from the second network device (ie, the network device 232 ).
  • the reception time 2 refers to the time when the network device 232 receives the data packet
  • the reception time 3 refers to the time when the network device 233 receives the data packet.
  • the accuracy of the moment is the same as the accuracy that can be recognized and expressed by network devices such as routers.
  • the actual occurrence time of the moment can be any time within that precision. For example, if a network device such as a router can recognize and express time with an accuracy of 1 nanosecond (ns), then the accuracy of the moment is also 1 ns. For example, if the sending time 1 is 1 microsecond (μs) plus 1 ns, the actual time of sending time 1 may be any time from 1 μs + 1 ns to 1 μs + 2 ns, such as 1 μs + 1.1 ns or 1 μs + 1.99 ns.
  • in some embodiments, the accuracy of the time may be within a preset accuracy range. For example, if the preset accuracy is 10 μs, the time accuracy is also 10 μs.
  • the actual occurrence time of the moment can be any time within that precision. For example, if the sending time 1 is 13 μs, the actual time of sending time 1 may be any time from 13 μs to 23 μs, such as 15 μs, 16 μs, or 18 μs.
  • Moments can also be represented by predefined numbers.
  • the time is divided into multiple moments according to a predefined granularity, and each moment is represented by a number. For example, 24 hours can be divided into 1440 moments with a granularity of 1 minute, and each moment is represented by an Arabic numeral; for example, 0 represents 0:00 to 0:01, 1 represents 0:01 to 0:02, and so on.
  • the sending time 1 is 16
  • the actual time of sending time 1 can be any time from 0:16 to 0:17, such as 0:16:08, 0:16:31, or 0:16:59.
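The numbering scheme described above can be sketched in a few lines of Python (the function name and granularity constant are illustrative, not part of the embodiments):

```python
# Divide 24 hours into numbered moments at a predefined granularity,
# as in the example above (1-minute granularity, 1440 moments).
GRANULARITY_S = 60                                # 1 minute, in seconds
MOMENTS_PER_DAY = 24 * 3600 // GRANULARITY_S      # 1440

def moment_number(h, m, s):
    """Map a time of day to its predefined moment number."""
    return (h * 3600 + m * 60 + s) // GRANULARITY_S

# 0:16:08, 0:16:31 and 0:16:59 all fall in moment 16,
# i.e. a sending time of "16" covers 0:16 to 0:17.
assert moment_number(0, 16, 8) == moment_number(0, 16, 59) == 16
```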
  • the time information 1 is used to indicate the remaining processing time 1 , and the remaining processing time 1 is the difference between the theoretical upper limit of time 1 and the actual time 1 .
  • the start time of the theoretical upper limit of time 1 is the reference time 1, and the end time is the reference time 2.
  • the starting time of the actual time 1 is the reference time 1, and the ending time of the actual time 1 is the time 2.
  • the queue system of network device 232 includes a plurality of queues that are scheduled round-robin.
  • the multiple queues are always open and scheduled round robin.
  • the multiple queues are in one-to-one correspondence with multiple times. This moment may be referred to as a timestamp.
  • FIG. 5 shows the correspondence between M queues and M moments.
  • M is a positive integer greater than 3.
  • the time interval between any two adjacent queues from queue Q1 to queue QM shown in FIG. 5 is Δ.
  • the starting time is T
  • the time corresponding to queue Q1 is T+Δ
  • the time corresponding to queue Q2 is T+2Δ
  • the time corresponding to queue Q3 is T+3Δ
  • the time corresponding to queue QM is T+MΔ, which is denoted T max.
  • the M queues continue to cycle at the time interval Δ. For example, the time interval between the next time corresponding to queue Q1 and the time corresponding to queue QM is Δ.
  • the time corresponding to queue Q1 in the next cycle is T max + Δ.
  • the time interval between the time corresponding to queue Q2 and the time corresponding to queue Q1 is Δ; in other words, the time corresponding to queue Q2 in the next cycle is T max + 2Δ, and so on.
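The correspondence between the M cyclically scheduled queues and their times can be illustrated as follows; a minimal sketch in which T, Δ and M are hypothetical values and the function name is ours:

```python
# Times corresponding to the M round-robin queues of FIG. 5.
# Queue Qi initially corresponds to T + i*DELTA; after each full
# cycle a queue's time advances by M*DELTA (T_max = T + M*DELTA).

def queue_time(i, cycle, T, delta, M):
    """Time corresponding to queue Qi (1-based) in the given cycle (0-based)."""
    return T + i * delta + cycle * M * delta

T, DELTA, M = 0, 10, 8                                    # hypothetical values
assert queue_time(1, 0, T, DELTA, M) == T + DELTA          # Q1: T + Δ
assert queue_time(M, 0, T, DELTA, M) == T + M * DELTA      # QM: T_max
assert queue_time(1, 1, T, DELTA, M) == T + M * DELTA + DELTA  # Q1 next cycle: T_max + Δ
```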
  • the network device 232 determines which queue to add the packet to according to reference time 2. The queue to which the data packet needs to be added is called the target queue, and the time corresponding to the target queue is called the target time. In some embodiments, reference time 2 and the target time satisfy the following relationship: reference time 2 is not greater than the target time, and no time corresponding to any queue lies between reference time 2 and the target time. In other words, reference time 2 falls into one of two cases. Case 1: reference time 2 lies between the times corresponding to two adjacent queues. In this case, the target queue is the queue whose corresponding time is later than reference time 2. For example, taking FIG. 5 as an example, reference time 2 lies between T+Δ and T+2Δ.
  • in this case, the target queue is queue Q2.
  • the reference time 2 is exactly the same as the time corresponding to a certain queue.
  • the target queue is the queue whose corresponding time equals reference time 2. Taking FIG. 5 as an example, assuming that reference time 2 is T+2Δ, the target queue is queue Q2.
  • the target queue can also be determined as follows: first determine the sum of reference time 2 and a preset duration.
  • the sum of reference time 2 and the preset duration may be referred to as reference time 2'.
  • the preset duration may be a fixed preset value, or may be k time intervals, i.e., kΔ, where k is a preset positive integer.
  • reference time 2' and the target time satisfy the following relationship: reference time 2' is not greater than the target time, and no time corresponding to any queue lies between reference time 2' and the target time. In other words, reference time 2' falls into one of two cases. Case 1: reference time 2' lies between the times corresponding to two adjacent queues.
  • the target queue is the queue whose corresponding time is later than the reference time 2'.
  • for example, reference time 2' lies between T+Δ and T+2Δ.
  • in this case, the target queue is queue Q2.
  • the reference time 2' is exactly the same as the time corresponding to a certain queue.
  • the target queue is the queue whose corresponding time equals reference time 2'.
  • the target queue is the queue Q2.
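Both determinations of the target queue (with and without the preset duration) reduce to rounding the reference time up to the next queue time. A sketch, using hypothetical values for T, Δ and M and an illustrative function name:

```python
import math

def target_queue(ref_time, T, delta, M, k=0):
    """Pick the target queue for a packet with the given reference time.

    The target time is the earliest queue time >= ref_time + k*delta;
    k = 0 reproduces the first variant, k > 0 the preset-duration variant."""
    t = ref_time + k * delta
    slot = math.ceil((t - T) / delta)   # number of intervals past T, rounded up
    return (slot - 1) % M + 1           # map the slot onto queue index Q1..QM

T, DELTA, M = 0, 10, 8                  # hypothetical values
# Reference time strictly between T+Δ and T+2Δ -> Q2 (Case 1).
assert target_queue(15, T, DELTA, M) == 2
# Reference time exactly T+2Δ -> Q2 as well (Case 2).
assert target_queue(20, T, DELTA, M) == 2
```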
  • for data packets of the same flow, the value of k may be the same, while for data packets of different flows, the value of k may be different.
  • the queuing system can be implemented on the downlink board of the network device 232 , and can also be implemented on the uplink board of the network device 232 .
  • the unit in the network device 232 that implements the queuing system may be referred to as a queuing system unit.
  • FIG. 6 shows a sequence diagram of processing the packet by the ingress edge device 231 and the network device 232 .
  • FIG. 6 is a sequence diagram of implementing the queuing system on the downlink board.
  • the data packet completes the shaping process at reference time 1 and enters the output queue.
  • the data packet is output from the ingress edge device 231 at transmission time 1, as shown in Figure 6.
  • the packet is input to the network device 232 at reception time 2.
  • the packet leaves the switch fabric of network device 232 at time 2 and enters the queue system.
  • the network device 232 selects a target queue from a plurality of queues included in the queue system according to the reference time 2 .
  • the packet is output from the network device 232 at transmission time 2.
  • the dequeue unit Q and the queue system unit D shown in FIG. 6 and subsequent figures are different units only in the logical division. In terms of specific device form, the two can be the same physical unit.
  • reference time 1 is denoted E_1, transmission time 1 is denoted t_1^out, reception time 2 is denoted t_2^in, time 2 is denoted t'_2^in, reference time 2 is denoted E_2, and transmission time 2 is denoted t_2^out.
  • the theoretical upper limit of time 1 is the theoretical upper limit of the time that the data packet experiences in the ingress edge device 231 and the network device 232 from the time E1 to the time E2 .
  • the theoretical upper limit of time 1 does not include the transmission delay of the data packet from the ingress edge device 231 to the network device 232 .
  • the actual time 1 is the actual time experienced by the data packet at the ingress edge device 231 and the network device 232 from time E 1 to time t' 2 in .
  • the actual time 1 does not include the transmission delay of the data packet from the ingress edge device 231 to the network device 232 .
  • FIG. 7 shows a sequence diagram of processing the packet by the ingress edge device 231 and the network device 232 .
  • FIG. 7 is a sequence diagram of implementing the queuing system on the upstream board.
  • the data packet completes the shaping process at reference time 1 and enters the output queue.
  • the data packet is output from the ingress edge device 231 at transmission time 1.
  • the packet is input to the network device 232 at reception time 2.
  • the packet enters the queue system at time 2.
  • the network device 232 selects a target queue from a plurality of queues included in the queue system according to the reference time 2 .
  • the data packet is output from the network device 232 at transmission time 2.
  • reference time 1 is denoted E_1, transmission time 1 is denoted t_1^out, reception time 2 is denoted t_2^in, time 2 is denoted t'_2^in, reference time 2 is denoted E_2, and transmission time 2 is denoted t_2^out.
  • the theoretical upper limit of time 1 is the theoretical upper limit of the time that the data packets experience in the ingress edge device 231 and the network device 232 from time E1 to time E2 .
  • the theoretical upper limit of time 1 does not include the transmission delay of the data packet from the ingress edge device 231 to the network device 232 .
  • the actual time 1 is the actual time experienced by the data packet at the ingress edge device 231 and the network device 232 from time E 1 to time t' 2 in .
  • the actual time 1 does not include the transmission delay of the data packet from the ingress edge device 231 to the network device 232 .
  • D 1 max is used below to represent the theoretical upper limit of time associated with the ingress edge device 231
  • D 1 r is used to represent the actual time associated with the ingress edge device 231 .
  • the actual time associated with the ingress edge device 231 can be expressed by the following formula: D_1^r = (t_1^out − E_1) + (t'_2^in − t_2^in) (Equation 4.2)
  • the remaining processing time associated with the ingress edge device can be denoted by D'_1^res.
  • D'_1^res, D_1^max and D_1^r satisfy the following relationship: D'_1^res = D_1^max − D_1^r (Equation 4.3)
  • substituting Equation 4.2 into Equation 4.3 yields the following formula: D'_1^res = D_1^max − (t_1^out − E_1) − (t'_2^in − t_2^in)
  • the remaining processing time associated with the ingress edge device 231 is equal to the remaining processing time 1.
  • the remaining processing time 1 can be denoted D_1^res.
  • that is, D'_1^res = D_1^res.
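The relationship between Equations 4.2 and 4.3 can be checked numerically. The sketch below uses hypothetical microsecond values and illustrative function and parameter names:

```python
def remaining_at_ingress(D1_max, E1, t1_out, t2_in, t2p_in):
    """D'_1^res = D_1^max - D_1^r  (Eq. 4.3), with
       D_1^r = (t_1^out - E_1) + (t'_2^in - t_2^in)  (Eq. 4.2)."""
    D1_r = (t1_out - E1) + (t2p_in - t2_in)   # actual time; transmission delay excluded
    return D1_max - D1_r

# Hypothetical numbers: a 30 us theoretical upper limit, 5 us spent in the
# ingress edge device after E_1, and 3 us inside network device 232 before
# the packet enters the queue system.
assert remaining_at_ingress(30, E1=0, t1_out=5, t2_in=12, t2p_in=15) == 22
```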
  • the time information 1 is used to indicate the remaining processing time 1 .
  • the network device 232 can learn t' 2 in and t 2 in by itself. Therefore, the time information 1 can indicate the difference between the theoretical upper limit of time 1 and the actual time 1 by indicating the difference between t 1 out and E 1 .
  • the time information 1 may include first time indication information.
  • the first time indication information is used to indicate the time from the reference time of the sending end device to the time when the data packet is output from the sending end device. Subsequent network devices will also send time information including the first time indication information.
  • the first time indication information in the time information 1 is hereinafter referred to as the first time indication information 1 .
  • the sender device is the ingress edge device 231. Therefore, the first time indication information 1 is used to indicate the time from E 1 to t 1 out .
  • the first time indication information 1 includes a value at time t 1 out and a value at time E 1 .
  • the network device 232 can calculate the difference between t 1 out and E 1 by itself according to the value of time t 1 out and the value of time E 1 .
  • the first time indication information 1 may directly carry the difference between t 1 out and E 1 .
  • the ingress edge device 231 can calculate the difference between t 1 out and E 1 , and send the difference between t 1 out and E 1 to the network device 232 through the first time indication information 1 .
  • the time information 1 may further include second time indication information.
  • the second time indication information is used to indicate the accumulated remaining processing time. If the sender device is the nth network device in the network, the accumulated remaining processing time is the remaining processing time n-1. In other words, the accumulated remaining processing time is the difference between the theoretical upper limit of time n-1 and the actual time n-1.
  • the theoretical upper limit of time n-1 is the theoretical upper limit of the time of the network device through which the data packets pass from the reference time 1 to the reference time n.
  • the actual time n−1 is the actual time experienced inside the network devices that the data packet passes through from reference time 1 to time n.
  • in other words, the theoretical processing time upper limit n−1 is the theoretical upper limit of the processing time of the network devices that the data packet passes through from reference time 1 to the reference time of the sender device, and the actual time n−1 is the actual time from reference time 1 to the moment when the data packet enters the queue system of the sender device.
  • the second time indication information in the time information 1 is hereinafter referred to as the second time indication information 1 .
  • the start time and end time of the theoretical processing time upper limit 0 used to calculate the remaining processing time 0 are both reference time 1; the start time of the actual time 0 is reference time 1, and the end time is also reference time 1 (in the ingress edge device 231, the enqueuing reference time is the same as the time of entering the queue system). Therefore, the value indicated by the second time indication information 1 is 0.
  • the remaining processing time 0 can be represented by D 0 res .
  • the second time indication information may directly carry the indicated value, that is, 0.
  • the time information may not carry the second time indication information, or the value of the second time indication information may be a preset fixed value. If the time information received by the network device does not include the second time indication information, or the value of the second time indication information is the preset fixed value, the network device can determine that the accumulated remaining processing time up to the reference time of the sender device has a value of 0.
  • the time information 1 may further include third time indication information.
  • the third time indication information is used to indicate the theoretical upper limit of time associated with the sending end device.
  • the third time indication information in the time information 1 is hereinafter referred to as the third time indication information 1.
  • the sending end device is the ingress edge device 231, and the receiving end device is the network device 232. Therefore, the third time indication information 1 is used to indicate the theoretical upper limit of the processing time of the network device through which the data packet passes from the reference time 1 to the reference time 2, that is, D 1 max .
  • D 1 max may be pre-configured in the network device 232 or a preset default value. In this case, the time information 1 may not need to include the third time indication information 1 for indicating D 1 max .
  • the network device 232 determines a target queue from a plurality of queues according to the time t 2 in and the time information 1, and adds the data packet to the target queue.
  • the network device 232 selects the target queue from the plurality of queues according to the reference time 2, ie, E 2 .
  • E_2, the reference time 2, is the sum of t'_2^in and the remaining processing time 1, that is, E_2 = t'_2^in + D_1^res.
  • the remaining processing time 1 is the sum of the remaining processing time 0 indicated by the time information 1 and the remaining processing time associated with the ingress edge device 231; that is, the following formula is satisfied: D_1^res = D_0^res + D'_1^res
  • the network device 232 can determine E 2 according to the time t 2 in and the received time information 1 .
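The determination of E_2 from the reception-side measurements and the received time information 1 can be sketched as follows (all values hypothetical; D_0^res = 0 on the first hop):

```python
def reference_time_2(t2p_in, D0_res, D1_max, E1, t1_out, t2_in):
    """E_2 = t'_2^in + D_1^res, with
       D_1^res = D_0^res + D_1^max - (t_1^out - E_1) - (t'_2^in - t_2^in)."""
    D1_res = D0_res + D1_max - (t1_out - E1) - (t2p_in - t2_in)
    return t2p_in + D1_res

# With D_0^res = 0, the packet's enqueuing reference time is its entry
# into the queue system plus whatever part of the budget is still unused.
assert reference_time_2(t2p_in=15, D0_res=0, D1_max=30, E1=0, t1_out=5, t2_in=12) == 37
```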
  • the network device 232 sends the data packet to the next-hop network device (ie, the network device 233 ), and the network device 232 also sends the time information to the network device 233 .
  • the network device 232 may schedule the multiple queues according to the scheduling rule.
  • the scheduling rule adopted by the network device 232 is round-robin scheduling. In other words, the multiple queues are scheduled in turn: when a queue is polled, it is scheduled until it is drained, and no other queue is allowed to be scheduled in the meantime. To achieve this behavior, a very large quantum can be set; setting the quantum very large ensures that when a queue is polled, other queues are not allowed to be scheduled until the polled queue is drained.
  • the network device 232 polls the plurality of queues in a ring fashion. If the polled queue is not empty, the queue is scheduled until it is empty; if the queue is empty, it is skipped directly. If the queue system is implemented on the downlink board, the scheduled data packets are input to the output queue and the dequeue unit is responsible for subsequent processing; this subsequent processing is the same as the existing processing of data packets and, for brevity, is not repeated here. If the queue system is implemented on the uplink board, the scheduled data packets enter the switching system; the process of entering the switching system and the subsequent processing are likewise the same as the existing processing of data packets, which is not repeated here for brevity.
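The polling behavior described above (drain a non-empty queue completely, skip an empty one) can be sketched as follows; an illustrative Python model, not the device implementation:

```python
from collections import deque

def schedule_cycle(queues):
    """One polling pass over the ring of queues: a non-empty queue is
    drained completely before moving on; empty queues are skipped."""
    out = []
    for q in queues:          # ring order Q1..QM
        while q:              # effectively an unbounded quantum
            out.append(q.popleft())
    return out

qs = [deque(['a1', 'a2']), deque(), deque(['c1'])]
assert schedule_cycle(qs) == ['a1', 'a2', 'c1']
assert all(len(q) == 0 for q in qs)   # every polled queue was drained
```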
  • Time information 2 is different from time information 1.
  • Time information 2 is used to indicate remaining processing time 2 .
  • the remaining processing time 2 is the difference between the theoretical upper limit of time 2 and the actual time 2 .
  • the starting time of the theoretical time upper limit 2 is the reference time 1
  • the ending time of the theoretical time upper limit 2 is the reference time 3 .
  • reference time 3 can be denoted by E_3.
  • D 2 max represents the theoretical time upper limit associated with the network device 232 and D 1 max represents the theoretical time upper limit associated with the ingress edge device 231, then the theoretical time upper limit 2 is the sum of D 1 max and D 2 max .
  • the starting time of the actual time 2 is the reference time 1
  • the ending time of the actual time 2 is the time 3, which can be represented by t' 3 in .
  • the network device 233 can know t' 3 in and t 3 in by itself. t 3 in represents reception time 3 . Therefore, the time information 2 can indicate the remaining processing time 2 by indicating t 2 out and E 2 and the remaining processing time 1 , where t 2 out represents the transmission time 2 .
  • the time information 2 may include first time indication information.
  • the first time indication information included in the time information 2 may be referred to as the first time indication information 2 .
  • the sending end device is the network device 232; therefore, the first time indication information 2 is used to indicate the time from E_2 to t_2^out.
  • the first time indication information 2 may carry the value of time t_2^out and the value of time E_2, or may carry the difference between t_2^out and E_2.
  • the time information 2 may include second time indication information.
  • the second time indication information included in the time information 2 may be referred to as the second time indication information 2 .
  • the sending end device is the network device 232. Therefore, the start time of the theoretical processing time upper limit corresponding to the remaining processing time indicated by the second time indication information 2 is reference time 1, and its end time is reference time 2; the start time of the corresponding actual time is reference time 1, and its end time is time 2. It can be seen that the accumulated remaining processing time indicated by the second time indication information 2 is the remaining processing time 1, that is, D_1^res.
  • the time information 2 may include third time indication information.
  • the third time indication information included in the time information 2 may be referred to as the third time indication information 2 .
  • the sending end device is the network device 232
  • the receiving end device is the network device 233. Therefore, the third time indication information 2 is used to indicate the theoretical upper limit of processing time of the network device through which the data packet passes from reference time 2 to reference time 3, ie, the theoretical upper limit of time associated with the network device 232, ie D 2 max .
  • the time information 2 may not need to carry the third time indication information 2 .
  • the network device 233 determines a target queue from multiple queues according to the time t 3 in and the time information 2 and adds the data packet to the target queue.
  • the manner in which the network device 233 determines the target queue is similar to the manner in which the network device 232 determines the target queue.
  • the reference time 3 is the sum of time t'_3^in and the remaining processing time 2, that is, E_3 = t'_3^in + D_2^res.
  • D_2^res can be determined according to the following formula: D_2^res = D_1^res + D'_2^res
  • D'_2^res can be determined according to the following formula: D'_2^res = D_2^max − (t_2^out − E_2) − (t'_3^in − t_3^in)
  • the network device 233 determines E 3 according to the time t 3 in and the time information 2 . After the reference time 3 is determined, the network device 233 may determine the target queue according to the reference time 3, and add the data packet to the determined target queue.
  • the scheduling method for the target queue is the same as that in the above-mentioned embodiment, and for the sake of brevity, it will not be repeated here.
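The way network device 233 derives E_3 from the received time information 2 can be sketched numerically (all values hypothetical, function and parameter names ours):

```python
def reference_time_3(t3p_in, D1_res, D2_max, E2, t2_out, t3_in):
    """E_3 = t'_3^in + D_2^res, with
       D_2^res = D_1^res + D'_2^res and
       D'_2^res = D_2^max - (t_2^out - E_2) - (t'_3^in - t_3^in)."""
    D2p_res = D2_max - (t2_out - E2) - (t3p_in - t3_in)
    D2_res = D1_res + D2p_res
    return t3p_in + D2_res

# Hypothetical numbers: E_2 = 37, the packet leaves device 232 at 45,
# is received by device 233 at 50, and enters its queue system at 52.
assert reference_time_3(t3p_in=52, D1_res=22, D2_max=30, E2=37, t2_out=45, t3_in=50) == 94
```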
  • FIG. 8 shows a sequence diagram of processing the packet by the network device 232 and the network device 233 .
  • Figure 8 is a sequence diagram of implementing the queuing system on the downlink board.
  • FIG. 9 shows a sequence diagram of processing the packet by the network device 232 and the network device 233 .
  • FIG. 9 is a sequence diagram of implementing the queuing system on the upstream board.
  • the network device 233 may send the received data packet to the next-hop network device (ie, the network device 234).
  • Network device 233 may also send time information to network device 234 .
  • the network device 234 determines a target queue from multiple queues according to the reception time 4 and the received time information, and adds the data packet to the target queue.
  • the specific implementation manner of step 406 is similar to that of step 404, and the specific implementation manner of step 407 is similar to that of step 405, which are not repeated here for brevity.
  • the network device 234 sends the data packet to the egress edge device 235 , and the egress edge device 235 sends the data packet to the user equipment 221 through the edge network 220 .
  • Network device 234 may also send time information to egress edge device 235 .
  • the egress edge device 235 may determine a target queue from a plurality of queues according to the reception time 5 and the received time information and add the data packet to the target queue.
  • the specific implementation manner of step 408 is similar to that of step 404 and step 405, and for brevity, it is not repeated here.
  • FIG. 10 is a schematic diagram of D max corresponding to each network device in the core network 230 when a circular queue is implemented on the downlink board.
  • FIG. 11 is a schematic diagram of D max corresponding to each network device in the core network 230 when a circular queue is implemented on the uplink board.
  • D_0^max in Figures 10 and 11 represents the theoretical upper limit of the shaping delay and the processing delay on the first hop. If E_1 is set at the moment when the ingress edge device 231 receives the data packet, then D_0^max is contained within D_1^max. D_5^max represents the theoretical upper limit of time from E_5 to t_5^out (i.e., the moment when the data packet is output from the egress edge device 235).
  • Case 1: strict waiting at every hop. If a packet encounters the worst case at each hop, then it experiences the corresponding D^max on each hop. In this case, the total end-to-end delay (i.e., from the time the data packet is received by the ingress edge device 231 until the data packet is sent out from the egress edge device 235) is capped at the sum of all D^max values in FIG. 10 or FIG. 11.
  • Case 2: if the data packet does not experience the worst case on some hop device, that is, the actually experienced delay is less than the D^max corresponding to that hop device, then the remaining time of the data packet is recorded and used as the enqueuing criterion on subsequent hops. If congestion occurs on a certain device, a data packet sent in advance carries a larger remaining time D^res than a data packet that has undergone strict waiting, so the queue to which the early packet is added will be scheduled later; the packet is dispatched only after it has waited long enough. At that point it is equivalent to a strict hop-by-hop wait before being sent, so it is still sent out within D^max.
  • suppose the network device 233 receives data packet 1 and data packet 2, where data packet 1 is a strictly waiting packet; then the value of D_2^res of packet 1 is 0. Packet 2 is a packet sent in advance, so D_2^res of packet 2 is greater than 0.
  • the network device 233 can directly select the target queue for data packet 1 according to the time when data packet 1 is received, but needs to select the target queue for data packet 2 according to the sum of the time when data packet 2 is received and the D_2^res of data packet 2. Referring to FIG. 5, it is possible that the target queue for packet 1 is queue Q1 while the target queue for packet 2 is queue Q2. In this case, the priority of packet 2 is lower than that of packet 1, so packet 2 is scheduled later than packet 1.
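The effect described above — an early-sent packet with a larger D^res landing in a later queue than a strictly waited packet — can be sketched as follows (arrival time, remaining times and queue parameters are hypothetical):

```python
import math

def enqueue_slot(arrival, D_res, T=0, delta=10):
    """Queue slot index chosen for a packet: the earliest queue time
    that is >= arrival + remaining processing time D_res."""
    return math.ceil((arrival + D_res - T) / delta)

# Packet 1 waited strictly at every hop: D_2^res = 0.
# Packet 2 was sent ahead of schedule:   D_2^res > 0.
# Both arrive at (hypothetical) time 11.
slot1 = enqueue_slot(11, D_res=0)    # maps to an earlier queue
slot2 = enqueue_slot(11, D_res=15)   # maps to a later queue, scheduled later
assert slot1 < slot2
```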
  • the technical solution of the present application can ensure that the upper bound of the end-to-end delay of any flow does not exceed the sum of D max of all outgoing interfaces on the flow path.
  • FIG. 12 is a schematic flowchart of a method for scheduling data packets according to an embodiment of the present application.
  • the first network device receives a data packet from a second network device in the network at the first moment.
  • the first network device determines a first reference time according to the time information carried by the data packet and the first time, where the first reference time is the queue in which the data packet enters the first queue system of the first network device reference time.
  • the first network device determines a target queue from a plurality of queues included in the first queue system according to the first reference time and adds the data packet to the target queue, where the time information is used to indicate a first remaining processing time, and the first remaining processing time is the difference between a first theoretical upper limit of time for N network devices to process the data packet and a first actual time
  • the N network devices include the network devices that the data packet passes through after entering the network and before reaching the first network device, where N is a positive integer greater than or equal to 1
  • the first theoretical upper limit of time is the theoretical upper limit of the time the data packet has experienced inside the network devices from the initial reference time to the first reference time
  • the first actual time is the actual time the data packet has experienced inside the network devices from the initial reference time to the second time
  • the initial reference time is the reference time at which the data packet enters the queue system of the first network device among the N network devices
  • the second time is the time when the data packet enters the first queue system
  • the first network device processes the target queue according to the scheduling rules of the multiple queues.
  • the first network device may be the network device 232 in the above embodiment.
  • the second network device is the ingress edge device 231 .
  • the first network device is the network device 233 in the foregoing embodiment.
  • the second network device is network device 232 .
  • the first network device is the network device 234 in the foregoing embodiment.
  • the second network device is network device 233 .
  • the first network device is the network device 232
  • the first time is t 2 in
  • the second time is t' 2 in
  • the first reference time is E 2
  • the initial reference time is E 1
  • the first remaining time The processing time is D 1 res .
  • the time information includes first time indication information, and the first time indication information is used to indicate the time from the second reference time to the second output time
  • the second reference time is the reference time of the data packet in the second network device, and the second output time is the time when the data packet is output from the second network device.
  • the first network device is the network device 232
  • the second reference time is E 1
  • the second output time is t 1 out .
  • the time information further includes second time indication information, where the second time indication information is used to indicate a second remaining processing time
  • the second remaining processing time is the difference between the second theoretical upper limit of time and the second actual time.
  • the second theoretical upper limit of time is the theoretical upper limit of the time that the data packet has experienced inside the network devices from the initial reference time to the second reference time.
  • the second actual time is the actual time that the data packet has experienced inside the network device from the initial reference time to the time when the data packet enters the queue system of the second network device.
  • the first network device is network device 232 and the second remaining processing time is D 0 res .
  • the time information further includes third time indication information, where the third time indication information is used to indicate a third theoretical upper limit of time associated with the second network device, and the third theoretical upper limit of time is the theoretical upper limit of the time that the data packet experiences inside the network devices from the second reference time to the first reference time.
  • the first network device is the network device 232
  • the third theoretical upper limit of time is D 1 max .
  • the plurality of queues are in one-to-one correspondence with a plurality of preset times
  • the first network device determines a target queue from a plurality of queues included in the first queue system according to the first reference time, including: the first network device determines the queue corresponding to the target time as the target queue according to the first reference time, wherein the first reference time is not greater than the target time, none of the plurality of preset times lies between the first reference time and the target time, and the target time is one of the plurality of preset times.
  • the first network device determining the first reference moment according to the time information and the first moment includes: the first network device determining the first reference moment according to the following formula: E_{h+1} = t_{h+1}^in + D_h^res + D_h^max − (t_h^out − E_h)
  • where E_{h+1} represents the first reference time, D_h^res represents the second remaining processing time, D_h^max represents the third theoretical upper limit of processing time associated with the second network device, t_h^out represents the second output time, E_h represents the second reference time, and t_{h+1}^in represents the first time.
  • the first network device determining the first reference time according to the time information and the first time includes: the first network device determining the first reference time according to the second time, the time information and the first time the first reference time.
  • the first network device determining the first reference time according to the second time, the time information and the first time includes: the first network device determining the third remaining processing time according to the following formula: D_{h+1} = D_h^res + D_h^max − (t_h^out − E_h) − (t'_{h+1}^in − t_{h+1}^in)
  • where D_{h+1} represents the third remaining processing time, D_h^res represents the second remaining processing time, D_h^max represents the third theoretical upper limit of processing time associated with the second network device, t_h^out represents the second output time, E_h represents the second reference time, t_{h+1}^in represents the first time, and t'_{h+1}^in represents the second time.
  • the first network device determines the sum of the third remaining processing time and the second time is the first reference time.
  • FIG. 13 is a schematic structural block diagram of a network device provided according to an embodiment of the present application.
  • the network device 1300 shown in FIG. 13 includes a receiving unit 1301 and a processing unit 1302.
  • the receiving unit 1301 is configured to receive a data packet from a second network device in the network at the first moment.
  • the processing unit 1302 is configured to determine a first reference moment according to the time information carried by the data packet and the first moment, where the first reference moment is the reference moment when the data packet enters the queue in the first queue system of the network device .
  • the processing unit 1302 is further configured to determine a target queue from a plurality of queues included in the first queue system according to the first reference moment and add the data packet to the target queue, where the time information indicates a first remaining processing time, the first remaining processing time is the difference between a first theoretical time upper limit and a first actual time for N network devices to process the data packet, the N network devices are the network devices that the data packet passes through after entering the network and before reaching this network device, N is a positive integer greater than or equal to 1, the first theoretical time upper limit is the theoretical upper limit of the time the data packet spends inside network devices from an initial reference moment to the first reference moment, the first actual time is the actual time the data packet spends inside network devices from the initial reference moment to a second moment, the initial reference moment is the reference moment at which the data packet enters the queue system of the first of the N network devices, and the second moment is the moment at which the data packet enters the first queue system.
  • the processing unit 1302 is further configured to process the target queue according to the scheduling rules of the multiple queues.
  • the network device 1300 may further include a sending unit 1303 .
  • the sending unit 1303 may send the data packet to the next device according to the scheduling result of the processing unit 1302 .
  • the receiving unit 1301 and the transmitting unit 1303 may be implemented by a receiver.
  • the processing unit 1302 may be implemented by a processor.
  • For the specific functions and beneficial effects of the receiving unit 1301, the processing unit 1302, and the sending unit 1303, reference may be made to the foregoing embodiments; for brevity, details are not repeated here.
  • the embodiment of the present application also provides a processing apparatus, including a processor and an interface.
  • the processor may be used to execute the methods in the above method embodiments.
  • the above processing device may be a chip.
  • the processing apparatus may be a field programmable gate array (FPGA), an application-specific integrated circuit (ASIC), a system on chip (SoC), a central processing unit (CPU), a network processor (NP), a digital signal processor (DSP), a micro controller unit (MCU), a programmable logic device (PLD), another programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or another integrated chip.
  • each step of the above method can be completed by an integrated logic circuit of hardware in a processor, or by instructions or program code in the form of software.
  • the steps of the methods disclosed in conjunction with the embodiments of the present application may be directly embodied as executed by a hardware processor, or executed by a combination of hardware and software modules in the processor.
  • the software modules may be located in random access memory, flash memory, read-only memory, programmable read-only memory or electrically erasable programmable memory, registers and other storage media mature in the art.
  • the storage medium is located in the memory, and the processor reads the information in the memory, and completes the steps of the above method in combination with its hardware. To avoid repetition, detailed description is omitted here.
  • the processor in this embodiment of the present application may be an integrated circuit chip, which has a signal processing capability.
  • the steps of the above method embodiments may be completed by hardware integrated logic circuits in the processor or instructions or program codes in the form of software.
  • a general purpose processor may be a microprocessor or the processor may be any conventional processor or the like.
  • the steps of the method disclosed in conjunction with the embodiments of the present application may be directly embodied as executed by a hardware decoding processor, or executed by a combination of hardware and software modules in the decoding processor.
  • the software modules may be located in random access memory, flash memory, read-only memory, programmable read-only memory or electrically erasable programmable memory, registers and other storage media mature in the art.
  • the storage medium is located in the memory, and the processor reads the information in the memory, and completes the steps of the above method in combination with its hardware.
  • the memory in this embodiment of the present application may be a volatile memory or a non-volatile memory, or may include both volatile and non-volatile memory.
  • the non-volatile memory may be a read-only memory (ROM), a programmable read-only memory (PROM), an erasable programmable read-only memory (EPROM), an electrically erasable programmable read-only memory (electrically EPROM, EEPROM), or a flash memory.
  • Volatile memory may be random access memory (RAM), which acts as an external cache.
  • By way of example and not limitation, many forms of RAM are available, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchlink DRAM (SLDRAM), and direct rambus RAM (DR RAM).
  • the present application also provides a computer program product, including computer program code that, when run on a computer, causes the computer to execute the method of any one of the above embodiments.
  • the present application also provides a computer-readable medium storing program code that, when run on a computer, causes the computer to execute the method of any one of the above embodiments.
  • the present application further provides a network system, which includes the foregoing multiple network devices.
  • the disclosed system, apparatus and method may be implemented in other manners.
  • the apparatus embodiments described above are only illustrative.
  • the division of the units is only a logical function division. In actual implementation, there may be other division methods.
  • multiple units or components may be combined or integrated into another system, or some features may be ignored or not implemented.
  • the mutual coupling, direct coupling, or communication connection shown or discussed may be indirect coupling or communication connection through some interfaces, apparatuses, or units, and may be in electrical, mechanical, or other forms.
  • the units described as separate components may or may not be physically separated, and components displayed as units may or may not be physical units, that is, may be located in one place, or may be distributed to multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution in this embodiment.
  • each functional unit in each embodiment of the present application may be integrated into one processing unit, or each unit may exist physically alone, or two or more units may be integrated into one unit.
  • the functions, if implemented in the form of software functional units and sold or used as independent products, may be stored in a computer-readable storage medium.
  • In essence, the technical solution of the present application, or the part that contributes to the prior art, or a part of the technical solution, may be embodied in the form of a software product.
  • The computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to execute all or part of the steps of the methods described in the various embodiments of the present application.
  • The aforementioned storage medium includes any medium that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (Read-Only Memory, ROM), a random access memory (Random Access Memory, RAM), a magnetic disk, or an optical disc.

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

This application provides a method for scheduling data packets and a related apparatus. The method includes: a first network device receives, at a first moment, a data packet from a second network device in a network; the first network device determines a first reference moment according to time information carried by the data packet and the first moment; the first network device determines a target queue from a plurality of queues included in a first queue system according to the first reference moment and adds the data packet to the target queue; and the first network device processes the target queue according to the scheduling rules of the plurality of queues. With the above technical solution, the upper bound of the end-to-end delay of any flow does not exceed the sum of the theoretical time upper limits of all outbound interfaces on the flow's path, so the end-to-end delay is controllable.

Description

Method for scheduling data packets and related apparatus

This application claims priority to Chinese Patent Application No. 202010760185.1, filed with the China National Intellectual Property Administration on July 31, 2020 and entitled "Method for scheduling data packets and related apparatus", which is incorporated herein by reference in its entirety.

Technical Field

This application relates to the field of communication technologies, and more specifically, to a method for scheduling data packets and a related apparatus.
Background

A deterministic delay means that, provided a data packet conforms to certain burstiness constraints, the delay and jitter experienced during its transmission satisfy fixed upper bounds. To guarantee deterministic end-to-end delay and jitter for data packets, scalable deterministic packet scheduling must be implemented in the data plane.

Existing scheduling methods (for example, weighted fair queuing and earliest deadline first (EDF)) cannot meet the requirement of deterministic delay.
Summary

This application provides a method for scheduling data packets and a related apparatus, which can make the end-to-end delay controllable.

According to a first aspect, an embodiment of this application provides a method for scheduling data packets, including: a first network device receives, at a first moment, a data packet from a second network device in a network; the first network device determines a first reference moment according to time information carried by the data packet and the first moment, where the first reference moment is a reference moment at which the data packet enters a queue in a first queue system of the first network device; the first network device determines a target queue from a plurality of queues included in the first queue system according to the first reference moment and adds the data packet to the target queue, where the time information indicates a first remaining processing time, the first remaining processing time is the difference between a first theoretical time upper limit and a first actual time for N network devices to process the data packet, the N network devices are the network devices that the data packet passes through after entering the network and before reaching the first network device, N is a positive integer greater than or equal to 1, the first theoretical time upper limit is the theoretical upper limit of the time the data packet spends inside network devices from an initial reference moment to the first reference moment, the first actual time is the actual time the data packet spends inside network devices from the initial reference moment to a second moment, the initial reference moment is the reference moment at which the data packet enters the queue system of the first of the N network devices, and the second moment is the moment at which the data packet enters the first queue system; and the first network device processes the target queue according to scheduling rules of the plurality of queues.

With the above technical solution, the upper bound of the end-to-end delay of any flow does not exceed the sum of the theoretical time upper limits of all outbound interfaces on the flow's path. The end-to-end delay is therefore controllable, and a deterministic end-to-end delay can be provided for data flows.
With reference to the first aspect, in a possible design, the time information includes first time indication information, where the first time indication information indicates the time from a second reference moment to a second output moment, the second reference moment is the reference moment of the data packet at the second network device, and the second output moment is the moment at which the data packet is output from the second network device.

With reference to the first aspect, in a possible design, when N is a positive integer greater than or equal to 2, the time information further includes second time indication information, where the second time indication information indicates a second remaining processing time, the second remaining processing time is the difference between a second theoretical time upper limit and a second actual time, the second theoretical time upper limit is the theoretical upper limit of the time the data packet spends inside network devices from the initial reference moment to the second reference moment, and the second actual time is the actual time the data packet spends inside network devices from the initial reference moment to the moment at which the data packet enters the queue system of the second network device.

With reference to the first aspect, in a possible design, the time information further includes third time indication information, where the third time indication information indicates a third theoretical time upper limit associated with the second network device, and the third theoretical time upper limit is the theoretical upper limit of the time the data packet spends inside network devices from the second reference moment to the first reference moment.

With reference to the first aspect, in a possible design, the plurality of queues are in one-to-one correspondence with a plurality of preset moments, and the determining, by the first network device, a target queue from the plurality of queues included in the first queue system according to the first reference moment includes: the first network device determines, according to the first reference moment, the queue corresponding to a target moment as the target queue, where the first reference moment is not later than the target moment, none of the plurality of preset moments lies between the first reference moment and the target moment, and the target moment is one of the plurality of preset moments.
With reference to the first aspect, in a possible design, the determining, by the first network device, a first reference moment according to the time information and the first moment includes: the first network device determines the first reference moment according to the following formula: E_{h+1} = D_h^res + [D_h^max - (t_h^out - E_h)] + t_{h+1}^in, where E_{h+1} denotes the first reference moment, D_h^res denotes the second remaining processing time, D_h^max denotes the third theoretical processing time upper limit associated with the second network device, t_h^out denotes the second output moment, E_h denotes the second reference moment, and t_{h+1}^in denotes the first moment. The first reference moment is the reference moment at which the data packet enters a queue in the first queue system of the first network device. The second remaining processing time is the difference between the second theoretical time upper limit and the second actual time, where the second theoretical time upper limit is the theoretical upper limit of the time the data packet spends inside network devices from the initial reference moment to the second reference moment, and the second actual time is the actual time the data packet spends inside network devices from the initial reference moment to the moment at which it enters the queue system of the second network device. The second output moment is the moment at which the data packet is output from the second network device. The second reference moment is the reference moment of the data packet at the second network device. The first moment is the moment at which the first network device receives the data packet.

With reference to the first aspect, in a possible design, the determining, by the first network device, a first reference moment according to the time information and the first moment includes: the first network device determines the first reference moment according to the second moment, the time information, and the first moment.

With reference to the first aspect, in a possible design, the determining, by the first network device, the first reference moment according to the second moment, the time information, and the first moment includes: the first network device determines a third remaining processing time according to the following formula: D_{h+1}^res = D_h^res + [D_h^max - (t_h^out - E_h) - (t'_{h+1}^in - t_{h+1}^in)], where D_{h+1}^res denotes the third remaining processing time, D_h^res denotes the second remaining processing time, D_h^max denotes the third theoretical processing time upper limit associated with the second network device, t_h^out denotes the second output moment, E_h denotes the second reference moment, t_{h+1}^in denotes the first moment, and t'_{h+1}^in denotes the second moment; the first network device then determines the sum of the third remaining processing time and the second moment as the first reference moment. The second remaining processing time is the difference between the second theoretical time upper limit and the second actual time, where the second theoretical time upper limit is the theoretical upper limit of the time the data packet spends inside network devices from the initial reference moment to the second reference moment, and the second actual time is the actual time the data packet spends inside network devices from the initial reference moment to the moment at which it enters the queue system of the second network device. The second output moment is the moment at which the data packet is output from the second network device. The second reference moment is the reference moment of the data packet at the second network device. The first moment is the moment at which the first network device receives the data packet. The second moment is the moment at which the data packet enters the first queue system.
According to a second aspect, an embodiment of this application provides a network device, including units for implementing the first aspect or any possible design of the first aspect.

According to a third aspect, an embodiment of this application provides a network device, including a processor configured to execute a program stored in a memory, where, when the program is executed, the network device performs the method of the first aspect or any possible design of the first aspect.

With reference to the third aspect, in a possible design, the memory is located outside the network device.

According to a fourth aspect, an embodiment of this application provides a computer-readable storage medium including instructions that, when run on a computer, cause the method of the first aspect or any possible design of the first aspect to be performed.

According to a fifth aspect, an embodiment of this application provides a network device, including a processor, a memory, and instructions stored on the memory and executable on the processor, where, when the instructions are executed, the network device performs the method of the first aspect or any possible design of the first aspect.
Brief Description of Drawings

FIG. 1 is a schematic diagram of how burst accumulation forms.

FIG. 2 is a schematic diagram of a system to which embodiments of this application can be applied.

FIG. 3 is a schematic structural block diagram of a router capable of implementing embodiments of this application.

FIG. 4 is a schematic flowchart of a method for scheduling data packets according to an embodiment of this application.

FIG. 5 shows the correspondence between M queues and M moments.

FIG. 6 is a timing diagram of the ingress edge device 231 and the network device 232 processing the packet.

FIG. 7 is a timing diagram of the ingress edge device 231 and the network device 232 processing the packet.

FIG. 8 is a timing diagram of the network device 232 and the network device 233 processing the packet.

FIG. 9 is a timing diagram of the network device 232 and the network device 233 processing the packet.

FIG. 10 is a schematic diagram of the D^max corresponding to each network device in the core network 230 when the circular queues are implemented on the downlink board.

FIG. 11 is a schematic diagram of the D^max corresponding to each network device in the core network 230 when the circular queues are implemented on the uplink board.

FIG. 12 is a schematic flowchart of a method for scheduling data packets according to an embodiment of this application.

FIG. 13 is a schematic structural block diagram of a network device according to an embodiment of this application.
Detailed Description

The technical solutions in this application are described below with reference to the accompanying drawings.

This application presents various aspects, embodiments, or features around a system that may include multiple devices, components, modules, and the like. It should be understood that each system may include additional devices, components, modules, and the like, and/or may not include all of the devices, components, modules, and the like discussed with reference to the accompanying drawings. In addition, combinations of these solutions may also be used.

In addition, in embodiments of this application, words such as "exemplary" and "for example" are used to mean serving as an example, illustration, or description. Any embodiment or design described as an "example" in this application should not be construed as being preferred or more advantageous than other embodiments or designs. Rather, the word "example" is intended to present a concept in a specific manner.

In embodiments of this application, "corresponding (relevant)" and "corresponding" may sometimes be used interchangeably. It should be noted that, when the distinction is not emphasized, their intended meanings are the same.

In embodiments of this application, a subscript such as W_1 may sometimes be mistakenly written in a non-subscript form such as W1. When the distinction is not emphasized, their intended meanings are the same.

The network architecture and service scenarios described in embodiments of this application are intended to describe the technical solutions of the embodiments more clearly, and do not constitute a limitation on the technical solutions provided in embodiments of this application. A person of ordinary skill in the art knows that, with the evolution of network architectures and the emergence of new service scenarios, the technical solutions provided in embodiments of this application are also applicable to similar technical problems.

Reference to "one embodiment" or "some embodiments" described in this specification means that a particular feature, structure, or characteristic described with reference to the embodiment is included in one or more embodiments of this application. Therefore, phrases such as "in one embodiment", "in some embodiments", "in some other embodiments", and "in still other embodiments" appearing in different places in this specification do not necessarily refer to the same embodiment, but mean "one or more but not all embodiments", unless otherwise specifically emphasized. The terms "include", "comprise", "have", and their variants all mean "including but not limited to", unless otherwise specifically emphasized.

In this application, "at least one" means one or more, and "a plurality of" means two or more. "And/or" describes an association relationship between associated objects and indicates that three relationships may exist. For example, A and/or B may indicate: A alone, both A and B, or B alone, where A and B may be singular or plural. The character "/" generally indicates an "or" relationship between the associated objects. "At least one of the following items" or a similar expression means any combination of these items, including a single item or any combination of plural items. For example, at least one of a, b, or c may indicate a, b, c, a-b, a-c, b-c, or a-b-c, where a, b, and c may each be singular or plural.
Burst accumulation

In an Internet Protocol (IP) network, burst accumulation makes it impossible to provide a deterministic end-to-end delay and jitter for a given flow.

Burst accumulation is the root cause of delay uncertainty. It is formed by different data packets squeezing against one another.

FIG. 1 is a schematic diagram of how burst accumulation forms.

As shown in FIG. 1, the three flows (flow 1, flow 2, and flow 3) are perfectly even when they arrive at node 101 at the same time. Because node 101 can only process packets at line rate, flow 2 is squeezed by the other two flows, so that two consecutive packets end up back to back and the burstiness increases. After several rounds of this process, the flow at some hop forms a large, hard-to-predict burst; the large burst in turn squeezes other flows, increasing their delay in an unpredictable way. Hop-by-hop accumulation of micro-bursts is the root cause of delay uncertainty. Existing solutions to this problem either rely on time synchronization of all devices in the network or impose limits on the transmission distance, and are therefore difficult to apply to large-scale IP networks.
FIG. 2 is a schematic diagram of a system to which embodiments of this application can be applied. The network 200 shown in FIG. 2 may consist of an edge network 210, an edge network 220, and a core network 230.

The edge network 210 includes user equipment 211. The edge network 220 includes user equipment 221. The core network 230 includes an ingress edge device 231, a network device 232, a network device 233, a network device 234, and an egress edge device 235.

As shown in FIG. 2, the user equipment 211 can communicate with the user equipment 221 through the core network.

Embodiments of this application may be implemented by devices in the core network 230, for example, by the ingress edge device 231 or by the network devices 232 to 234.

A device capable of implementing embodiments of this application may be a router, a switch, or the like.
FIG. 3 is a schematic structural block diagram of a router capable of implementing embodiments of this application. As shown in FIG. 3, the router 300 includes an uplink board 301, a switching fabric 302, and a downlink board 303.

The uplink board may also be called an uplink interface board. The uplink board 301 may include a plurality of input ports. The uplink board may decapsulate packets received on the input ports and look up the output port in a forwarding table. Once an output port is found (for ease of description, the found output port is referred to below as the target output port), the packet is sent to the switching fabric 302.

A data packet in embodiments of this application may be a packet at the network layer or a frame at the data link layer.

The switching fabric 302 forwards a received packet to the target output port. Specifically, the switching fabric 302 forwards the received packet to the downlink board 303 that includes the target output port. The downlink board may also be called a downlink interface board. The downlink board 303 includes a plurality of output ports. The downlink board 303 receives packets from the switching fabric 302, may perform buffer management, encapsulation, and other processing on a received packet, and then sends the packet to the next node through the target output port.

It can be understood that the router shown in FIG. 3 shows only one uplink board 301 and one downlink board 303. In some embodiments, a router may include multiple uplink boards and/or multiple downlink boards.

Unless otherwise specified, the times referred to in embodiments of this application (for example, actual time and maximum time) all refer to the time a packet spends inside network devices, and do not include the time for transmitting the packet between network devices.
FIG. 4 is a schematic flowchart of a method for scheduling data packets according to an embodiment of this application. FIG. 4 describes the method with reference to FIG. 2, assuming that the method is applied to the core network 230 in FIG. 2.

The ingress edge device 231 may receive a plurality of flows, and processes each of them in the same way. Assume that the paths of the flows received by the ingress edge device 231 pass, in order, through the ingress edge device 231, the network device 232, the network device 233, the network device 234, and the egress edge device 235. The ingress edge device 231 is the first network device the flows enter in the core network 230, and may therefore also be called the first-hop network device. Correspondingly, the network device 232 is the second-hop network device, the network device 233 is the third-hop network device, the network device 234 is the fourth-hop network device, and the egress edge device 235 is the fifth-hop network device.

For the i-th of the flows, the output port of each network device on the path reserves an average bandwidth r_i for the i-th flow. The flows conform to a traffic model that can be expressed by the following formula:

A_i(t) = r_i × t + B_i, (formula 4.1)

where t is time, A_i(t) is the total traffic of the i-th flow within time t, and B_i is the maximum burst size of the i-th flow.
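As an illustration of the traffic model (a sketch of my own, not part of the patent text), a trace of packet arrivals conforms to A_i(t) = r_i × t + B_i when the bytes arriving in any time window never exceed the rate times the window length plus the burst allowance:

```python
def conforms(arrivals, rate, burst):
    """Check that cumulative traffic satisfies A_i(t) = rate*t + burst.

    arrivals: list of (timestamp, size) tuples sorted by timestamp.
    For every window [t_j, t_k], the bytes arriving in the window must
    not exceed rate * (t_k - t_j) + burst.
    """
    for j in range(len(arrivals)):
        total = 0
        for k in range(j, len(arrivals)):
            total += arrivals[k][1]
            window = arrivals[k][0] - arrivals[j][0]
            if total > rate * window + burst:
                return False
    return True

# 1 byte per time unit on average, with a burst allowance of 2 bytes:
assert conforms([(0, 2), (1, 1), (3, 2)], rate=1, burst=2)
assert not conforms([(0, 2), (0, 2)], rate=1, burst=2)
```

A shaper such as a leaky bucket or token bucket, mentioned later in the text, is what enforces this property at the ingress edge.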
For ease of description, the following describes how network devices in the network process a received flow by taking a target flow as an example. The target flow is any one of the flows received by the ingress edge device 231.

401: The ingress edge device 231 may shape the target flow so that the target flow entering the core network 230 meets the requirements of the traffic model shown in formula 4.1.

The ingress edge device 231 may shape the target flow using an existing shaping algorithm, such as the leaky bucket algorithm or the token bucket algorithm.

The unit in the ingress edge device 231 responsible for shaping the target flow may be called a shaping unit.

402: The ingress edge device 231 sends a data packet of the target flow to the next-hop network device (that is, the network device 232). The ingress edge device 231 also sends time information to the network device 232.

After shaping, the data packet enters an output queue, and by scheduling the output queue the packet can be sent to the network device 232. The unit in the ingress edge device 231 responsible for managing the output queue may be called a dequeue unit, and the unit responsible for scheduling the output queue may be called a scheduling unit.

For ease of description, the time information sent by the ingress edge device 231 to the network device 232 is referred to below as time information 1.
To better understand the technical solutions of this application, some concepts involved in them are introduced below.

1. Reference moment

Each network device in the core network 230 other than the ingress edge device 231 (for example, the network device 232, the network device 233, the network device 234, and the egress edge device 235) has a round-robin-scheduled queue system that includes a plurality of queues. A network device determines, according to a moment, which queue a received data packet is added to. This moment may be called the reference moment.

In some embodiments, the ingress edge device 231 may also have such a round-robin-scheduled queue system. In that case, the ingress edge device 231 likewise determines, according to a moment, which queue a received packet is added to, and this moment may also be called the reference moment.

In other embodiments, the ingress edge device 231 may have no such queue system. In that case, when outputting a packet, the ingress edge device 231 first places the packet into an output queue system. The moment at which the packet enters the output queue system may also be called the reference moment.

In still other embodiments, regardless of whether the ingress edge device 231 has the above round-robin-scheduled queue system, the reference moment of the ingress edge device 231 may be another moment. For example, it may be the moment at which the ingress edge device 231 receives the packet, or the moment at which the packet is output from the ingress edge device 231.

For ease of description, the following embodiments assume that the ingress edge device 231 has no round-robin-scheduled queue system and that its reference moment is the moment of entering the output queue system.

The reference moment used by the n-th network device in the network (n being a positive integer greater than or equal to 1) to determine which queue of its queue system a packet joins may be called reference moment n. For example, reference moment 1 is the reference moment at which a packet enters the (output) queue system of the ingress edge device; reference moment 2 is the reference moment at which the packet enters a queue in the queue system of the network device 232; reference moment 3 is the reference moment at which the packet enters a queue in the queue system of the network device 233.
2. Sending device and receiving device

The sending device is the network device that sends a packet, and the receiving device is the network device that receives it. For example, if the ingress edge device 231 sends a packet to the network device 232, the sending device is the ingress edge device 231 and the receiving device is the network device 232. Similarly, if the network device 232 sends a packet to the network device 233, the network device 232 is the sending device and the network device 233 is the receiving device.

In addition, the reference moment of the sending device is the reference moment at which the packet enters a queue in the sending device's queue system. Correspondingly, the reference moment of the receiving device is the reference moment at which the packet enters a queue in the receiving device's queue system. For example, if the sending device is the ingress edge device 231, the sending device's reference moment is reference moment 1; if the sending device is the network device 232, its reference moment is reference moment 2. Likewise, if the receiving device is the network device 232, the receiving device's reference moment is reference moment 2; if the receiving device is the network device 233, its reference moment is reference moment 3.
3. Theoretical time upper limit

The theoretical time upper limit is the maximum time a network device needs to process a packet, calculated based on network calculus theory. In other words, the time a network device spends processing a packet will not exceed the theoretical time upper limit. The theoretical time upper limit does not include the transmission delay of transferring packets between network devices.

The theoretical time upper limit from the sending device to the receiving device refers to the theoretical time upper limit from the reference moment of the sending device to the reference moment of the receiving device. This theoretical time upper limit may also be called the theoretical time upper limit associated with the sending device. If the sending device is the ingress edge device 231, it may be called the theoretical time upper limit associated with the ingress edge device 231; if the sending device is the network device 232, it may be called the theoretical time upper limit associated with the network device 232.

Theoretical time upper limit n refers to the theoretical time upper limit of the network devices the packet passes through from reference moment 1 to reference moment n+1, where n is a positive integer greater than or equal to 1. For example, theoretical time upper limit 1 is the theoretical time upper limit of the network devices the packet passes through from reference moment 1 to reference moment 2, and theoretical time upper limit 2 is that from reference moment 1 to reference moment 3.

Theoretical time upper limit 1 equals the theoretical time upper limit associated with the ingress edge device 231; theoretical time upper limit 2 equals the sum of the theoretical time upper limit associated with the ingress edge device 231 and that associated with the network device 232.
4. Actual time

Actual time n refers to the actual time the packet spends inside network devices from reference moment 1 to moment n+1, where moment n+1 is the moment the packet enters the queue system of the (n+1)-th network device. For example, the moment the packet enters the queue system of the second network device (the network device 232) may be called moment 2, and the moment it enters the queue system of the third network device (the network device 233) may be called moment 3.

Actual time 1 is the actual time the packet spends inside network devices from reference moment 1 to moment 2; actual time 2 is that from reference moment 1 to moment 3.

The actual time associated with the sending device refers to the actual time the packet spends in the network devices it passes through from the sending device's reference moment to the moment it enters the receiving device's queue system. For example, if the sending device is the ingress edge device 231, the actual time associated with the ingress edge device 231 is the actual time the packet spends in network devices from reference moment 1 to moment 2. Likewise, if the sending device is the network device 232, the actual time associated with the network device 232 is that from reference moment 2 to moment 3.

It can be seen that actual time 1 equals the actual time associated with the ingress edge device 231; actual time 2 equals the sum of actual time 1 and the actual time associated with the network device 232; actual time 3 equals the sum of actual time 2 and the actual time associated with the network device 233.

Like the theoretical time upper limit, the actual time also excludes the transmission delay of transferring packets between network devices.
5. Remaining processing time

Remaining processing time n is the difference between theoretical time upper limit n and actual time n. For example, remaining processing time 1 is the difference between theoretical time upper limit 1 and actual time 1.

The remaining processing time associated with the sending device is the difference between the theoretical time upper limit associated with the sending device and the actual time associated with the sending device. For example, the remaining processing time associated with the ingress edge device 231 is the difference between the theoretical time upper limit associated with the ingress edge device 231 and the actual time associated with the ingress edge device 231.
6. Sending moment and receiving moment

Sending moment n refers to the moment a packet is output from the n-th network device, and receiving moment n refers to the moment the n-th network device receives the packet.

For example, sending moment 1 is the moment the packet is output from the first network device (the ingress edge device 231), and sending moment 2 is the moment it is output from the second network device (the network device 232).

Similarly, receiving moment 2 is the moment the network device 232 receives the packet, and receiving moment 3 is the moment the network device 233 receives the packet.

A person skilled in the art can understand that the time precision that network devices such as routers can recognize and express is limited. The precision of a moment is the same as the precision that such a device can recognize and express, and the actual occurrence time of a moment may be any time within the precision range. For example, if the precision a router can recognize and express is 1 nanosecond (ns), the precision of a moment is also 1 ns. If sending moment 1 is 1 microsecond (μs) 1 ns, the actual time of sending moment 1 may be any time between 1 μs 1 ns and 1 μs 2 ns, for example 1 μs 1.1 ns or 1 μs 1.99 ns.

The precision of a moment may also be a preset precision range. For example, if the preset precision is 10 μs, the precision of a moment is also 10 μs, and the actual occurrence time of a moment may be any time within the precision range. For example, if sending moment 1 is 13 μs, the actual time of sending moment 1 may be any time between 13 μs and 23 μs, for example 15 μs, 16 μs, or 18 μs.

A moment may also be represented by a predefined number. Time is divided into multiple moments at a predefined granularity, and each moment is represented by a number. For example, 24 hours may be divided into 1440 moments at a granularity of 1 minute, each represented by an Arabic numeral: 0 represents 00:00 to 00:01, 1 represents 00:01 to 00:02, and so on. If sending moment 1 is 16, the actual time of sending moment 1 may be any time between 00:16 and 00:17, for example 00:16:08, 00:16:31, or 00:16:59.
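The numbered-moment encoding described above can be sketched as follows (an illustrative sketch of my own, with names of my own choosing):

```python
def to_slot(seconds_of_day, granularity=60):
    """Map a wall-clock time (seconds since midnight) to its slot number.

    With 1-minute granularity, a day is divided into 1440 numbered slots:
    slot 0 covers 00:00-00:01, slot 1 covers 00:01-00:02, and so on.
    """
    return int(seconds_of_day // granularity)

# 00:16:08 and 00:16:59 both fall in slot 16; 00:17:00 starts slot 17.
assert to_slot(16 * 60 + 8) == 16
assert to_slot(16 * 60 + 59) == 16
assert to_slot(17 * 60) == 17
```

Any time within a slot maps to the same number, which is exactly the limited-precision behavior the text describes.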
Time information 1 indicates remaining processing time 1, which is the difference between theoretical time upper limit 1 and actual time 1.

As described above, theoretical time upper limit 1 starts at reference moment 1 and ends at reference moment 2. Actual time 1 starts at reference moment 1 and ends at moment 2.

In the worst case, the difference between theoretical time upper limit 1 and actual time 1 is 0.
The queue system of the network device 232 includes a plurality of queues that are scheduled in round-robin fashion. These queues are always enabled and are polled in turn. The queues are in one-to-one correspondence with a plurality of moments. Such a moment may be called a time stamp.

FIG. 5 shows the correspondence between M queues and M moments. In the embodiment shown in FIG. 5, M is a positive integer greater than 3.

The time interval between any two adjacent queues among queues Q1 to Q_M shown in FIG. 5 (for example, Q1 and Q2, or Q2 and Q3) is Δ. Assuming the start moment is T, queue Q1 corresponds to moment T+Δ, queue Q2 to T+2×Δ, queue Q3 to T+3×Δ, and so on, with queue Q_M corresponding to T_max. After the system time exceeds moment T_max, the M queues continue to cycle with interval Δ. For example, the interval between the moment corresponding to queue Q1 and that corresponding to queue Q_M is Δ; in other words, after queue Q1 has been scheduled in this round, the next moment corresponding to Q1 is T_max+Δ. Similarly, the interval between Q2's moment and Q1's moment is Δ, i.e., Q2's next moment is T_max+2×Δ, and so on.

The network device 232 determines which queue to add the packet to according to reference moment 2. Call the queue the packet is to join the target queue, and the moment corresponding to it the target moment. Then, in some embodiments, reference moment 2 and the target moment may satisfy the following relationship: reference moment 2 is not later than the target moment, and no queue's corresponding moment lies between reference moment 2 and the target moment. In other words, two cases may arise for reference moment 2. Case 1: reference moment 2 lies between the moments of two queues; the target queue is then the queue whose moment is later than reference moment 2. For example, in FIG. 5, if reference moment 2 lies between T+Δ and T+2×Δ, the target queue is Q2. Case 2: reference moment 2 coincides exactly with the moment of some queue; the target queue is then the queue corresponding to that moment. Again taking FIG. 5 as an example, if reference moment 2 is T+2×Δ, the target queue is Q2.

In other embodiments, the target moment may be determined as follows: compute the sum of reference moment 2 and a preset duration. For ease of description, the sum of reference moment 2 and the preset duration may be called reference moment 2'. The preset duration may be a preconfigured duration, or α time intervals Δ, where α is a preset positive integer. Reference moment 2' and the target moment satisfy: reference moment 2' is not later than the target moment, and no queue's corresponding moment lies between reference moment 2' and the target moment. In other words, two cases may arise for reference moment 2'. Case 1: reference moment 2' lies between the moments of two queues; the target queue is the queue whose moment is later than reference moment 2'. For example, in FIG. 5, if reference moment 2' lies between T+Δ and T+2×Δ, the target queue is Q2. Case 2: reference moment 2' coincides exactly with a queue's moment; the target queue is the queue corresponding to that moment. Again with FIG. 5, if reference moment 2' is T+2×Δ, the target queue is Q2. Optionally, the value of α may be the same for packets of the same flow and different for packets of different flows.
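The target-queue selection described above can be sketched as follows (an illustrative sketch of my own; names and the 0-based indexing are my own choices, not from the patent):

```python
import math

def target_queue(reference_moment, start, delta, num_queues, alpha=0):
    """Pick the target queue index (0-based, Q1 -> 0) in a circular queue system.

    Queue Qk corresponds to moment start + k*delta (k = 1..M), and the moments
    repeat with period M*delta. The packet joins the first queue whose moment
    is not earlier than reference_moment + alpha*delta, with no queue moment
    strictly between the two.
    """
    t = reference_moment + alpha * delta
    # smallest k such that start + k*delta >= t
    k = math.ceil((t - start) / delta)
    return (k - 1) % num_queues  # map k = 1..M onto indices 0..M-1

# With T=0, delta=10, M=4: a reference moment of 15 lies between Q1 (10) and
# Q2 (20), so the packet joins Q2 (index 1); exactly 20 also maps to Q2.
assert target_queue(15, start=0, delta=10, num_queues=4) == 1
assert target_queue(20, start=0, delta=10, num_queues=4) == 1
```

The modulo handles the wrap-around after T_max, and a nonzero alpha implements the reference-moment-2' variant with a preset offset of α intervals.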
The queue system may be implemented on the downlink board of the network device 232 or on its uplink board. The unit in the network device 232 that implements the queue system may be called a queue system unit.

FIG. 6 is a timing diagram of the ingress edge device 231 and the network device 232 processing the packet, for the case where the queue system is implemented on the downlink board.

As shown in FIG. 6, the packet finishes shaping at reference moment 1 and enters the output queue. The packet is output from the ingress edge device 231 at sending moment 1. In FIG. 6, the packet is input to the network device 232 at receiving moment 2, and leaves the switching fabric of the network device 232 and enters the queue system at moment 2. The network device 232 selects the target queue from the queues of the queue system according to reference moment 2. The packet is output from the network device 232 at sending moment 2.

It can be understood that the dequeue unit Q and the queue system unit D shown in FIG. 6 and subsequent figures are merely logically divided units; in a specific device form, they may be the same physical unit.

In FIG. 6, reference moment 1 is denoted E_1, sending moment 1 is denoted t_1^out, receiving moment 2 is denoted t_2^in, moment 2 is denoted t'_2^in, reference moment 2 is denoted E_2, and sending moment 2 is denoted t_2^out.

Theoretical time upper limit 1 is the theoretical upper limit of the time the packet spends in the ingress edge device 231 and the network device 232 from moment E_1 to moment E_2. Theoretical time upper limit 1 does not include the transmission delay between the ingress edge device 231 and the network device 232.

Similarly, actual time 1 is the actual time the packet spends in the ingress edge device 231 and the network device 232 from moment E_1 to moment t'_2^in. Actual time 1 does not include the transmission delay between the ingress edge device 231 and the network device 232.
FIG. 7 is a timing diagram of the ingress edge device 231 and the network device 232 processing the packet, for the case where the queue system is implemented on the uplink board.

As shown in FIG. 7, the packet finishes shaping at reference moment 1 and enters the output queue. The packet is output from the ingress edge device 231 at sending moment 1, input to the network device 232 at receiving moment 2, and enters the queue system at moment 2. The network device 232 selects the target queue from the queues of the queue system according to reference moment 2. The packet is output from the network device 232 at sending moment 2.

In FIG. 7, reference moment 1 is denoted E_1, sending moment 1 is denoted t_1^out, receiving moment 2 is denoted t_2^in, moment 2 is denoted t'_2^in, reference moment 2 is denoted E_2, and sending moment 2 is denoted t_2^out.

Similar to the downlink-board implementation shown in FIG. 6, theoretical time upper limit 1 is the theoretical upper limit of the time the packet spends in the ingress edge device 231 and the network device 232 from moment E_1 to moment E_2, excluding the transmission delay between the ingress edge device 231 and the network device 232.

Similarly, actual time 1 is the actual time the packet spends in the ingress edge device 231 and the network device 232 from moment E_1 to moment t'_2^in, excluding the transmission delay between the ingress edge device 231 and the network device 232.
For ease of description, D_1^max below denotes the theoretical time upper limit associated with the ingress edge device 231, and D_1^r denotes the actual time associated with the ingress edge device 231. The actual time associated with the ingress edge device 231 can be expressed by the following formula:

D_1^r = (t_1^out - E_1) + (t'_2^in - t_2^in), (formula 4.2).

The remaining processing time associated with the ingress edge device may be denoted D'_1^res. D'_1^res, D_1^max, and D_1^r satisfy the following relationship:

D'_1^res = D_1^max - D_1^r, (formula 4.3).

Substituting formula 4.2 into formula 4.3 gives the following formula:

D'_1^res = D_1^max - (t_1^out - E_1) - (t'_2^in - t_2^in), (formula 4.4).

Since the ingress edge device 231 and the network device 232 are the first two network devices after the packet enters the network, the remaining processing time associated with the ingress edge device 231 equals remaining processing time 1. Remaining processing time 1 may be denoted D_1^res, so D'_1^res = D_1^res.
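The per-hop bookkeeping of formulas 4.2 to 4.5 can be sketched as follows (an illustrative sketch of my own; the function and argument names are mine, not the patent's):

```python
def hop_remaining(d_max, e, t_out, t_in, t_enter):
    """Formula 4.4: D'_1^res = D_1^max - (t_1^out - E_1) - (t'_2^in - t_2^in).

    d_max   : theoretical time upper limit associated with the sending device
    e       : sending device's reference moment E_1
    t_out   : moment t_1^out at which the packet left the sending device
    t_in    : moment t_2^in at which the packet arrived at the receiver
    t_enter : moment t'_2^in at which it entered the receiver's queue system
    """
    return d_max - (t_out - e) - (t_enter - t_in)

def accumulate(d_res_prev, hop_res):
    """Formula 4.5: D_1^res = D_0^res + D'_1^res."""
    return d_res_prev + hop_res

# Worst case: the hop used its whole budget, so nothing remains.
assert hop_remaining(d_max=10, e=0, t_out=8, t_in=20, t_enter=22) == 0
# Packet sent 3 units early: 3 units remain and are carried forward.
assert accumulate(0, hop_remaining(10, 0, 5, 20, 22)) == 3
```

Note that the transmission delay t_in - t_out between devices drops out of the computation, consistent with the definition that these times exclude inter-device transmission delay.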
As described above, time information 1 indicates remaining processing time 1. The network device 232 can learn t'_2^in and t_2^in by itself. Therefore, time information 1 can indicate the difference between theoretical time upper limit 1 and actual time 1 by indicating the difference between t_1^out and E_1.

In some embodiments, time information 1 may include first time indication information. The first time indication information indicates the time from the sending device's reference moment to the moment the packet is output from the sending device. Subsequent network devices also send time information that includes first time indication information.

To distinguish it from the first time indication information sent by subsequent network devices, the first time indication information in time information 1 is referred to below as first time indication information 1.

For first time indication information 1, the sending device is the ingress edge device 231. Therefore, first time indication information 1 indicates the time from E_1 to t_1^out.

In some embodiments, first time indication information 1 includes the value of moment t_1^out and the value of moment E_1. After receiving first time indication information 1, the network device 232 can compute the difference between t_1^out and E_1 itself from the two values.

In other embodiments, first time indication information 1 may directly carry the difference between t_1^out and E_1. In other words, the ingress edge device 231 may compute the difference between t_1^out and E_1 and send it to the network device 232 in first time indication information 1.
In some embodiments, time information 1 may further include second time indication information. The second time indication information indicates the accumulated remaining processing time. If the sending device is the n-th network device in the network, the accumulated remaining processing time is remaining processing time n-1, that is, the difference between theoretical time upper limit n-1 and actual time n-1. Theoretical time upper limit n-1 is the theoretical time upper limit of the network devices the packet passes through from reference moment 1 to reference moment n, that is, from reference moment 1 to the sending device's reference moment. Actual time n-1 is the actual time from reference moment 1 to moment n, that is, from reference moment 1 to the moment the packet enters the sending device's queue system.

Similarly, to distinguish it from the second time indication information sent by subsequent network devices, the second time indication information in time information 1 is referred to below as second time indication information 1.

For second time indication information 1, the sending device is the ingress edge device 231, that is, the first network device. Therefore n = 1, and the accumulated remaining processing time is remaining processing time 0. The theoretical processing time upper limit 0 used to compute remaining processing time 0 starts and ends at the same moment, reference moment 1; actual time 0 also starts and ends at reference moment 1 (in the ingress edge device 231, the enqueue reference moment is the same as the moment of entering the queue system). Therefore, the value indicated by second time indication information 1 is 0. Remaining processing time 0 may be denoted D_0^res.

In some embodiments, if the value indicated by the second time indication information is 0, the second time indication information may directly carry the indicated value, that is, 0.

In other embodiments, if the value indicated by the second time indication information is 0, the time information may omit the second time indication information, or the second time indication information may take a preset fixed value. If the time information received by a network device does not include the second time indication information, or the value of the second time indication information is a preset fixed value, the network device can determine that the remaining processing time up to the sending device's reference moment is 0.
In some embodiments, time information 1 may further include third time indication information. The third time indication information indicates the theoretical time upper limit associated with the sending device.

Similarly, to distinguish it from the third time indication information sent by subsequent network devices, the third time indication information in time information 1 is referred to below as third time indication information 1.

For third time indication information 1, the sending device is the ingress edge device 231 and the receiving device is the network device 232. Therefore, third time indication information 1 indicates the theoretical processing time upper limit of the network devices the packet passes through from reference moment 1 to reference moment 2, that is, D_1^max.

In other embodiments, D_1^max may be preconfigured in the network device 232 or be a preset default value. In that case, time information 1 need not include third time indication information 1 indicating D_1^max.
403: The network device 232 determines the target queue from the plurality of queues according to moment t_2^in and time information 1, and adds the data packet to the target queue.

As described above, the network device 232 selects the target queue from the plurality of queues according to reference moment 2, that is, E_2. For the specific selection method, see the description above; for brevity, it is not repeated here. The following focuses on how to compute E_2.

E_2 is the sum of t'_2^in and remaining processing time 1. Remaining processing time 1 is the sum of remaining processing time 0 indicated by time information 1 and the remaining processing time associated with the ingress edge device 231, that is, it satisfies the following formula:

D_1^res = D_0^res + D'_1^res, (formula 4.5).

Since D_0^res equals 0, D_1^res = D'_1^res.

In this case, E_2, t'_2^in, and D_1^res satisfy the following formula:

E_2 = t'_2^in + D'_1^res, (formula 4.6).

Combining formula 4.4 (used to compute D'_1^res) with formula 4.6 gives the following formula:

E_2 = D_1^max - (t_1^out - E_1) + t_2^in, (formula 4.7).

It can thus be seen that the network device 232 can determine E_2 from moment t_2^in and the received time information 1.
404: The network device 232 sends the data packet to the next-hop network device (that is, the network device 233). The network device 232 also sends time information to the network device 233.

After adding the packet to the target queue, the network device 232 may schedule the plurality of queues according to the scheduling rules.

Optionally, in some embodiments, the scheduling rule adopted by the network device 232 is round-robin scheduling. In other words, the queues are scheduled in turn. When a queue is polled, no other queue may be scheduled until the polled queue is drained. To achieve this property, a very large quantum can be set: setting the quantum large enough ensures that when a queue is polled, no other queue is scheduled until the polled queue has been emptied.

Specifically, the network device 232 polls the queues in a ring. If the polled queue is not empty, that queue is scheduled until it is drained; if the queue is empty, it is simply skipped. In a downlink-board implementation, the dequeued packets enter the output queue and are handled by the dequeue unit; subsequent processing is the same as the existing way of processing packets and, for brevity, is not repeated here. In an uplink-board implementation, the dequeued packets enter the switching fabric; entering the switching fabric and the subsequent processing flow are likewise the same as the existing way of processing packets and, for brevity, are not repeated here.
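The serve-until-empty polling described above can be sketched as follows (an illustrative sketch of my own; in a real implementation the large quantum of a deficit-round-robin scheduler plays the role of the unconditional inner drain loop):

```python
from collections import deque

def poll_queues(queues):
    """One polling cycle over the ring of circular queues.

    When a non-empty queue is reached, it is drained completely before
    moving on (equivalent to a very large quantum); empty queues are
    simply skipped. Returns the packets in dequeue order.
    """
    out = []
    for q in queues:
        while q:  # drain before moving to the next queue
            out.append(q.popleft())
    return out

q1, q2, q3 = deque(["a1", "a2"]), deque([]), deque(["c1"])
assert poll_queues([q1, q2, q3]) == ["a1", "a2", "c1"]
```

The empty queue q2 is skipped without consuming any scheduling opportunity, matching the behavior described in the text.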
The time information the network device 232 sends to the network device 233 may be called time information 2. Time information 2 differs from time information 1.

Time information 2 indicates remaining processing time 2. Remaining processing time 2 is the difference between theoretical time upper limit 2 and actual time 2.

Theoretical time upper limit 2 starts at reference moment 1 and ends at reference moment 3. Reference moment 3 may be denoted E_3.

The theoretical time upper limit associated with the network device 232 starts at reference moment 2 and ends at reference moment 3.

If D_2^max denotes the theoretical time upper limit associated with the network device 232 and D_1^max denotes the theoretical time upper limit associated with the ingress edge device 231, then theoretical time upper limit 2 is the sum of D_1^max and D_2^max.

Actual time 2 starts at reference moment 1 and ends at moment 3, which may be denoted t'_3^in.

Similar to how time information 1 indicates remaining processing time 1, the network device 233 can learn t'_3^in and t_3^in by itself, where t_3^in denotes receiving moment 3. Therefore, time information 2 can indicate remaining processing time 2 by indicating t_2^out and E_2 as well as remaining processing time 1, where t_2^out denotes sending moment 2.
In some embodiments, time information 2 may include first time indication information. The first time indication information included in time information 2 may be called first time indication information 2.

For first time indication information 2, the sending device is the network device 232; therefore, first time indication information 2 indicates the time from E_2 to t_2^out. Like first time indication information 1, first time indication information 2 may carry the value of moment t_2^out and the value of moment E_2, or the difference between t_2^out and E_2.

In some embodiments, time information 2 may include second time indication information. The second time indication information included in time information 2 may be called second time indication information 2.

For second time indication information 2, the sending device is the network device 232. Therefore, the theoretical processing time upper limit corresponding to the remaining processing time indicated by second time indication information 2 starts at reference moment 1 and ends at reference moment 2; the actual time corresponding to the remaining processing time indicated by second time indication information 2 starts at reference moment 1 and ends at moment 2. It can be seen that the accumulated remaining processing time indicated by second time indication information 2 is remaining processing time 1, that is, D_1^res.

In some embodiments, time information 2 may include third time indication information. The third time indication information included in time information 2 may be called third time indication information 2.

For third time indication information 2, the sending device is the network device 232 and the receiving device is the network device 233. Therefore, third time indication information 2 indicates the theoretical processing time upper limit of the network devices the packet passes through from reference moment 2 to reference moment 3, that is, the theoretical time upper limit associated with the network device 232, namely D_2^max.

Similarly, if D_2^max is a preconfigured value or a preset value, time information 2 need not carry third time indication information 2.
405: The network device 233 determines the target queue from the plurality of queues according to moment t_3^in and time information 2, and adds the data packet to the target queue.

The network device 233 determines the target queue in a manner similar to how the network device 232 determines the target queue.

Reference moment 3 is the sum of moment t'_3^in and remaining processing time 2, that is:

E_3 = t'_3^in + D_2^res, (formula 4.8).

D_2^res can be determined according to the following formula:

D_2^res = D_1^res + D'_2^res, (formula 4.9).

D'_2^res can be determined according to the following formula:

D'_2^res = D_2^max - (t_2^out - E_2) - (t'_3^in - t_3^in), (formula 4.10).

Combining formula 4.9 and formula 4.10 gives:

D_2^res = D_1^res + [D_2^max - (t_2^out - E_2) - (t'_3^in - t_3^in)], (formula 4.11).

Combining formula 4.8 and formula 4.11 gives:

E_3 = D_1^res + [D_2^max - (t_2^out - E_2)] + t_3^in, (formula 4.12).

It can thus be seen that the network device 233 determines E_3 from moment t_3^in and time information 2. After determining reference moment 3, the network device 233 can determine the target queue according to reference moment 3 and add the packet to the determined target queue. The target queue is scheduled in the same way as in the above embodiments and, for brevity, is not repeated here.
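The derivation of formulas 4.8 to 4.12 can be checked mechanically; the following sketch (my own, with hypothetical names and example values) computes E_3 both ways and confirms that the t'_3^in terms cancel:

```python
def e3_two_step(d1_res, d2_max, t2_out, e2, t3_in, t3_enter):
    """Formulas 4.8-4.11: first accumulate D_2^res, then E_3 = t'_3^in + D_2^res."""
    d2_res = d1_res + (d2_max - (t2_out - e2) - (t3_enter - t3_in))
    return t3_enter + d2_res

def e3_direct(d1_res, d2_max, t2_out, e2, t3_in):
    """Formula 4.12: E_3 = D_1^res + [D_2^max - (t_2^out - E_2)] + t_3^in."""
    return d1_res + (d2_max - (t2_out - e2)) + t3_in

# The two forms agree for any t'_3^in, since the t'_3^in terms cancel:
args = dict(d1_res=2, d2_max=10, t2_out=107, e2=100, t3_in=120)
assert e3_two_step(t3_enter=123, **args) == e3_direct(**args)
```

The direct form shows that the receiving device does not actually need t'_3^in to compute E_3, only its own receiving moment t_3^in and the fields carried in time information 2.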
FIG. 8 is a timing diagram of the network device 232 and the network device 233 processing the packet, for the case where the queue system is implemented on the downlink board.

FIG. 9 is a timing diagram of the network device 232 and the network device 233 processing the packet, for the case where the queue system is implemented on the uplink board.

406: The network device 233 may send the received data packet to the next-hop network device (that is, the network device 234). The network device 233 may also send time information to the network device 234.

407: The network device 234 determines the target queue from the plurality of queues according to receiving moment 4 and the received time information, and adds the data packet to the target queue.

Step 406 is implemented similarly to step 404, and step 407 is implemented similarly to step 405; for brevity, details are not repeated here.

408: The network device 234 sends the data packet to the egress edge device 235, and the egress edge device 235 sends the data packet to the user equipment 221 through the edge network 220. The network device 234 may also send time information to the egress edge device 235. The egress edge device 235 may determine the target queue from the plurality of queues according to receiving moment 5 and the received time information, and add the packet to the target queue.

Step 408 is implemented similarly to steps 404 and 405; for brevity, details are not repeated here.
FIG. 10 is a schematic diagram of the D^max corresponding to each network device in the core network 230 when the circular queues are implemented on the downlink board.

FIG. 11 is a schematic diagram of the D^max corresponding to each network device in the core network 230 when the circular queues are implemented on the uplink board.

D_0^max in FIGS. 10 and 11 denotes the theoretical time upper limit of the shaping delay and the processing delay on the first hop. If E_1 is set at the moment the ingress edge device 231 receives the packet, then D_0^max is included in D_1^max. D_5^max denotes the theoretical time upper limit from E_5 to t_5^out (that is, the moment the packet is output from the egress edge device 235).

From entering the network at the ingress edge device to leaving it at the egress edge device, a data packet may encounter the following two cases:

Case 1: strict waiting at every hop. If a packet encounters the worst case at every hop, it experiences the corresponding D^max at each hop. In this case, the total end-to-end delay upper bound (that is, from the ingress edge device 231 receiving the packet until the packet is sent out of the egress edge device 235) is the sum of all the D^max values in FIG. 10 or FIG. 11.
Case 2: if the packet does not experience the worst case at some hop, that is, its actual delay there is less than that hop's D^max, the packet's remaining time is recorded and used as the enqueue criterion at subsequent hops. If congestion occurs at some device, an early-sent packet has a larger remaining time D^res than a strictly-waiting packet, so the queue the early-sent packet joins is scheduled later; only after the early-sent packet has waited long enough is it scheduled out, which is equivalent to it being sent only after strict hop-by-hop waiting, so it is also guaranteed to be sent within D^max.

Take the network device 233 as an example. Suppose the network device 233 receives packet 1 and packet 2, where packet 1 is a strictly-waiting packet, so its D_2^res is 0, and packet 2 is an early-sent packet, so its D_2^res is greater than 0. The network device 233 can select the target queue for packet 1 directly according to the time it received packet 1, but must select the target queue for packet 2 according to the sum of the time it received packet 2 and packet 2's D_2^res. Referring to FIG. 5, the following may occur: packet 1's target queue is queue Q_1 while packet 2's target queue is queue Q_2. In that case, packet 2's priority is lower than packet 1's, so packet 2 is scheduled later than packet 1.
In summary, the technical solution of this application can guarantee that the upper bound of the end-to-end delay of any flow does not exceed the sum of the D^max values of all outbound interfaces on the flow's path.
FIG. 12 is a schematic flowchart of a method for scheduling data packets according to an embodiment of this application.

1201: A first network device receives, at a first moment, a data packet from a second network device in a network.

1202: The first network device determines a first reference moment according to time information carried by the data packet and the first moment, where the first reference moment is the reference moment at which the data packet enters a queue in a first queue system of the first network device.

1203: The first network device determines a target queue from a plurality of queues included in the first queue system according to the first reference moment and adds the data packet to the target queue, where the time information indicates a first remaining processing time, the first remaining processing time is the difference between a first theoretical time upper limit and a first actual time for N network devices to process the data packet, the N network devices are the network devices that the data packet passes through after entering the network and before reaching the first network device, N is a positive integer greater than or equal to 1, the first theoretical time upper limit is the theoretical upper limit of the time the data packet spends inside network devices from an initial reference moment to the first reference moment, the first actual time is the actual time the data packet spends inside network devices from the initial reference moment to a second moment, the initial reference moment is the reference moment at which the data packet enters the queue system of the first of the N network devices, and the second moment is the moment at which the data packet enters the first queue system.

1204: The first network device processes the target queue according to the scheduling rules of the plurality of queues.

For example, the first network device may be the network device 232 in the above embodiments; in that case, the second network device is the ingress edge device 231. Alternatively, the first network device may be the network device 233 in the above embodiments, in which case the second network device is the network device 232; or the first network device may be the network device 234 in the above embodiments, in which case the second network device is the network device 233.

Assuming the first network device is the network device 232, the first moment is t_2^in, the second moment is t'_2^in, the first reference moment is E_2, the initial reference moment is E_1, and the first remaining processing time is D_1^res.
In some embodiments, the time information includes first time indication information, where the first time indication information indicates the time from a second reference moment to a second output moment, the second reference moment is the reference moment of the data packet at the second network device, and the second output moment is the moment at which the data packet is output from the second network device.

Still assuming the first network device is the network device 232, the second reference moment is E_1 and the second output moment is t_1^out.

In some embodiments, when N is a positive integer greater than or equal to 2, the time information further includes second time indication information, where the second time indication information indicates a second remaining processing time, the second remaining processing time is the difference between a second theoretical time upper limit and a second actual time, the second theoretical time upper limit is the theoretical upper limit of the time the data packet spends inside network devices from the initial reference moment to the second reference moment, and the second actual time is the actual time the data packet spends inside network devices from the initial reference moment to the moment at which the data packet enters the queue system of the second network device.

Still assuming the first network device is the network device 232, the second remaining processing time is D_0^res.

In some embodiments, the time information further includes third time indication information, where the third time indication information indicates a third theoretical time upper limit associated with the second network device, and the third theoretical time upper limit is the theoretical upper limit of the time the data packet spends inside network devices from the second reference moment to the first reference moment.

Still assuming the first network device is the network device 232, the third theoretical time upper limit is D_1^max.
In some embodiments, the plurality of queues are in one-to-one correspondence with a plurality of preset moments, and the determining, by the first network device, a target queue from the plurality of queues included in the first queue system according to the first reference moment includes: the first network device determines, according to the first reference moment, the queue corresponding to a target moment as the target queue, where the first reference moment is not later than the target moment, none of the plurality of preset moments lies between the first reference moment and the target moment, and the target moment is one of the plurality of preset moments.

In some embodiments, the determining, by the first network device, a first reference moment according to the time information and the first moment includes: the first network device determines the first reference moment according to the following formula:

E_{h+1} = D_h^res + [D_h^max - (t_h^out - E_h)] + t_{h+1}^in, (formula 12.1)

where E_{h+1} denotes the first reference moment, D_h^res denotes the second remaining processing time, D_h^max denotes the third theoretical processing time upper limit associated with the second network device, t_h^out denotes the second output moment, E_h denotes the second reference moment, and t_{h+1}^in denotes the first moment.

In some embodiments, the determining, by the first network device, a first reference moment according to the time information and the first moment includes: the first network device determines the first reference moment according to the second moment, the time information, and the first moment.

In some embodiments, the determining, by the first network device, the first reference moment according to the second moment, the time information, and the first moment includes: the first network device determines a third remaining processing time according to the following formula:

D_{h+1}^res = D_h^res + [D_h^max - (t_h^out - E_h) - (t'_{h+1}^in - t_{h+1}^in)], (formula 12.2)

where D_{h+1}^res denotes the third remaining processing time, D_h^res denotes the second remaining processing time, D_h^max denotes the third theoretical processing time upper limit associated with the second network device, t_h^out denotes the second output moment, E_h denotes the second reference moment, t_{h+1}^in denotes the first moment, and t'_{h+1}^in denotes the second moment; the first network device then determines the sum of the third remaining processing time and the second moment as the first reference moment.
FIG. 13 is a schematic structural block diagram of a network device according to an embodiment of this application. The network device 1300 shown in FIG. 13 includes a receiving unit 1301 and a processing unit 1302.

The receiving unit 1301 is configured to receive, at a first moment, a data packet from a second network device in a network.

The processing unit 1302 is configured to determine a first reference moment according to time information carried by the data packet and the first moment, where the first reference moment is the reference moment at which the data packet enters a queue in a first queue system of the network device.

The processing unit 1302 is further configured to determine a target queue from a plurality of queues included in the first queue system according to the first reference moment and add the data packet to the target queue, where the time information indicates a first remaining processing time, the first remaining processing time is the difference between a first theoretical time upper limit and a first actual time for N network devices to process the data packet, the N network devices are the network devices that the data packet passes through after entering the network and before reaching this network device, N is a positive integer greater than or equal to 1, the first theoretical time upper limit is the theoretical upper limit of the time the data packet spends inside network devices from an initial reference moment to the first reference moment, the first actual time is the actual time the data packet spends inside network devices from the initial reference moment to a second moment, the initial reference moment is the reference moment at which the data packet enters the queue system of the first of the N network devices, and the second moment is the moment at which the data packet enters the first queue system.

The processing unit 1302 is further configured to process the target queue according to the scheduling rules of the plurality of queues.

The network device 1300 may further include a sending unit 1303. The sending unit 1303 may send the data packet to the next device according to the scheduling result of the processing unit 1302.

The receiving unit 1301 and the sending unit 1303 may be implemented by a receiver. The processing unit 1302 may be implemented by a processor. For the specific functions and beneficial effects of the receiving unit 1301, the processing unit 1302, and the sending unit 1303, reference may be made to the foregoing embodiments; for brevity, details are not repeated here.
An embodiment of this application further provides a processing apparatus, including a processor and an interface. The processor may be configured to perform the methods in the above method embodiments.

It should be understood that the above processing apparatus may be a chip. For example, the processing apparatus may be a field programmable gate array (FPGA), an application-specific integrated circuit (ASIC), a system on chip (SoC), a central processing unit (CPU), a network processor (NP), a digital signal processor (DSP), a micro controller unit (MCU), a programmable logic device (PLD), another programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or another integrated chip.

In an implementation process, each step of the above method may be completed by an integrated logic circuit of hardware in the processor or by instructions or program code in the form of software. The steps of the methods disclosed with reference to embodiments of this application may be directly performed by a hardware processor, or performed by a combination of hardware and software modules in the processor. The software module may be located in a storage medium mature in the art, such as a random access memory, a flash memory, a read-only memory, a programmable read-only memory, an electrically erasable programmable memory, or a register. The storage medium is located in the memory, and the processor reads the information in the memory and completes the steps of the above method in combination with its hardware. To avoid repetition, details are not described here.

It should be noted that the processor in embodiments of this application may be an integrated circuit chip with signal processing capability. In an implementation process, the steps of the above method embodiments may be completed by an integrated logic circuit of hardware in the processor or by instructions or program code in the form of software. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor or the like. The steps of the methods disclosed with reference to embodiments of this application may be directly performed by a hardware decoding processor, or performed by a combination of hardware and software modules in the decoding processor. The software module may be located in a storage medium mature in the art, such as a random access memory, a flash memory, a read-only memory, a programmable read-only memory, an electrically erasable programmable memory, or a register. The storage medium is located in the memory, and the processor reads the information in the memory and completes the steps of the above method in combination with its hardware.

It can be understood that the memory in embodiments of this application may be a volatile memory or a non-volatile memory, or may include both volatile and non-volatile memory. The non-volatile memory may be a read-only memory (ROM), a programmable ROM (PROM), an erasable PROM (EPROM), an electrically EPROM (EEPROM), or a flash memory. The volatile memory may be a random access memory (RAM), which is used as an external cache. By way of example and not limitation, many forms of RAM are available, such as a static RAM (SRAM), a dynamic RAM (DRAM), a synchronous DRAM (SDRAM), a double data rate SDRAM (DDR SDRAM), an enhanced SDRAM (ESDRAM), a synchlink DRAM (SLDRAM), and a direct rambus RAM (DR RAM). It should be noted that the memories of the systems and methods described herein are intended to include, but are not limited to, these and any other suitable types of memory.
According to the methods provided in the embodiments of this application, this application further provides a computer program product. The computer program product includes computer program code; when the computer program code is run on a computer, the computer is caused to perform the method in any one of the foregoing embodiments.
According to the methods provided in the embodiments of this application, this application further provides a computer-readable medium. The computer-readable medium stores program code; when the program code is run on a computer, the computer is caused to perform the method in any one of the foregoing embodiments.
According to the methods provided in the embodiments of this application, this application further provides a network system, which includes the plurality of network devices described above.
A person of ordinary skill in the art may be aware that the units and algorithm steps of the examples described with reference to the embodiments disclosed herein can be implemented by electronic hardware, or by a combination of computer software and electronic hardware. Whether these functions are performed by hardware or software depends on the specific application and design constraints of the technical solution. A person skilled in the art may use different methods to implement the described functions for each specific application, but such implementations should not be considered beyond the scope of this application.
A person skilled in the art can clearly understand that, for convenience and brevity of description, for the specific working processes of the systems, apparatuses, and units described above, reference may be made to the corresponding processes in the foregoing method embodiments; details are not described here again.
In the several embodiments provided in this application, it should be understood that the disclosed systems, apparatuses, and methods may be implemented in other ways. For example, the apparatus embodiments described above are merely illustrative. For example, the division into units is merely a logical function division; in actual implementation there may be other division manners, for example, multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. In addition, the mutual couplings, direct couplings, or communication connections shown or discussed may be indirect couplings or communication connections through some interfaces, apparatuses, or units, and may be electrical, mechanical, or in other forms.
The units described as separate components may or may not be physically separate, and components shown as units may or may not be physical units; that is, they may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the objectives of the solutions of the embodiments.
In addition, the functional units in the embodiments of this application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit.
If the functions are implemented in the form of software functional units and sold or used as independent products, they may be stored in a computer-readable storage medium. Based on such an understanding, the technical solution of this application essentially, or the part contributing to the prior art, or a part of the technical solution, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or some of the steps of the methods described in the embodiments of this application. The aforementioned storage medium includes any medium that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (Read-Only Memory, ROM), a random access memory (Random Access Memory, RAM), a magnetic disk, or an optical disc.
The foregoing descriptions are merely specific implementations of this application, but the protection scope of this application is not limited thereto. Any variation or replacement readily conceivable by a person skilled in the art within the technical scope disclosed in this application shall fall within the protection scope of this application. Therefore, the protection scope of this application shall be subject to the protection scope of the claims.

Claims (20)

  1. A method for scheduling data packets, comprising:
    receiving, by a first network device at a first time, a data packet from a second network device in a network;
    determining, by the first network device, a first reference time based on time information carried in the data packet and the first time, wherein the first reference time is a reference time at which the data packet enters a queue in a first queue system of the first network device;
    determining, by the first network device, a target queue from a plurality of queues comprised in the first queue system based on the first reference time, and adding the data packet to the target queue, wherein the time information indicates a first remaining processing time, the first remaining processing time is a difference between a first theoretical upper time limit and a first actual time for N network devices to process the data packet, the N network devices comprise the network devices that the data packet passes through after entering the network and before reaching the first network device, N is a positive integer greater than or equal to 1, the first theoretical upper time limit is a theoretical upper limit on the time the data packet spends inside network devices from an initial reference time to the first reference time, the first actual time is an actual time the data packet spends inside network devices from the initial reference time to a second time, the initial reference time is a reference time at which the data packet enters a queue system of the first of the N network devices, and the second time is a time at which the data packet enters the first queue system; and
    processing, by the first network device, the target queue according to a scheduling rule of the plurality of queues.
  2. The method according to claim 1, wherein the time information comprises first time indication information, the first time indication information indicates a time from a second reference time to a second output time, the second reference time is a reference time of the data packet at the second network device, and the second output time is a time at which the data packet is output from the second network device.
  3. The method according to claim 2, wherein, when N is a positive integer greater than or equal to 2, the time information further comprises second time indication information, the second time indication information indicates a second remaining processing time, the second remaining processing time is a difference between a second theoretical upper time limit and a second actual time, the second theoretical upper time limit is a theoretical upper limit on the time the data packet spends inside network devices from the initial reference time to the second reference time, and the second actual time is an actual time the data packet spends inside network devices from the initial reference time to the time at which the data packet enters a queue system of the second network device.
  4. The method according to claim 2 or 3, wherein the time information further comprises third time indication information, the third time indication information indicates a third theoretical upper time limit associated with the second network device, and the third theoretical upper time limit is a theoretical upper limit on the time the data packet spends inside network devices from the second reference time to the first reference time.
  5. The method according to any one of claims 1 to 4, wherein the plurality of queues are in one-to-one correspondence with a plurality of preset times, and the determining, by the first network device, a target queue from the plurality of queues comprised in the first queue system based on the first reference time comprises:
    determining, by the first network device based on the first reference time, a queue corresponding to a target time as the target queue, wherein the first reference time is not greater than the target time, none of the plurality of preset times lies between the first reference time and the target time, and the target time is one of the plurality of preset times.
  6. The method according to any one of claims 1 to 5, wherein the determining, by the first network device, a first reference time based on the time information and the first time comprises: determining, by the first network device, the first reference time according to the following formula:
    E_{h+1} = D_h^{res} + [D_h^{max} - (t_h^{out} - E_h)] + t_{h+1}^{in},
    wherein E_{h+1} is the first reference time, D_h^{res} is the second remaining processing time, D_h^{max} is the third theoretical upper time limit associated with the second network device, t_h^{out} is the second output time, E_h is the second reference time, and t_{h+1}^{in} is the first time.
  7. The method according to claim 5, wherein the determining, by the first network device, a first reference time based on the time information and the first time comprises:
    determining, by the first network device, the first reference time based on the second time, the time information, and the first time.
  8. The method according to claim 7, wherein the determining, by the first network device, the first reference time based on the second time, the time information, and the first time comprises:
    determining, by the first network device, a third remaining processing time according to the following formula:
    D_{h+1}^{res} = D_h^{res} + [D_h^{max} - (t_h^{out} - E_h) - (t'_{h+1}^{in} - t_{h+1}^{in})],
    wherein D_{h+1}^{res} is the third remaining processing time, D_h^{res} is the second remaining processing time, D_h^{max} is the third theoretical upper time limit associated with the second network device, t_h^{out} is the second output time, E_h is the second reference time, t_{h+1}^{in} is the first time, and t'_{h+1}^{in} is the second time; and
    determining, by the first network device, the sum of the third remaining processing time and the second time as the first reference time.
  9. A network device, comprising:
    a receiving unit, configured to receive, at a first time, a data packet from a second network device in a network;
    a processing unit, configured to determine a first reference time based on time information carried in the data packet and the first time, wherein the first reference time is a reference time at which the data packet enters a queue in a first queue system of the network device;
    the processing unit is further configured to determine a target queue from a plurality of queues comprised in the first queue system based on the first reference time and add the data packet to the target queue, wherein the time information indicates a first remaining processing time, the first remaining processing time is a difference between a first theoretical upper time limit and a first actual time for N network devices to process the data packet, the N network devices comprise the network devices that the data packet passes through after entering the network and before reaching the network device, N is a positive integer greater than or equal to 1, the first theoretical upper time limit is a theoretical upper limit on the time the data packet spends inside network devices from an initial reference time to the first reference time, the first actual time is an actual time the data packet spends inside network devices from the initial reference time to a second time, the initial reference time is a reference time at which the data packet enters a queue system of the first of the N network devices, and the second time is a time at which the data packet enters the first queue system; and
    the processing unit is further configured to process the target queue according to a scheduling rule of the plurality of queues.
  10. The network device according to claim 9, wherein the time information comprises first time indication information, the first time indication information indicates a time from a second reference time to a second output time, the second reference time is a reference time of the data packet at the second network device, and the second output time is a time at which the data packet is output from the second network device.
  11. The network device according to claim 10, wherein, when N is a positive integer greater than or equal to 2, the time information further comprises second time indication information, the second time indication information indicates a second remaining processing time, the second remaining processing time is a difference between a second theoretical upper time limit and a second actual time, the second theoretical upper time limit is a theoretical upper limit on the time the data packet spends inside network devices from the initial reference time to the second reference time, and the second actual time is an actual time the data packet spends inside network devices from the initial reference time to the time at which the data packet enters a queue system of the second network device.
  12. The network device according to claim 10 or 11, wherein the time information further comprises third time indication information, the third time indication information indicates a third theoretical upper time limit associated with the second network device, and the third theoretical upper time limit is a theoretical upper limit on the time the data packet spends inside network devices from the second reference time to the first reference time.
  13. The network device according to any one of claims 9 to 12, wherein the plurality of queues are in one-to-one correspondence with a plurality of preset times, and the processing unit is specifically configured to determine, based on the first reference time, a queue corresponding to a target time as the target queue, wherein the first reference time is not greater than the target time, none of the plurality of preset times lies between the first reference time and the target time, and the target time is one of the plurality of preset times.
  14. The network device according to any one of claims 9 to 13, wherein the processing unit is specifically configured to determine the first reference time according to the following formula:
    E_{h+1} = D_h^{res} + [D_h^{max} - (t_h^{out} - E_h)] + t_{h+1}^{in},
    wherein E_{h+1} is the first reference time, D_h^{res} is the second remaining processing time, D_h^{max} is the third theoretical upper time limit associated with the second network device, t_h^{out} is the second output time, E_h is the second reference time, and t_{h+1}^{in} is the first time.
  15. The network device according to claim 13, wherein the processing unit is specifically configured to determine the first reference time based on the second time, the time information, and the first time.
  16. The network device according to claim 15, wherein the processing unit is specifically configured to determine a third remaining processing time according to the following formula:
    D_{h+1}^{res} = D_h^{res} + [D_h^{max} - (t_h^{out} - E_h) - (t'_{h+1}^{in} - t_{h+1}^{in})],
    wherein D_{h+1}^{res} is the third remaining processing time, D_h^{res} is the second remaining processing time, D_h^{max} is the third theoretical upper time limit associated with the second network device, t_h^{out} is the second output time, E_h is the second reference time, t_{h+1}^{in} is the first time, and t'_{h+1}^{in} is the second time; and
    determine the sum of the third remaining processing time and the second time as the first reference time.
  17. A network device, comprising: a processor configured to execute a program stored in a memory, wherein, when the program is executed, the network device is caused to perform the method according to any one of claims 1 to 8.
  18. The network device according to claim 17, wherein the memory is located outside the network device.
  19. A computer-readable storage medium, comprising instructions that, when run on a computer, cause the method according to any one of claims 1 to 8 to be performed.
  20. A network device, comprising a processor, a memory, and instructions stored in the memory and executable on the processor, wherein, when the instructions are executed, the network device is caused to perform the method according to any one of claims 1 to 8.
PCT/CN2021/104218 2020-07-31 2021-07-02 Method for scheduling data packets and related apparatus WO2022022224A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
EP21850966.9A EP4181480A4 (en) 2020-07-31 2021-07-02 DATA PACKET SCHEDULING METHOD AND RELATED APPARATUS
US18/162,542 US20230179534A1 (en) 2020-07-31 2023-01-31 Data packet scheduling method and related apparatus

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202010760185.1 2020-07-31
CN202010760185.1A CN114095453A (zh) 2020-07-31 2020-07-31 Method for scheduling data packets and related apparatus

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US18/162,542 Continuation US20230179534A1 (en) 2020-07-31 2023-01-31 Data packet scheduling method and related apparatus

Publications (1)

Publication Number Publication Date
WO2022022224A1 true WO2022022224A1 (zh) 2022-02-03

Family

ID=80037488

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/104218 WO2022022224A1 (zh) 2020-07-31 2021-07-02 Method for scheduling data packets and related apparatus

Country Status (4)

Country Link
US (1) US20230179534A1 (zh)
EP (1) EP4181480A4 (zh)
CN (1) CN114095453A (zh)
WO (1) WO2022022224A1 (zh)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115002117A (zh) * 2022-05-30 2022-09-02 中移(杭州)信息技术有限公司 Content delivery network dynamic scheduling method, system, device, and storage medium
CN115484407A (zh) * 2022-08-25 2022-12-16 奥比中光科技集团股份有限公司 Synchronous output method and system for multi-channel acquired data, and RGBD camera

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117768410A (zh) * 2022-09-19 2024-03-26 中兴通讯股份有限公司 Packet scheduling method, electronic device, and computer-readable storage medium
CN117955928A (zh) * 2022-10-20 2024-04-30 中兴通讯股份有限公司 Packet scheduling method, network device, storage medium, and computer program product

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104486250A (zh) * 2014-12-03 2015-04-01 中国航空工业集团公司第六三一研究所 Deadline-oriented scheduling method for satisfying time determinism
US20150222970A1 (en) * 2014-02-04 2015-08-06 Nec Laboratories America, Inc. Lossless and low-delay optical burst switching using soft reservations and opportunistic transmission
CN108282415A (zh) * 2017-12-29 2018-07-13 北京华为数字技术有限公司 Scheduling method and device
CN111416779A (zh) * 2020-03-27 2020-07-14 西安电子科技大学 Deadline-based queue scheduling method for Internet services
CN111431822A (zh) * 2020-04-19 2020-07-17 汪勤思 Implementation method for intelligent scheduling and control of deterministic-latency services

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7852763B2 (en) * 2009-05-08 2010-12-14 Bae Systems Information And Electronic Systems Integration Inc. System and method for determining a transmission order for packets at a node in a wireless communication network
US9124482B2 (en) * 2011-07-19 2015-09-01 Cisco Technology, Inc. Delay budget based forwarding in communication networks
EP3720069A4 (en) * 2017-12-31 2020-12-02 Huawei Technologies Co., Ltd. METHOD, DEVICE, AND SYSTEM FOR SENDING A MESSAGE
CN110086728B (zh) * 2018-01-26 2021-01-29 华为技术有限公司 Method for sending packets, first network device, and computer-readable storage medium

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20150222970A1 (en) * 2014-02-04 2015-08-06 Nec Laboratories America, Inc. Lossless and low-delay optical burst switching using soft reservations and opportunistic transmission
CN104486250A (zh) * 2014-12-03 2015-04-01 中国航空工业集团公司第六三一研究所 Deadline-oriented scheduling method for satisfying time determinism
CN108282415A (zh) * 2017-12-29 2018-07-13 北京华为数字技术有限公司 Scheduling method and device
CN111416779A (zh) * 2020-03-27 2020-07-14 西安电子科技大学 Deadline-based queue scheduling method for Internet services
CN111431822A (zh) * 2020-04-19 2020-07-17 汪勤思 Implementation method for intelligent scheduling and control of deterministic-latency services

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
LIANMING ZHANG: "Bounds on End-to-End Delay Jitter with Self-Similar Input Traffic in Ad Hoc Wireless Network", COMPUTING, COMMUNICATION, CONTROL, AND MANAGEMENT, 2008. CCCM '08. ISECS INTERNATIONAL COLLOQUIUM ON, IEEE, PISCATAWAY, NJ, USA, 3 August 2008 (2008-08-03), Piscataway, NJ, USA , pages 538 - 541, XP031314230, ISBN: 978-0-7695-3290-5 *
See also references of EP4181480A4

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115002117A (zh) * 2022-05-30 2022-09-02 中移(杭州)信息技术有限公司 Content delivery network dynamic scheduling method, system, device, and storage medium
CN115484407A (zh) * 2022-08-25 2022-12-16 奥比中光科技集团股份有限公司 Synchronous output method and system for multi-channel acquired data, and RGBD camera
CN115484407B (zh) * 2022-08-25 2023-07-04 奥比中光科技集团股份有限公司 Synchronous output method and system for multi-channel acquired data, and RGBD camera

Also Published As

Publication number Publication date
EP4181480A1 (en) 2023-05-17
CN114095453A (zh) 2022-02-25
EP4181480A4 (en) 2023-12-20
US20230179534A1 (en) 2023-06-08

Similar Documents

Publication Publication Date Title
WO2022022224A1 (zh) Method for scheduling data packets and related apparatus
WO2019184925A1 (zh) Packet sending method, network node, and system
WO2019214561A1 (zh) Packet sending method, network node, and system
JP4995101B2 (ja) Method and system for controlling access to a shared resource
JP7231749B2 (ja) Packet scheduling method, scheduler, network device, and network system
EP3032785B1 (en) Transport method in a communication network
Soni et al. Optimizing network calculus for switched ethernet network with deficit round robin
CN101212417B (zh) Time-granularity-based Internet quality of service assurance method
CN115604193B (zh) Deterministic resource scheduling method and system in a hot rolling control system
Baldi et al. Time-driven priority router implementation: Analysis and experiments
WO2022134978A1 (zh) Data sending method and apparatus
WO2022068617A1 (zh) Traffic shaping method and apparatus
Chen et al. Credit-based low latency packet scheduling algorithm for real-time applications
WO2022105686A1 (zh) Packet processing method and related apparatus
WO2022022222A1 (zh) Method for sending data packets and network device
Zhang et al. Hard real-time communication over multi-hop switched ethernet
WO2024016327A1 (zh) Packet transmission
Nisar et al. An efficient voice priority queue (VPQ) scheduler architectures and algorithm for VoIP over WLAN networks
Vila-Carbó et al. Analysis of switched Ethernet for real-time transmission
Şimşek et al. A new packet scheduling algorithm for real-time multimedia streaming
Yao et al. Burst-Aware Mixed Flow Scheduling in Time-Sensitive Networks for Power Business
Yin et al. Delay-jitter optimized starting potential-based fair queueing
Cobb Rate equalization: A new approach to fairness in deterministic quality of service
Shrivastava et al. Improving Efficiency of MANET by Reducing Queuing Delay Using Hybrid Algorithm.
Finn et al. Detnet problem statement

Legal Events

Date Code Title Description
ENP Entry into the national phase

Ref document number: 2021850966

Country of ref document: EP

Effective date: 20230207

NENP Non-entry into the national phase

Ref country code: DE

121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21850966

Country of ref document: EP

Kind code of ref document: A1