WO2023072112A1 - Method and apparatus for packet scheduling (一种报文调度的方法及装置) - Google Patents

Method and apparatus for packet scheduling (一种报文调度的方法及装置)

Info

Publication number
WO2023072112A1
Authority
WO
WIPO (PCT)
Prior art keywords
queue
message
token bucket
packet
path
Prior art date
Application number
PCT/CN2022/127533
Other languages
English (en)
French (fr)
Inventor
张永平
Original Assignee
华为技术有限公司 (Huawei Technologies Co., Ltd.)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 华为技术有限公司
Priority to EP22885989.8A (publication EP4344155A4)
Publication of WO2023072112A1

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L47/00 Traffic control in data switching networks
    • H04L47/10 Flow control; Congestion control
    • H04L47/215 Flow control; Congestion control using token-bucket
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L47/00 Traffic control in data switching networks
    • H04L47/10 Flow control; Congestion control
    • H04L47/36 Flow control; Congestion control by determining packet size, e.g. maximum transfer unit [MTU]
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L47/00 Traffic control in data switching networks
    • H04L47/50 Queue scheduling
    • H04L47/62 Queue scheduling characterised by scheduling criteria
    • H04L47/625 Queue scheduling characterised by scheduling criteria for service slots or service orders
    • H04L47/6255 Queue scheduling characterised by scheduling criteria for service slots or service orders queue load conditions, e.g. longest queue first
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L45/00 Routing or path finding of packets in data switching networks
    • H04L45/24 Multipath
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L45/00 Routing or path finding of packets in data switching networks
    • H04L45/302 Route determination based on requested QoS

Definitions

  • the present application relates to the field of communication technologies, and in particular to a method and device for message scheduling.
  • QoS (Quality of Service)
  • the packets received by the network device can be added to a queue, and the packets buffered in the queue can be dequeued according to a certain strategy.
  • traffic bursts can be shaped. Specifically, the rate at which tokens are injected into the token bucket can be limited, which in turn limits the rate at which packets are dequeued. If the rate at which packets join the queue reaches or exceeds the rate at which tokens are injected into the token bucket, packets are held in the queue and dequeued later. In this way, burst shaping is realized.
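As a minimal illustration of the shaping behavior described above (an illustrative sketch only; the class and parameter names are assumptions, not taken from this application), a token bucket that counts tokens in bytes can be modeled as:

```python
import time

class TokenBucket:
    """Minimal token-bucket shaper; one token represents one byte.

    Illustrative sketch only: the names and granularity here are
    assumptions, not the application's implementation.
    """
    def __init__(self, rate_bytes_per_s, capacity_bytes):
        self.rate = rate_bytes_per_s       # token injection rate
        self.capacity = capacity_bytes     # maximum tokens the bucket holds
        self.tokens = capacity_bytes       # start full
        self.last = time.monotonic()

    def refill(self):
        # Tokens accumulate over time, capped at the bucket capacity.
        now = time.monotonic()
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now

    def try_consume(self, nbytes):
        # A packet may be dequeued only if enough tokens remain.
        self.refill()
        if self.tokens >= nbytes:
            self.tokens -= nbytes
            return True    # packet may be dequeued
        return False       # packet stays in the queue
```

If packets arrive faster than `rate_bytes_per_s`, `try_consume` starts returning `False` and packets wait in the queue, which is exactly the deferral that realizes burst shaping.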
  • the impact of bursts can be further reduced. Specifically, if packets join the queue at a low rate, one token bucket can be used to schedule packets out of the queue; if packets join the queue at a high rate, multiple token buckets can be used. However, if multiple token buckets are used to schedule the packets in the queue, frequent token bucket switching may occur.
  • Embodiments of the present application provide a method and device for packet scheduling, aiming at reducing the frequency of token bucket switching.
  • the embodiment of the present application provides a message scheduling method, which can be applied to the first device, and the first device can be a network device such as a router or a switch, or it can be other devices for scheduling messages.
  • the method includes the following steps: before scheduling the first packet, the first device may first determine whether the number of remaining tokens in its first token bucket is sufficient for the first packet to be dequeued. If it is not, the first device may determine whether the length of the packets buffered in the queue is less than a first threshold.
  • the first threshold is the threshold at which the first device enables the second token bucket.
  • if the length of the packets buffered in the queue is not less than the first threshold, the first device may enable the second token bucket to schedule the first packet.
  • in that case, the first device may issue a token from the second token bucket for the first packet and dequeue the first packet. In this way, besides the status of the first token bucket, the first device can also decide whether to use the second token bucket to schedule the first packet out of the queue according to the length of the packets buffered in the queue.
  • the first token bucket and the second token bucket may correspond to different paths.
  • the first token bucket may correspond to the second path
  • the second token bucket may correspond to the first path.
  • the first path is a path whose link quality is lower than that of the second path. After a token of the second token bucket is issued for the first packet and the first packet is dequeued, the first device can send the first packet through the first path. In this way, when packets join the queue at a low rate, the number of remaining tokens in the first token bucket is sufficient for the first packet to be dequeued, and the first device can send the first packet through the second path, which has the higher link quality.
  • when packets join the queue at a high rate, the number of remaining tokens in the first token bucket is insufficient for the first packet to be dequeued, and the length of the packets buffered in the queue is not less than the first threshold, so the first device may send the first packet through the first path, which has the lower link quality.
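The path-selection logic described above can be sketched roughly as follows. This is a sketch of the described scheme, not the application's implementation; the helper names are hypothetical, and `bucket_c`/`bucket_p` are assumed to be objects whose `try_consume(nbytes)` deducts tokens and reports whether enough tokens remained:

```python
from collections import deque

def schedule_head_packet(bucket_c, bucket_p, queue, queued_bytes, threshold):
    """Try to dequeue the packet at the head of `queue` (a deque of byte sizes).

    bucket_c: first token bucket (preferred; its tokens map to the second
              path, the one with higher link quality)
    bucket_p: second token bucket, consulted only once the backlog reaches
              `threshold` (the first threshold in the text)
    Returns the path chosen for the packet, or None if it stays queued.
    """
    if not queue:
        return None
    pkt_bytes = queue[0]
    if bucket_c.try_consume(pkt_bytes):       # first bucket sufficient
        queue.popleft()
        return "second_path"                  # higher link quality
    if queued_bytes >= threshold and bucket_p.try_consume(pkt_bytes):
        queue.popleft()
        return "first_path"                   # lower link quality
    return None                               # keep the packet in the queue
```

Note the asymmetry: the second bucket is consulted only when the backlog has already reached the enable threshold, which is what keeps the device from flapping between buckets on small bursts.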
  • the network type of the access network where the first path is located is different from the network type of the access network where the second path is located. That is, the packet scheduling method provided in the embodiment of the present application can be applied to an application scenario of hybrid access.
  • the first token bucket and the second token bucket may correspond to different forwarding priorities.
  • the first token bucket may correspond to the second priority
  • the second token bucket may correspond to the first priority.
  • the forwarding priority of the first priority may be lower than that of the second priority.
  • after a token of the second token bucket is issued for the first packet, the first device may send the first packet according to the first priority. In this way, when packets join the queue at a low rate, the number of remaining tokens in the first token bucket is sufficient for the first packet to be dequeued, and the first device can send the first packet according to the second priority, which has the higher forwarding priority.
  • when packets join the queue at a high rate, the number of remaining tokens in the first token bucket is insufficient for the first packet to be dequeued, and the length of the packets buffered in the queue is not less than the first threshold, so the first device may send the first packet according to the first priority, which has the lower forwarding priority.
  • the first device may keep the packet in the queue. Specifically, before scheduling a second packet to be dequeued, the first device may determine whether the number of remaining tokens in the first token bucket is sufficient for the second packet to be dequeued. If it is not, the first device may further determine whether the length of the packets buffered in the queue is less than the first threshold.
  • if the number of remaining tokens in the first token bucket is sufficient for the second packet to be dequeued, or the length of the packets buffered in the queue is greater than or equal to the first threshold, the first device may schedule the second packet to be dequeued.
  • if the number of remaining tokens in the first token bucket is sufficient, the first device can issue a token from the first token bucket for the second packet and dequeue the second packet. It can be understood that, regardless of how the length of the packets buffered in the queue compares with the first threshold, if the remaining tokens in the first token bucket are sufficient for the second packet to be dequeued, the first device can schedule the second packet according to the first token bucket. In this way, the first device preferentially uses the first token bucket to schedule packets out of the queue, thereby improving the utilization of the processing method corresponding to the first token bucket.
  • if the length of the packets buffered in the queue is greater than or equal to the first threshold, the first device can issue a token from the second token bucket for the second packet and schedule the second packet to be dequeued.
  • if the second packet is issued a token of the first token bucket, the first device may send the second packet through the aforementioned second path, or send the second packet according to the aforementioned second priority.
  • when a trigger condition is met, the first device may determine whether the number of remaining tokens in the first token bucket is sufficient for the first packet to be dequeued.
  • the trigger condition may include any one or more of the following: a change in the number of tokens in the first token bucket, an increase in the number of tokens in the second token bucket, and a third packet joining the queue.
  • the first device includes a broadband remote access server (BRAS).
  • the embodiment of the present application provides a packet scheduling apparatus applied to the first device, including: a judging unit, configured to determine, when the number of remaining tokens in the first token bucket is insufficient for the first packet to be dequeued, whether the length of the packets buffered in the queue is less than a first threshold; and a scheduling unit, configured to, in response to the length of the packets buffered in the queue being not less than the first threshold, issue a token from the second token bucket for the first packet and dequeue the first packet.
  • the device further includes a sending unit, configured to send the first message through a first path, the first path being the forwarding path corresponding to the second token bucket
  • the link quality of the first path is lower than the link quality of the second path
  • the second path is a forwarding path corresponding to the first token bucket.
  • the network type of the access network where the first path is located is different from the network type of the access network where the second path is located.
  • the device further includes a sending unit, configured to send the first packet according to a first priority, where the first priority is the forwarding priority corresponding to the second token bucket
  • the forwarding priority of the first priority is lower than that of the second priority, where the second priority is the forwarding priority corresponding to the first token bucket.
  • the judging unit is further configured to determine whether the length of the packets buffered in the queue is less than the first threshold; the scheduling unit is further configured to, in response to the length of the packets buffered in the queue being less than the first threshold, keep the second packet in the queue until the number of remaining tokens in the first token bucket is sufficient for the second packet to be dequeued.
  • the judging unit is further configured to determine that the number of remaining tokens in the first token bucket is sufficient for the second packet to be dequeued; the scheduling unit is further configured to issue a token from the first token bucket for the second packet and dequeue the second packet.
  • the device further includes a sending unit, configured to send the second packet through a second path, where the second path is the forwarding path corresponding to the first token bucket.
  • the device further includes a sending unit, configured to send the second packet according to a second priority, where the second priority is the forwarding priority corresponding to the first token bucket.
  • the scheduling unit is further configured to, in response to a third packet entering the queue, determine whether the number of remaining tokens in the first token bucket is sufficient for the first packet to be dequeued.
  • the embodiment of the present application provides a first device, which includes a processor and a memory, where the memory is configured to store instructions or program code, and the processor is configured to call and run the instructions or program code to execute the packet scheduling method described in the first aspect.
  • an embodiment of the present application provides a chip, including a memory and a processor, where the memory is configured to store instructions or program code, and the processor is configured to call and run the instructions or program code from the memory to execute the packet scheduling method described in the first aspect.
  • the embodiment of the present application provides a computer-readable storage medium, including instructions, programs or code, which, when executed on a computer, enable the computer to execute the packet scheduling method described in the first aspect.
  • FIG. 1 is a schematic structural diagram of a system 100 provided in an embodiment of the present application.
  • FIG. 2 is a schematic flowchart of a packet scheduling method provided by an embodiment of the present application;
  • FIG. 3 is a method flowchart of a packet scheduling method provided in an embodiment of the present application;
  • FIG. 4 is a schematic structural diagram of an apparatus 400 for message scheduling provided by an embodiment of the present application.
  • FIG. 5 is a schematic structural diagram of a device 500 provided in an embodiment of the present application.
  • FIG. 6 is a schematic structural diagram of a device 600 provided in an embodiment of the present application.
  • FIG. 1 this figure is a schematic structural diagram of a system 100 provided in an embodiment of the present application.
  • a device 111, a device 112, a network device 121, a network device 122, a network device 123, a network device 124 and a network device 125 are included.
  • the network device 121 is connected to the device 111, the network device 122 and the network device 123 respectively
  • the network device 124 is connected to the network device 123 and the network device 125 respectively
  • the device 112 is connected to the network device 122 and the network device 125 respectively.
  • the network device 121 is connected to the device 111 through the network interface A1, the network device 121 is connected to the network device 122 through the network interface A2, and the network device 121 is connected to the network device 123 through the network interface A3.
  • the devices can send packets to each other. For example, device 111 can send a packet to device 112 through the path "network device 121 → network device 122", or send a packet to device 112 through the path "network device 121 → network device 123 → network device 124 → network device 125".
  • one or more token buckets can be configured in the network device 121 to determine whether the packets in the queue of the network device 121 can be discharged from the queue. If the number of remaining tokens in the token bucket is sufficient for the message to be dequeued, the message can be scheduled and dequeued from the queue, and the message is subsequently processed according to the token bucket corresponding to the dequeued message.
  • in the process of scheduling packet X out of the queue, the network device 121 can first determine whether the number of remaining tokens in the first token bucket is sufficient for packet X to be dequeued. If it is, the network device 121 may issue a token from the first token bucket for packet X and dequeue packet X. If it is not, the network device 121 may determine whether the number of remaining tokens in the second token bucket is sufficient for packet X to be dequeued. If it is, the network device 121 may issue a token from the second token bucket for packet X and dequeue packet X.
  • packet X can then be processed according to the token issued for it.
  • network device 121 may determine the forwarding path of a packet according to its token. If packet X is issued a token of the first token bucket, network device 121 can send packet X through the path "network device 121 → network device 122"; if packet X is issued a token of the second token bucket, network device 121 can send packet X through the path "network device 121 → network device 123 → network device 124 → network device 125".
  • the network device 121 can forward the packets through two different paths. In this way, the impact of traffic bursts on the network system can be reduced. It can be understood that, even if the network device 121 does not forward packets through two different paths, it can still process packets issued tokens of the first token bucket differently from packets issued tokens of the second token bucket, so as to achieve traffic shaping.
  • the traditional method of using multiple token buckets for packet scheduling may suffer from frequent token bucket switching, which causes the processing method corresponding to each token bucket to be invoked erratically and affects the network system. Specifically, if different token buckets correspond to different forwarding paths, frequent switching of token buckets may cause the actual utilization of a path to be lower than its attainable utilization.
  • for example, the network device 121 can dispatch packets out of the queue through the second token bucket when the rate at which packets enter the queue is greater than the rate at which tokens are injected into the first token bucket;
  • otherwise, the first token bucket is used to dispatch packets out of the queue.
  • in this case, the network device 121 will repeatedly switch between the first token bucket and the second token bucket when issuing tokens for packets. If the tokens in the first token bucket and the tokens in the second token bucket correspond to different paths, the utilization of a path by the network device 121 may be lower than its attainable utilization.
  • suppose tokens are injected into the first token bucket of the network device 121 at a rate V; between time t1 and time t2, packets join the queue of the network device 121 at a rate of 0.5V; between time t2 and time t3, packets join the queue at a rate of 1.5V; between time t3 and time t4, packets join the queue at a rate of 0.5V.
  • between time t1 and time t2, network device 121 can forward packets through the path "network device 121 → network device 122" at a rate corresponding to 0.5V. Between time t2 and time t3, network device 121 can forward some packets through the path "network device 121 → network device 122" at a rate corresponding to V, and forward the remaining packets through the path "network device 121 → network device 123 → network device 124 → network device 125" at a rate corresponding to 0.5V. Between time t3 and time t4, network device 121 can forward packets through the path "network device 121 → network device 122" at a rate corresponding to 0.5V.
  • during part of this period, the rate at which network device 121 forwards packets through the path "network device 121 → network device 122" corresponds to only 0.5V, so from time t1 to time t4 the average rate at which network device 121 forwards packets through this path is lower than the rate corresponding to V.
  • that is, the utilization of the path "network device 121 → network device 122" by network device 121 does not reach 100%, yet network device 121 may still forward packets through other paths. As a result, the actual utilization of the path "network device 121 → network device 122" is lower than its attainable utilization, and the utilization of this path by network device 121 is relatively low.
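The averaging in this example can be checked numerically, assuming (as the text does not state explicitly) that the three intervals [t1,t2], [t2,t3] and [t3,t4] have equal length:

```python
# Forwarding rates on the preferred path "network device 121 -> network
# device 122" in the three equal-length intervals, expressed as fractions
# of V, the token injection rate of the first token bucket.
V = 1.0
interval_rates = [0.5 * V, 1.0 * V, 0.5 * V]

# Average rate on the preferred path from t1 to t4.
average_rate = sum(interval_rates) / len(interval_rates)
print(average_rate)  # about 0.67 of V, i.e. below the attainable rate V
```

So even though the preferred path could carry traffic at the full rate V, switching part of the burst to the second bucket's path leaves the preferred path roughly one third idle on average.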
  • the embodiment of the present application provides a method and device for packet scheduling, which aim to reduce the frequency of token bucket switching and improve the utilization of the processing methods corresponding to the token buckets.
  • the packet scheduling method provided in the embodiment of the present application can be applied to the system shown in FIG. 1 . Specifically, it may be executed by any one or more network devices in the network device 121 , the network device 122 , the network device 123 , the network device 124 and the network device 125 in the embodiment shown in FIG. 1 .
  • the network device may be a device with a forwarding function, such as a router or a switch, or another device with a forwarding function such as a server or a terminal device.
  • the packet scheduling method provided by the embodiment of the present application is executed by the network device 121
  • the method may be used, for example, to schedule packets received by the network device 121 through the network interface A1.
  • the device 111 and the device 112 may be terminal devices, servers or other devices.
  • the packet scheduling method provided in the embodiment of the present application may also be executed by an access device.
  • the method may be executed by an access device with a BRAS function, and is used for scheduling messages received by the access device from the terminal device.
  • this figure is a schematic flow chart of a method for message scheduling provided in an embodiment of the present application, including:
  • the first device may be a network device or an access device in the network system.
  • the first device may be any one of network device 121 , network device 122 , network device 123 , network device 124 , and network device 125 .
  • the first device has at least two token buckets, namely a first token bucket and a second token bucket, and also has a queue for buffering packets. The token buckets and the queue are introduced below.
  • the first token bucket is a token bucket owned by the first device and can be used to store tokens.
  • the tokens stored in the first token bucket may be referred to as first tokens.
  • the first device may inject the first token into the first token bucket according to a first preset rate.
  • the first token bucket may have a first capacity threshold, indicating the maximum number of tokens that the first token bucket can hold. After the number of remaining tokens in the first token bucket reaches this capacity threshold, it does not continue to increase.
  • the second token bucket is also a token bucket of the first device for storing tokens.
  • the tokens stored in the second token bucket may be called second tokens, and the maximum number of tokens that the second token bucket can hold may be called the second capacity threshold.
  • the rate at which the first device injects second tokens into the second token bucket may be referred to as a second preset rate. It can be understood that, if different token buckets correspond to different forwarding priorities, the forwarding priority of the second token bucket may be lower than that of the first token bucket.
  • the first device may preferentially use the first token bucket to schedule packets to be discharged from the queue.
  • the first token bucket may be called a C token bucket
  • the second token bucket may be called a P token bucket
  • the "token bucket" and "token" involved in the embodiments of this application may be virtual concepts, and do not represent physical buckets or tokens.
  • a floating-point or integer variable may be used to represent the number of remaining tokens in the first token bucket or in the second token bucket.
  • the first capacity threshold and the second capacity threshold may be the same or different; the first preset rate and the second preset rate may be the same or different.
  • the queue for buffering packets in the first device may also be referred to as a buffer queue.
  • the first device may first add received packets to the buffer queue. Packets stored in the buffer queue can be dequeued through the first token bucket or the second token bucket.
  • the buffer queue can have a queue upper limit.
  • after the amount of packets buffered in the buffer queue reaches the queue upper limit, the first device does not add newly received packets to the buffer queue. For example, the first device may discard packets received after that point.
  • the queue upper limit may be determined by the storage space allocated to the buffer queue in the first device, or may be set by technicians according to actual application conditions.
  • the amount of buffered packets may be measured as the number of packets or as the total number of bytes of the packets.
  • the first device may add the message to the tail of the queue.
  • the first device may preferentially schedule the packets at the head of the queue. That is to say, the first device can schedule the packets according to the time when the packets are added to the queue, and the earlier the time when the packets are added to the queue, the earlier the time when the packets are dequeued.
  • the "first packet” mentioned later may be the first packet at the head of the queue in the queue.
  • when a trigger condition is met, the first device may determine whether the number of remaining tokens in the first token bucket is sufficient for the first packet to be dequeued. The trigger condition indicates that there is a possibility of dequeuing the first packet.
  • the trigger condition may include any one or more of the following: an increase in the number of remaining tokens in the first token bucket, an increase in the number of remaining tokens in the second token bucket, the network device adding a new packet to the queue, and the network device dequeuing a packet from the queue.
  • the number of remaining tokens in the first token bucket being sufficient for the first packet to be dequeued may mean that the number of remaining tokens in the first token bucket is greater than or equal to the number of bytes of the first packet.
  • the first device may perform a corresponding operation according to the judgment result, and two possible implementation manners corresponding to the two judgment results are respectively introduced below.
  • in the first case, the first device determines that the number of remaining tokens in the first token bucket is insufficient for the first packet to be dequeued. The first device may then determine whether the length of the packets buffered in the queue is less than the first threshold, and schedule the first packet according to the result. Here, the number of remaining tokens being insufficient may mean that it is less than the number of bytes of the first packet.
  • the first threshold may be the threshold for enabling the second token bucket.
  • that is, before enabling the second token bucket, the first device may first determine whether the length of the packets buffered in the queue has reached the first threshold.
  • the length of the packets buffered in the queue may be the total number of bytes of all packets buffered in the queue, in which case the unit of the first threshold may be the byte.
  • the length of the packets buffered in the queue may also include the total number of packets buffered in the queue.
  • in the second case, the first device determines that the number of remaining tokens in the first token bucket is sufficient for the first packet to be dequeued. The first device may then issue a token from the first token bucket for the first packet and dequeue the first packet. Issuing the token of the first token bucket for the first packet means that, in subsequent processing, the first packet is processed in the processing mode corresponding to the first token bucket. Optionally, issuing the token of the first token bucket for the first packet may include adding a mark corresponding to the first token bucket to the first packet, or recording the first packet as having been dequeued through the first token bucket. The processing method corresponding to the tokens of the first token bucket is introduced later and is not repeated here.
  • in addition, the first device may adjust the number of remaining tokens in the first token bucket according to the first packet after the first packet is dequeued. For example, the first device may remove some or all tokens from the first token bucket, where the number of removed tokens may equal the number of bytes of the first packet. Assuming that the number of tokens represents the number of bytes that can be scheduled, the first device may remove i tokens from the first token bucket after scheduling a packet of i bytes out of the queue.
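Under the stated assumption that one token represents one schedulable byte, the token adjustment amounts to the following (the function name is illustrative, not from this application):

```python
def remaining_after_dequeue(remaining_tokens, dequeued_bytes):
    # After scheduling an i-byte packet out of the queue, i tokens are
    # removed from the bucket; the count is clamped so that it never
    # goes below zero.
    return max(0, remaining_tokens - dequeued_bytes)
```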
  • the first device may issue a token of the second token bucket for the first packet, and discharge the first packet from the queue.
  • the first threshold is a threshold for the first device to enable the second token bucket for packet scheduling. If the length of the packets buffered in the queue is greater than or equal to the first threshold, the first device may enable the second token bucket to schedule the packets buffered in the queue to be dequeued.
  • the first threshold may represent the packet length that may cause network congestion, for example, it may be 50% of the aforementioned queue threshold.
  • the first threshold may be determined according to the burst limit that can be tolerated by the processing mode corresponding to the first token bucket. That is to say, the processing mode corresponding to the first token bucket can tolerate a backlog of unscheduled packets whose length is less than or equal to the first threshold.
  • the length of the packets buffered in the queue may refer to the number of packets buffered in the queue, or may refer to the total number of bytes of the packets buffered in the queue.
  • the first device issues a token of the second token bucket for the first packet so that, in subsequent processing, the first packet is processed in the processing mode corresponding to the second token bucket.
  • issuing the token of the second token bucket for the first packet may include adding a mark corresponding to the second token bucket to the first packet, or may include recording the first packet as a packet dequeued through scheduling by the second token bucket.
  • the processing mode corresponding to tokens in the second token bucket is described later and is not repeated here.
  • after the first packet is dequeued, the first device can adjust the number of remaining tokens in the second token bucket according to the first packet.
  • the number of remaining tokens in the second token bucket may not satisfy the condition for dequeuing the first packet; for example, the number of remaining tokens in the second token bucket may be less than the number of bytes of the first packet.
  • the first device may not dequeue the first packet. That is to say, when neither the number of remaining tokens in the first token bucket nor the number of remaining tokens in the second token bucket satisfies the condition for dequeuing the first packet, the first device may keep the first packet in the queue. Since the number of remaining tokens in a token bucket gradually increases over time, the first device can wait until the remaining tokens in the first token bucket (or the second token bucket) accumulate to satisfy the condition for dequeuing the first packet, and then dequeue the first packet.
  • the first threshold is the threshold for the first device to enable the second token bucket. If the length of the packets buffered in the queue of the first device is lower than the first threshold, the queue has not reached the threshold for enabling the second token bucket, and the first device may not schedule packets out of the queue through the second token bucket. It can be understood that, even if the number of remaining tokens in the first token bucket does not satisfy the condition for dequeuing the first packet, when the length of the packets buffered in the queue is lower than the first threshold the first device may not schedule the first packet out of the queue, and instead keeps the first packet buffered in the queue.
  • the first device may also determine, according to the length of the packets buffered in the queue, whether to use the second token bucket to schedule the first packet out of the queue. By checking whether the length of the packets buffered in the queue is smaller than the first threshold, the first device avoids immediately dispatching packets through the second token bucket whenever the number of remaining tokens in the first token bucket is insufficient. This avoids frequent switching between the first token bucket and the second token bucket, thereby improving the utilization of the processing mode corresponding to the first token bucket.
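The two-bucket decision described above can be sketched as a single function. This is an illustrative outline only; parameter names are hypothetical, and lengths are measured in bytes for the buckets and in packets for the queue-length check.

```python
def schedule_first_packet(tokens1, tokens2, queue_len, first_threshold, pkt_bytes):
    """Decide how to dequeue the packet at the head of the queue.

    Returns 'first' or 'second' to indicate which bucket issues a token,
    or None to keep the packet in the queue.
    """
    if tokens1 >= pkt_bytes:
        return "first"            # prefer the first token bucket
    if queue_len < first_threshold:
        return None               # second bucket not enabled yet; wait
    if tokens2 >= pkt_bytes:
        return "second"           # fall back to the second bucket
    return None                   # wait for tokens to accumulate


print(schedule_first_packet(2000, 0, 10, 50, 1500))    # first
print(schedule_first_packet(100, 4000, 10, 50, 1500))  # None
print(schedule_first_packet(100, 4000, 60, 50, 1500))  # second
```

The middle call shows the anti-flapping behavior: although the second bucket has tokens, the backlog is below the first threshold, so the packet waits for the first bucket to refill.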
  • the rate at which packets are added to the queue fluctuates around the first preset rate.
  • when the rate at which messages are added to the queue is higher than the first preset rate, the number of remaining tokens in the first token bucket may not be sufficient for dequeuing the messages, so the first device can keep the messages that have entered the queue but cannot yet be scheduled in the queue.
  • when the rate at which messages are added to the queue is lower than the first preset rate, the messages added to the queue cannot fully consume the tokens injected into the first token bucket; the remaining tokens in the first token bucket can then be used to schedule the messages buffered in the queue out of the queue, thereby reducing the amount of messages buffered in the queue.
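The refill behavior implied above can be sketched as follows. This is a minimal illustration, assuming tokens are injected periodically at the first preset rate and capped at a bucket depth; the function and parameter names are hypothetical.

```python
def refill(tokens, rate, elapsed, depth):
    """Add tokens at the preset rate, capped at the bucket depth.

    tokens: current remaining tokens; rate: tokens injected per time unit;
    elapsed: time units since the last refill; depth: maximum bucket size.
    """
    return min(depth, tokens + rate * elapsed)


tokens = 0
for _ in range(5):
    # One refill per period: when arrivals consume fewer tokens than
    # the refill adds, the remaining tokens accumulate up to the depth.
    tokens = refill(tokens, rate=100, elapsed=1, depth=300)
print(tokens)  # 300
```

Capping at the depth bounds the burst that can be dequeued at once, which is why a low arrival rate lets the buffered backlog drain instead of growing without limit.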
  • the above describes the method for the first device to schedule the first message to be dequeued.
  • the following describes the method for the first device to process the first message according to the token issued by the first message after the first message is dequeued.
  • the first device may determine the forwarding path of the first packet according to the token issued for the first packet, or may determine the forwarding priority of the first packet according to the token issued for the first packet.
  • the two implementation methods are introduced respectively below.
  • the first device determines the forwarding path of the first message according to the issued token of the first message. That is to say, the tokens in the first token bucket correspond to one forwarding path, and the tokens in the second token bucket may correspond to another forwarding path.
  • the forwarding path corresponding to the token in the first token bucket may be called the second path, and the forwarding path corresponding to the token in the second token bucket may be called the first path.
  • the first device may preferentially schedule packets to be dequeued through the first token bucket. That is to say, the network device may preferentially select the second path to forward the packet. Then, the second path corresponding to the first token bucket may be a path with higher link quality. That is, the link quality of the second path may be higher than that of the first path.
  • the first device can determine the forwarding path of the first packet from the first path and the second path according to the token issued for the first packet, then determine the outgoing interface for sending the first packet, and send the first packet through the corresponding outgoing interface. If the token of the first token bucket is issued for the first packet, the first device may forward the first packet through the outgoing interface corresponding to the second path. If the token of the second token bucket is issued for the first packet, the first device may forward the first packet through the outgoing interface corresponding to the first path.
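The token-to-interface selection just described can be sketched as a lookup. The interface names follow the FIG. 1 example (A2 toward network device 122, A3 toward network device 123); the dictionary keys are illustrative labels, not terms from the patent.

```python
# Hypothetical mapping from the issued token to the outgoing interface.
OUT_IFACE = {
    "first_bucket": "A2",   # second path: network device 121 -> network device 122
    "second_bucket": "A3",  # first path: 121 -> 123 -> 124 -> 125
}


def select_out_interface(issued_token):
    # The issued token determines the forwarding path, hence the interface.
    return OUT_IFACE[issued_token]


print(select_out_interface("first_bucket"))   # A2
print(select_out_interface("second_bucket"))  # A3
```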
  • the first device is the network device 121 in FIG. 1
  • the first packet is the packet X sent by the device 111 to the device 112 .
  • the network device 121 can receive the message X through the network interface A1, and forward the message through the network interface A2 or the network interface A3.
  • the method provided in the embodiment of the present application can be applied at the incoming interface of the network device 121, that is, network interface A1, so that after the network device 121 receives the message X, the message X is forwarded according to the conditions of the token buckets and the queue.
  • the link quality of the path "network device 121→network device 123→network device 124→network device 125" may be low; for example, the delay of the path may be high. Therefore, the path "network device 121→network device 123→network device 124→network device 125" may be determined as the first path, and the path "network device 121→network device 122" may be determined as the second path.
  • if message X is issued the token of the first token bucket, network device 121 can schedule message X out of the queue through network interface A2, so that message X is forwarded along the path "network device 121→network device 122"; if message X is issued the token of the second token bucket, network device 121 can schedule message X out of the queue through network interface A3, so that message X is forwarded along the path "network device 121→network device 123→network device 124→network device 125".
  • the network device 121 can forward packets through the path "network device 121 ⁇ network device 122" with a higher link quality.
  • the network device 121 can share the pressure of the path "network device 121 ⁇ network device 122" through the path "network device 121 ⁇ network device 123 ⁇ network device 124 ⁇ network device 125".
  • frequent switching between the first token bucket and the second token bucket can also be avoided, and traffic shaping of the packet flow can be realized.
  • the network type of the access network where the first path is located may be different from the network type of the access network where the second path is located. That is to say, the first device may select different access networks to forward the message according to the situation of the message entering the queue. Then, the performance of the access network where the second path is located may be better than that of the access network where the first path is located. The first device preferentially forwards the first message through the second path, which can improve the QoS parameter of the first message or reduce the forwarding cost of the message.
  • the hybrid access aggregation point (Hybrid Access Aggregation Point, HAAP) in a hybrid access (Hybrid Access, HA) scenario can support a user in binding two access networks: digital subscriber line (Digital Subscriber Line, DSL) and long term evolution (Long Term Evolution, LTE).
  • the message can be transmitted through the path corresponding to the DSL first, and then transmitted through the path corresponding to the LTE when the bandwidth of the DSL is insufficient.
  • the foregoing first path may be a path corresponding to LTE
  • the second path may be a path corresponding to DSL.
  • the network type of the network where the first path is located may be the same as the network type of the network where the second path is located, but the cost of forwarding packets through the network where the first path is located may differ from the cost of forwarding packets through the network where the second path is located.
  • the bandwidth of the first path may be greater than the bandwidth of the second path. In this way, during the process of forwarding packets, the first device can preferentially use the second path with the smaller bandwidth to forward packets, and then use the first path to forward packets. In this way, the utilization of the second path can be improved, thereby reducing the cost of packet transmission.
  • the first device determines the forwarding priority of the first message according to the issued token of the first message.
  • the forwarding priority is used to instruct the network device forwarding the first packet to forward the first packet.
  • the first token bucket and the second token bucket respectively correspond to different forwarding priority levels.
  • depending on the issued token, the forwarding priority that the first device sets for the first message also differs.
  • the forwarding priority level corresponding to the token in the first token bucket is called the second priority level
  • the forwarding priority level corresponding to the token in the second token bucket is called the first priority level.
  • the forwarding priority of the first priority is lower than the forwarding priority of the second priority. That is, during the process of forwarding packets, a device can preferentially forward packets of the second priority.
  • the first device may forward the first packet according to the forwarding priority of the first packet.
  • the first device may add a mark for indicating the forwarding priority to the first packet, and then send the marked first packet to the next hop device.
  • the next-hop device can determine the forwarding priority level of the first message according to the label in the first message, so as to select the forwarding mode corresponding to the forwarding priority level to forward the first message.
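The marking step described above can be sketched as follows. This is a minimal illustration; the color names match the Green/Yellow states used in the example below, but the field name `color` and the dictionary layout are hypothetical.

```python
# Tokens of the first bucket map to the second (higher) priority, marked
# green; tokens of the second bucket map to the first priority, yellow.
PRIORITY_MARK = {
    "first_bucket": "green",
    "second_bucket": "yellow",
}


def mark_packet(packet, issued_token):
    """Return a copy of the packet carrying the forwarding-priority mark."""
    marked = dict(packet)
    marked["color"] = PRIORITY_MARK[issued_token]
    return marked


pkt = mark_packet({"payload": b"Y"}, "first_bucket")
print(pkt["color"])  # green
```

The next-hop device only needs to read this mark to pick the forwarding mode; it does not need to know which bucket issued the token.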
  • the first packet is a packet Y sent from the device 111 to the device 112 .
  • the network device 121 may determine the forwarding priority of the message Y according to the token issued by the message Y, and add a corresponding mark to the message Y.
  • if message Y is issued the token of the first token bucket, network device 121 may determine that the forwarding priority of message Y is the second priority, and add a mark corresponding to the second priority to message Y; for example, the network device can mark message Y as being in the green (Green) state. If message Y is issued the token of the second token bucket, network device 121 can determine that the forwarding priority of message Y is the first priority, and add a mark corresponding to the first priority to message Y; for example, the network device may mark message Y as being in the yellow (Yellow) state.
  • network device 121 may send packet Y to network device 122 .
  • a single rate three color marker (Single Rate Three Color Marker, SRTCM) algorithm or a two rate three color marker (Two Rate Three Color Marker, TRTCM) algorithm may be deployed on the network device 122.
  • the network device 122 can determine the scheduling mode of the message Y according to the label in the message Y, so as to forward the message Y according to the forwarding priority.
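For context, the marking step of a color-blind srTCM can be sketched as follows. This is a simplification of the two-bucket structure defined in RFC 2697 (committed and excess token counts, here already refilled); the function and variable names are illustrative, not from the patent.

```python
def srtcm_color(tc, te, size):
    """Color-blind srTCM marking step (simplified from RFC 2697).

    tc: remaining committed tokens in bytes (capped at the CBS).
    te: remaining excess tokens in bytes (capped at the EBS).
    size: packet size in bytes. Returns (color, new_tc, new_te).
    """
    if tc >= size:
        return "green", tc - size, te    # within the committed burst
    if te >= size:
        return "yellow", tc, te - size   # within the excess burst
    return "red", tc, te                 # out of profile; no tokens taken


print(srtcm_color(1000, 500, 800))  # ('green', 200, 500)
```

A device receiving the green/yellow marks from the previous example can feed them into a scheduler that forwards green packets preferentially and treats yellow ones as best-effort.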
  • the first device may determine the forwarding path of the first message according to the issued token of the first message, or may determine the forwarding priority of the first message according to the issued token of the first message. In some possible implementation manners, the first device may determine the forwarding path and the forwarding priority of the first message according to the issued token of the first message.
  • the network device 121 is used to receive the packet flow from the device 111, and the target device of each packet in the packet flow is the device 112 as an example for illustration.
  • the tokens in the first token bucket correspond to the second path, and the tokens in the second token bucket correspond to the first path.
  • FIG. 3 is a flowchart of a packet scheduling method provided in an embodiment of the present application, including:
  • S301 The network device 121 may receive the message M sent by the device 111 through the network interface A1. After receiving the message M, the network device 121 may act as the first device and execute the packet scheduling method provided in the embodiment of the present application.
  • S302 The network device 121 judges whether the length of the packet buffered in the queue is not less than a threshold value.
  • the network device 121 may determine whether the length of the message buffered in the queue is not less than the threshold.
  • the queue is a queue for storing messages to be scheduled
  • the threshold value may be the maximum packet length that the queue can accommodate, or the difference between the maximum packet length that the queue can accommodate and the length of the message M.
  • the threshold value may be represented by the number of packets or the total number of bytes of packets.
  • If the length of the packets buffered in the queue is greater than or equal to the threshold value, the queue cannot accept new packets, and the network device 121 may execute step S303. If the length of the packets buffered in the queue is smaller than the threshold value, the network device 121 may execute step S304.
  • the network device 121 may alternatively execute step S304 when the length of the packets buffered in the queue is equal to the threshold value.
  • the network device 121 may discard the packet M.
  • the network device 121 may not discard the message M when the length of the packets buffered in the queue is greater than or equal to the threshold value; for example, the network device 121 may store the message M in a storage location other than the queue, and schedule the message M when the condition is met.
  • the network device 121 may add the message M to the queue, and continue to execute step S305.
  • the network device 121 may add the message M to the tail of the queue.
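Steps S302 to S304 can be sketched as a simple admission check. This is a minimal illustration in which the queue length is measured in packets (bytes would work the same way); the function name is hypothetical.

```python
def enqueue(queue, pkt, queue_threshold):
    """S302-S304: admit the packet only if the queue is below the threshold."""
    if len(queue) >= queue_threshold:
        return False      # S303: drop (or store outside the queue)
    queue.append(pkt)     # S304: add to the tail of the queue
    return True


q = []
print(enqueue(q, "M1", 2), enqueue(q, "M2", 2), enqueue(q, "M3", 2))
# True True False
```

The third packet is rejected because the queue has reached the threshold; a real implementation could instead park it outside the queue as noted above.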
  • S305 The network device 121 judges whether the remaining tokens in the first token bucket satisfy the condition for dequeuing the target packet.
  • the network device 121 may determine whether the remaining tokens in the first token bucket satisfy the condition for dequeuing the target message.
  • the target message is a message buffered in the queue and located at the head of the queue. That is, the target message is the first message that needs to be scheduled among the messages to be scheduled.
  • other messages in the queue remain in the queue and will not be dequeued. It can be understood that, as the target message is dequeued, the network device 121 can determine the message at the head of the queue after the target message is dequeued as a new target message, that is, the target message is always the first one in the queue. Packets to be scheduled.
  • the network device 121 may compare the number of remaining tokens in the first token bucket with the number of bytes of the target packet. If the number of remaining tokens in the first token bucket is greater than or equal to the number of bytes of the target packet, the network device can determine that the remaining tokens in the first token bucket satisfy the condition for dequeuing the target packet, and execute step S306; if the number of remaining tokens in the first token bucket is less than the number of bytes of the target packet, the network device may determine that the remaining tokens in the first token bucket do not satisfy the condition for dequeuing the target packet, and execute step S307.
  • the network device 121 may issue tokens in the first token bucket for the target packet, and schedule the target packet out of the queue. The network device 121 may then forward the target packet through the second path corresponding to the tokens in the first token bucket.
  • the second path may be the path "network device 121 ⁇ network device 122".
  • the network device 121 may adjust the number of remaining tokens in the first token bucket. For example, some or all tokens may be removed from the first token bucket, and the number of removed tokens may be equal to the number of bytes of the target packet.
  • the network device 121 may determine the packet at the head of the queue as a new target packet, and return to step S305.
  • S307 The network device 121 judges whether the packet length buffered in the queue is not less than the first threshold.
  • the network device 121 may further determine whether the length of the packet buffered in the queue is smaller than the first threshold. If the packet length buffered in the queue is not less than the first threshold, the network device 121 may execute step S308; if the packet length buffered in the queue is smaller than the first threshold, the network device 121 may execute step S310.
  • S308 The network device 121 judges whether the remaining tokens in the second token bucket satisfy the condition for dequeuing the target packet.
  • the "first threshold" in the embodiment of the present application is the threshold for enabling the second token bucket for packet scheduling. If the length of the packets buffered in the queue is greater than or equal to the first threshold, the network device 121 may enable the second token bucket to schedule the packets. In the process of scheduling packets by using the second token bucket, the network device 121 may first determine whether the remaining tokens in the second token bucket satisfy the condition for dequeuing the target packet. For the specific process of this judgment, reference may be made to the descriptions of the foregoing embodiments, which are not repeated here.
  • if the remaining tokens in the second token bucket satisfy the condition for dequeuing the target packet, the network device 121 can perform step S309; if the remaining tokens in the second token bucket do not satisfy the condition for dequeuing the target packet, the network device 121 cannot schedule the target packet out of the queue through the second token bucket, and the network device 121 can perform step S310.
  • the network device 121 may issue tokens in the second token bucket for the target packet, and schedule the target packet out of the queue. The network device 121 may then forward the target packet through the first path corresponding to the tokens in the second token bucket.
  • the first path may be the path "network device 121 ⁇ network device 123 ⁇ network device 124 ⁇ network device 125".
  • after scheduling the target packet out of the queue, the network device 121 may adjust the number of remaining tokens in the second token bucket. For example, some or all tokens may be removed from the second token bucket, and the number of removed tokens may be equal to the number of bytes of the target packet. In addition, after scheduling the target packet out of the queue, the network device 121 may determine the packet at the head of the queue as the new target packet, and return to step S305.
  • S310 The network device 121 keeps the target packet in the queue.
  • the network device 121 can keep the target message in the queue without scheduling it out of the queue. Specifically, if the length of the packets buffered in the queue is smaller than the first threshold, the network device 121 does not enable the second token bucket to schedule the packets. Then, because the remaining tokens in the first token bucket do not satisfy the condition for dequeuing the target message, the network device 121 may keep the target message in the queue and not schedule the target message out of the queue.
  • the network device 121 may keep the target packet in the queue, and not schedule the target packet to be discharged from the queue.
  • the network device 121 may not enable the second token bucket to schedule the message, keeping the target packet in the queue.
  • the network device 121 may enable the second token bucket to schedule the target packet.
  • S311 The network device 121 adjusts the number of remaining tokens in the first token bucket, and/or adjusts the number of remaining tokens in the second token bucket.
  • the network device 121 can adjust the number of remaining tokens in the first token bucket according to the first preset rate, and/or the network device 121 can adjust the number of remaining tokens in the second token bucket according to the second preset rate.
  • the network device may periodically inject tokens into the first token bucket and/or the second token bucket.
  • the network device 121 may return to step S305 to determine whether the token bucket satisfies the condition for discharging packets into the queue.
  • It should be noted that, although step S311 is shown after step S310, step S311 can be executed at any time; the adjustment of the number of remaining tokens in the token buckets is independent of whether packets are dequeued.
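The dequeue loop of steps S305 to S310 can be sketched end to end. This is an illustrative outline of the flowchart, assuming packet sizes in bytes, tokens in bytes, and a queue-length threshold counted in packets; the function name and return shape are hypothetical.

```python
def drain(queue, b1, b2, first_threshold):
    """S305-S310 loop: repeatedly try to dequeue the packet at the head.

    queue: list of packet sizes in bytes; b1/b2: remaining tokens of the
    first and second buckets. Returns a list of (size, bucket) dequeued;
    stops when the head packet must wait for a refill (S311).
    """
    out = []
    while queue:
        size = queue[0]                 # target packet at the head (S305)
        if b1 >= size:                  # first bucket satisfies dequeue (S306)
            b1 -= size
            out.append((queue.pop(0), "first"))
        elif len(queue) >= first_threshold and b2 >= size:  # S307-S309
            b2 -= size
            out.append((queue.pop(0), "second"))
        else:                           # S310: keep the target packet queued
            break
    return out


queue = [500, 500, 500]
dequeued = drain(queue, b1=600, b2=2000, first_threshold=2)
print(dequeued)  # [(500, 'first'), (500, 'second')]
print(queue)     # [500]
```

In the example, the first packet consumes the first bucket, the second goes out through the second bucket because the backlog still reaches the threshold, and the third stays queued until step S311 refills the buckets.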
  • the embodiment of the present application further provides a packet scheduling apparatus 400 , which can realize the function of the first device in the embodiment shown in FIG. 2 or FIG. 3 .
  • the device 400 for packet scheduling includes a judging unit 410 and a scheduling unit 420 .
  • the judging unit 410 is configured to implement S201 in the embodiment shown in FIG. 2
  • the scheduling unit 420 is configured to implement S202 in the embodiment shown in FIG. 2 .
  • the judging unit 410 is configured to determine, when the number of remaining tokens in the first token bucket does not satisfy the condition for dequeuing the first message, whether the length of the packets buffered in the queue is smaller than the first threshold.
  • the scheduling unit 420 is configured to issue a token of the second token bucket for the first message in response to the length of the packets buffered in the queue being not less than the first threshold, and to dequeue the first message from the queue.
  • each functional unit in the embodiment of the present application may be integrated into one processing unit, or each unit may physically exist separately, or two or more units may be integrated into one unit.
  • the scheduling unit and the judging unit may be the same unit or different units.
  • the above-mentioned integrated units can be implemented in the form of hardware or in the form of software functional units.
  • FIG. 5 is a schematic structural diagram of a device 500 provided in an embodiment of the present application.
  • the packet scheduling apparatus 400 above can be realized by the device shown in FIG. 5 .
  • the device 500 includes at least one processor 501 , a communication bus 502 and at least one network interface 504 , and optionally, the device 500 may further include a memory 503 .
  • the processor 501 may be a general-purpose central processing unit (Central Processing Unit, CPU), an application-specific integrated circuit (Application-specific Integrated Circuit, ASIC), or one or more integrated circuits (Integrated Circuit, IC) for controlling the execution of the programs of this application.
  • the processor can be used to process the message and the token bucket, so as to implement the message scheduling method provided in the embodiment of the present application.
  • the processor can be configured to: when the number of remaining tokens in the first token bucket does not satisfy the condition for dequeuing the first packet, determine whether the length of the packets buffered in the queue is smaller than the first threshold; and, in response to the length of the packets buffered in the queue being not less than the first threshold, issue a token of the second token bucket for the first packet and dequeue the first packet from the queue.
  • Communication bus 502 is used to transfer information between processor 501 , network interface 504 and memory 503 .
  • the memory 503 may be a read-only memory (Read-only Memory, ROM) or another type of static storage device that can store static information and instructions; the memory 503 may also be a random access memory (Random Access Memory, RAM) or another type of dynamic storage device that can store information and instructions; it may also be a compact disc read-only memory (Compact Disc Read-only Memory, CD-ROM) or other optical disc storage (including compact discs, laser discs, optical discs, digital versatile discs, Blu-ray discs, and the like), a magnetic disk storage medium or other magnetic storage device, or any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer, but is not limited thereto.
  • the memory 503 may exist independently, and is connected to the processor 501 through the communication bus 502 .
  • the memory 503 can also be integrated with the processor 501 .
  • the memory 503 is used to store program codes or instructions for executing the technical solutions provided by the embodiments of the present application, and the execution is controlled by the processor 501 .
  • the processor 501 is used to execute program codes or instructions stored in the memory 503 .
  • One or more software modules may be included in the program code.
  • the processor 501 may also store program codes or instructions for executing the technical solutions provided by the embodiments of the present application. In this case, the processor 501 does not need to read the program codes or instructions from the memory 503 .
  • the network interface 504 can be a device such as a transceiver for communicating with other devices or a communication network, and the communication network can be Ethernet, radio access network (RAN) or wireless local area network (Wireless Local Area Networks, WLAN). In the embodiment of the present application, the network interface 504 may be used to receive messages sent by other nodes in the segment routing network, and may also send messages to other nodes in the segment routing network.
  • the network interface 504 may be an Ethernet interface (Ethernet), a Fast Ethernet (Fast Ethernet, FE) interface or a Gigabit Ethernet (Gigabit Ethernet, GE) interface, etc.
  • the device 500 may include multiple processors, for example, the processor 501 and the processor 505 shown in FIG. 5 .
  • processors may be a single-core (single-CPU) processor or a multi-core (multi-CPU) processor.
  • a processor herein may refer to one or more devices, circuits, and/or processing cores for processing data (eg, computer program instructions).
  • FIG. 6 is a schematic structural diagram of a device 600 provided in an embodiment of the present application.
  • the first device in FIG. 2 or FIG. 3 may be implemented by the device shown in FIG. 6 .
  • the device 600 includes a main control board and one or more interface boards.
  • the main control board is communicatively connected with the interface board.
  • the main control board is also called a main processing unit (Main Processing Unit, MPU) or a route processing card (Route Processor Card).
  • the main control board includes a CPU and a memory, and is responsible for route calculation and device management and maintenance functions.
  • the interface board is also called a line processing unit (Line Processing Unit, LPU) or a line card (Line Card), which is used to receive and send packets.
  • the communication between the main control board and the interface board or between the interface board and the interface board is through a bus.
  • the interface boards communicate through a switching fabric board.
  • the device 600 also includes a switching fabric board.
  • the switching fabric board communicates with the main control board and the interface board.
  • the switching fabric board is used to forward data between interface boards, and may also be called a switch fabric unit (Switch Fabric Unit, SFU).
  • the interface board includes a CPU, a memory, a forwarding engine, and an interface card (Interface Card, IC), where the interface card may include one or more network interfaces.
  • the network interface may be an Ethernet interface, an FE interface, or a GE interface.
  • the CPU communicates with the memory, the forwarding engine and the interface card respectively.
  • the memory is used to store the forwarding table.
  • the forwarding engine is used to forward the received message based on the forwarding table stored in the memory. If the destination address of the received message is the IP address of the device 600, the message is sent to the CPU of the main control board or the interface board for further processing; if the destination address of the received message is not the IP address of the device 600, the forwarding table is looked up according to the destination address, and if the next hop and the outgoing interface corresponding to the destination address are found in the forwarding table, the message is forwarded to the outgoing interface corresponding to the destination address.
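The forwarding decision just described can be sketched as follows. This is an illustrative outline only; the FIB layout, field names, and return values are hypothetical, and real forwarding engines perform longest-prefix matching rather than an exact-key lookup.

```python
def forward(packet, device_ip, fib):
    """Sketch of the forwarding-engine decision for one packet.

    fib maps a destination address to (next_hop, outgoing_interface).
    """
    if packet["dst"] == device_ip:
        return "to_cpu", None            # local delivery: punt to the CPU
    entry = fib.get(packet["dst"])
    if entry is None:
        return "no_route", None          # no matching forwarding entry
    next_hop, out_if = entry
    return "forward", out_if             # send via the matched interface


FIB = {"10.0.0.2": ("10.0.1.1", "GE0/0/1")}
print(forward({"dst": "10.0.0.2"}, "10.0.0.1", FIB))  # ('forward', 'GE0/0/1')
print(forward({"dst": "10.0.0.1"}, "10.0.0.1", FIB))  # ('to_cpu', None)
```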
  • the forwarding engine may be a network processor (Network Processor, NP).
  • the interface card, also called a daughter card, can be installed on the interface board; it is responsible for converting optical and electrical signals into data frames, checking the validity of the data frames, and forwarding them to the forwarding engine or the interface board CPU for processing.
  • the CPU can also perform the function of the forwarding engine, such as implementing soft forwarding based on a general-purpose CPU, so that no forwarding engine is needed in the interface board.
  • the forwarding engine may be implemented by ASIC or Field Programmable Gate Array (Field Programmable Gate Array, FPGA).
  • the memory storing the forwarding table can also be integrated into the forwarding engine as a part of the forwarding engine.
  • the embodiment of the present application also provides a chip system, including: a processor coupled with a memory, where the memory is used to store programs or instructions, and when the programs or instructions are executed by the processor, the chip system implements the packet scheduling method performed by the first device in the embodiment shown in FIG. 2 above.
  • there may be one or more processors in the chip system.
  • the processor can be realized by hardware or by software.
  • the processor may be a logic circuit, an integrated circuit, or the like.
  • the processor may be a general-purpose processor implemented by reading software codes stored in a memory.
  • the memory can be integrated with the processor, or can be set separately from the processor, which is not limited in this application.
  • the memory may be a non-transitory memory, such as a read-only memory (ROM); it can be integrated with the processor on the same chip, or arranged on a different chip.
  • the type of the memory and the arrangement of the memory and the processor are not specifically limited in this application.
  • the chip system may be an FPGA, an ASIC, a system on chip (System on Chip, SoC), a CPU, an NP, a digital signal processor (Digital Signal Processor, DSP), a micro controller unit (Micro Controller Unit, MCU), a programmable logic device (Programmable Logic Device, PLD), or another integrated chip.
  • each step in the foregoing method embodiments may be implemented by an integrated logic circuit of hardware in a processor or instructions in the form of software.
  • the method steps disclosed in connection with the embodiments of the present application may be directly implemented by a hardware processor, or implemented by a combination of hardware and software modules in the processor.
  • the embodiment of the present application also provides a computer-readable storage medium, including instructions, which, when run on a computer, cause the computer to execute the packet scheduling method provided by the above method embodiment and performed by the first device.
  • the embodiment of the present application also provides a computer program product including instructions, which, when running on a computer, causes the computer to execute the packet scheduling method provided by the above method embodiment and performed by the first device.
  • the disclosed system, device and method can be implemented in other ways.
  • the device embodiments described above are only illustrative.
  • the division of the units is only a logical module division.
  • multiple units or components may be combined or integrated into another system, or some features may be ignored or not implemented.
  • the mutual coupling or direct coupling or communication connection shown or discussed may be through some interfaces, and the indirect coupling or communication connection of devices or units may be in electrical, mechanical or other forms.
  • the units described as separate components may or may not be physically separated, and the components shown as units may or may not be physical units, that is, they may be located in one place, or may be distributed to multiple network units. Part or all of the units can be obtained according to actual needs to achieve the purpose of the solution of this embodiment.
  • each module unit in each embodiment of the present application may be integrated into one processing unit, each unit may exist separately physically, or two or more units may be integrated into one unit.
  • the above-mentioned integrated units can be implemented in the form of hardware or in the form of software module units.
  • the integrated unit is implemented in the form of a software module unit and sold or used as an independent product, it can be stored in a computer-readable storage medium.
  • the technical solution of the present application, in essence, or the part that contributes to the prior art, or all or part of the technical solution, may be embodied in the form of a software product; the computer software product is stored in a storage medium and includes several instructions to cause a computer device (which may be a personal computer, a server, a network device, etc.) to execute all or part of the steps of the methods described in the embodiments of the present application.
  • the aforementioned storage media include: a USB flash drive, a removable hard disk, a read-only memory (ROM, Read-Only Memory), a random access memory (RAM, Random Access Memory), a magnetic disk, an optical disc, or other media that can store program code.
  • the functions described in the present invention may be implemented by hardware, software, firmware or any combination thereof.
  • the functions may be stored on or transmitted over as one or more instructions or code on a computer-readable medium.
  • Computer-readable media include both computer storage media and communication media, where communication media include any medium that facilitates transfer of a computer program from one place to another.
  • a storage medium may be any available medium that can be accessed by a general purpose or special purpose computer.

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

Embodiments of this application disclose a packet scheduling method and apparatus, which are used to reduce the frequency of token bucket switching. The packet scheduling method includes: when the number of remaining tokens in a first token bucket of a first device does not satisfy dequeuing of a first packet, the first device determines whether the length of the packets buffered in the queue is less than a first threshold; and in response to the length of the packets buffered in the queue being not less than the first threshold, the first device grants the first packet a token of a second token bucket and dequeues the first packet from the queue.

Description

一种报文调度的方法及装置
本申请要求于2021年10月28日提交中国国家知识产权局、申请号为202111266385.2、发明名称为“一种报文调度的方法及装置”的中国专利申请的优先权,其全部内容通过引用结合在本申请中。
技术领域
本申请涉及通信技术领域,尤其涉及一种报文调度的方法及装置。
背景技术
服务质量(quality of service,QoS)是反映网络状态的指标,网络状态包括网络延迟、网络阻塞等状态。为了保证网络的QoS,可以将网络设备接收的报文加入队列,并按照一定的策略将队列中缓存的报文排出队列。
结合令牌桶(Token-Bucket)技术对队列中存储的报文进行调度,可以对流量的突发进行整形。具体地,可以通过限制令牌注入令牌桶的速率,起到限制报文排出队列的速率的作用。如果报文加入队列的速率达到或超过令牌注入令牌桶的速率,队列中的报文可以被暂缓调度。如此,实现了对突发的整形。
进一步地,通过多个令牌桶的组合,可以进一步降低突发的影响。具体地,如果报文加入队列的速率较慢,可以通过一个令牌桶调度报文排出队列,如果报文加入队列的速率较快,可以通过多个令牌桶调度报文排出队列。但是,如果利用多个令牌桶对队列中的报文进行调度,可能存在令牌桶切换频繁的问题。
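为便于理解,下面给出一个极简的单令牌桶示意(Python 草图;速率、容量取值与命名均为说明用的假设,并非任何实际设备的实现):

```python
class TokenBucket:
    """极简令牌桶:按固定速率注入令牌,报文出队时按字节数消耗令牌。"""

    def __init__(self, rate, capacity):
        self.rate = rate          # 每秒注入的令牌数(假设 1 令牌对应 1 字节)
        self.capacity = capacity  # 桶能够容纳的令牌上限(门限)
        self.tokens = capacity    # 剩余令牌数量

    def refill(self, elapsed):
        # 按经过的时间注入令牌,达到门限后不再增加
        self.tokens = min(self.capacity, self.tokens + self.rate * elapsed)

    def try_consume(self, nbytes):
        # 剩余令牌满足报文出队时扣除令牌并返回 True,否则返回 False
        if self.tokens >= nbytes:
            self.tokens -= nbytes
            return True
        return False
```

例如,以 1000 令牌/秒注入、容量 1500 的令牌桶,在令牌耗尽后等待 1 秒,即可再次调度一个 1000 字节的报文出队:报文入队速率超过注入速率时,报文只能被暂缓调度,从而实现对突发的整形。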
发明内容
本申请实施例提供了一种报文调度的方法及装置,旨在降低令牌桶切换的频率。
第一方面,本申请实施例提供了一种报文调度的方法,该方法可以应用于第一设备,第一设备可以是路由器、交换机等网络设备,也可以是其他用于对报文进行调度的设备。所述方法包括如下步骤:在对第一报文进行调度之前,第一设备可以先判断第一设备的第一令牌桶的剩余令牌的数量是否满足第一报文排出队列。如果第一令牌桶的剩余令牌的数量不满足第一报文排出队列,第一设备可以判断队列中缓存的报文长度是否小于第一阈值。其中,第一阈值为第一设备启用第二令牌桶的门限值。如果队列中缓存的报文长度大于或等于第一阈值,第一设备确定队列中缓存的报文长度达到启用第二令牌的门限值,那么第一设备可以启用第二令牌桶对第一报文进行调度。第一设备可以为第一报文发放第二令牌桶的令牌,并将第一报文排出队列。这样,除了第一令牌桶的状态以外,第一设备还可以根据队列中缓存的报文长度,判断是否采用第二令牌桶调度第一报文出队。如此,通过判断队列中缓存的报文长度是否小于第一阈值,可以避免第一设备在第一令牌桶的剩余令牌的数量不足的情况下直接通过第二令牌桶调度报文排出队列,避免第一令牌桶和第二令牌桶的频繁切换,从而提高第一令牌桶对应的处理方式的利用率。
在一种可能的设计中,第一令牌桶和第二令牌桶可以对应不同的路径。具体地,第一令牌桶可以对应第二路径,第二令牌桶可以对应第一路径。第一路径为链路质量低于第二路径的链路质量的路径。那么,在为第一报文发放第二令牌桶的令牌并将第一报文排出队 列之后,第一设备可以通过第一路径发送第一报文。这样,在报文加入队列的速度较低的情况下,第一令牌桶的剩余令牌的数量满足第一报文排出队列,第一设备可以通过链路质量较高的第二路径发送第一报文。在报文加入队列的速度较高的情况下,第一令牌桶的剩余令牌的数量不满足第一报文排出队列,且队列中缓存的报文长度不小于第一阈值,第一设备可以通过链路质量较低的第一路径发送第一报文。
在一种可能的设计中,第一路径所在的接入网络的网络类型和第二路径所在的接入网络的网络类型不同。即,本申请实施例提供的报文调度的方法可以应用于混合接入的应用场景。
在一种可能的设计中,第一令牌桶和第二令牌桶可以对应不同的转发优先等级。具体地,第一令牌桶可以对应第一优先级,第二令牌桶可以对应第二优先级。第一优先级的转发优先等级可以低于第二优先级的转发优先等级。那么,在为第一报文发放第二令牌桶的令牌并将第一报文排出队列之后,第一设备可以根据第一优先级发送第一报文。这样,在报文加入队列的速度较低的情况下,第一令牌桶的剩余令牌的数量满足第一报文排出队列,第一设备可以通过转发优先等级较高的第二优先级发送第一报文。在报文加入队列的速度较高的情况下,第一令牌桶的剩余令牌的数量不满足第一报文排出队列,且队列中缓存的报文长度不小于第一阈值,第一设备可以通过转发优先等级较低的第一优先级发送第一报文。
在一些可能的设计中,在第一令牌桶不满足报文排出队列,且队列中缓存的报文长度小于第一阈值的情况下,第一设备可以将报文保持在队列中。具体地,在调度第二报文出队之前,第一设备可以判断第一令牌桶的剩余令牌的数量是否满足第二报文排出队列。如果第一令牌桶的剩余令牌的数量不满足第二报文排出队列,第一设备可以进一步判断队列中缓存的报文长度是否小于第一阈值。如果队列中缓存的报文长度小于第一阈值,说明队列中缓存的报文长度小于起用第二令牌桶的门限值,第一设备不利用第二令牌桶调度第二报文出队。那么第一设备可以将第二报文保持在队列中。可选地,如果第一令牌桶的剩余令牌的数量满足第二报文排出队列,或队列中缓存的报文长度大于或等于第一阈值,第一设备可以调度第二报文排出队列。
在一些可能的设计中,如果第一令牌桶的剩余令牌能够满足第二队列排出队列,第一设备可以为第二报文发放第一令牌桶的令牌,并将第二报文排出队列。可以理解的是,无论队列中缓存的报文长度与第一阈值的大小关系,如果第一令牌桶的剩余令牌能够满足第二队列排出队列,第一设备可以根据第一令牌桶调度第二报文排出队列。如此,第一设备可以优先使用第一令牌桶调度报文排出队列,提高第一令牌桶对应的处理方法的利用率。
在一些可能的设计中,如果队列中缓存的报文长度大于或等于第一阈值,且第一令牌桶的剩余令牌的数量不满足第二报文排出队列,第一设备可以为第二报文发放第二令牌桶的令牌,并调度第二报文排出队列。
在一些可能的设计中,如果第二报文被发放了第一令牌桶的令牌,第一设备可以根据前述第二路径发送第二报文。或者,如果第二报文被发放了第一令牌桶的令牌,第一设备可以根据前述第二优先级发送第二报文。
在一些可能的设计中,第一设备可以在满足触发条件的情况下判断第一令牌桶的剩余令牌的数量是否满足第一报文排出队列。其中,触发条件可以包括第一令牌桶的令牌数量改变、第二令牌桶的令牌数量改变和第三报文加入队列中的任意一种或多种。
在一些可能的设计中,所述第一设备包括宽带接入服务器(Broadband Remote Access Server,BRAS)。
第二方面,本申请实施例提供了一种报文调度的装置,所述装置应用于第一设备,包括:判断单元,用于当第一令牌桶的剩余令牌的数量不满足第一报文排出队列,确定所述队列中缓存的报文长度是否小于第一阈值;调度单元,用于响应于所述队列中缓存的报文长度不小于所述第一阈值,为所述第一报文发放第二令牌桶的令牌,并将所述第一报文排出所述队列。
在一些可能的设计中,所述装置还包括发送单元,所述发送单元,用于通过第一路径发送所述第一报文,所述第一路径是所述第二令牌桶对应的转发路径,所述第一路径的链路质量低于第二路径的链路质量,所述第二路径是所述第一令牌桶对应的转发路径。
在一些可能的设计中,所述第一路径所在的接入网络的网络类型与所述第二路径所在的接入网络的网络类型不同。
在一些可能的设计中,所述装置还包括发送单元,所述发送单元,用于根据第一优先级发送所述第一报文,所述第一优先级是所述第二令牌桶对应的转发优先等级,所述第一优先级的转发优先等级低于第二优先级的转发优先等级,所述第二优先级的转发优先等级是所述第一令牌桶对应的转发优先等级。
在一些可能的设计中,所述判断单元,还用于当所述第一令牌桶的剩余令牌的数量不满足第二报文排出队列,确定所述队列中缓存的报文长度是否小于所述第一阈值;所述调度单元,还用于响应于所述队列中缓存的报文长度小于所述第一阈值,在所述第一令牌桶的剩余令牌的数量能够满足所述第二报文排出所述队列之前,将所述第二报文保持在所述队列中。
在一些可能的设计中,所述判断单元,还用于确定所述第一令牌桶的剩余令牌的数量满足所述第二报文排出队列;所述调度单元,还用于为所述第二报文发放所述第一令牌桶的令牌,并将所述第二报文排出队列。
在一些可能的设计中,所述装置还包括发送单元,所述发送单元,用于通过第二路径发送所述第二报文,所述第二路径是所述第一令牌桶对应的转发路径。
在一些可能的设计中,所述装置还包括发送单元,所述发送单元,用于根据第二优先级发送所述第二报文,所述第二优先级是所述第一令牌桶对应的转发优先等级。
在一些可能的设计中,所述调度单元,还用于响应于所述第三报文加入队列,判断所述第一令牌桶的剩余令牌的数量是否满足第一报文排出队列。
第三方面,本申请实施例提供了一种第一设备,所述第一设备包括处理器和存储器,所述存储器用于存储指令或程序代码,所述处理器用于从存储器中调用并运行所述指令或程序代码,以执行如前述第一方面所述的报文调度的方法。
第四方面,本申请实施例提供了一种芯片,包括存储器和处理器,存储器用于存储指令或程序代码,处理器用于从存储器中调用并运行该指令或程序代码,以执行如前述第一方面所述的报文调度的方法。
第五方面,本申请实施例提供了一种计算机可读存储介质,包括指令、程序或代码,当其在计算机上执行时,使得所述计算机执行如前述第一方面所述的报文调度的方法。
附图说明
图1为本申请实施例提供的系统100的一种架构示意图;
图2为本申请实施例提供的报文调度的方法的一种流程示意图;
图3为本申请实施例提供的报文调度的方法的一种方法流程图;
图4为本申请实施例提供的一种报文调度的装置400的结构示意图;
图5为本申请实施例提供的一种设备500的结构示意图;
图6为本申请实施例提供的一种设备600的结构示意图。
具体实施方式
下面结合附图对传统技术和本申请实施例提供的报文调度的方法进行介绍。
参见图1,该图为本申请实施例提供的系统100的一种结构示意图。在系统100中,包括设备111、设备112、网络设备121、网络设备122、网络设备123、网络设备124和网络设备125。其中,网络设备121分别与设备111、网络设备122和网络设备123连接,网络设备124分别与网络设备123和网络设备125连接,设备112分别与网络设备122和网络设备125连接。其中,网络设备121通过网络接口A1与设备111连接,网络设备121通过网络接口A2与网络设备122连接,网络设备121通过网络接口A3与网络设备123连接。通过系统中的网络设备,设备之间可以相互发送报文。例如,设备111可以通过路径“网络设备121→网络设备122”向设备112发送报文,也可以通过路径“网络设备121→网络设备123→网络设备124→网络设备125”向设备112发送报文。
为了避免出现流量突发,可以在网络设备121中配置一个或多个令牌桶,用来判断网络设备121的队列中的报文能否被排出队列。如果令牌桶的剩余令牌的数量能够满足报文出队,可以将报文从队列中调度排出队列,并根据该被排出队列的报文对应的令牌桶对报文进行后续处理。
例如,假设网络设备121包括第一令牌桶和第二令牌桶,在调度报文X出队的过程中,网络设备121可以先判断第一令牌桶的剩余令牌的数量是否满足报文X排出队列。如果第一令牌桶的剩余令牌的数量满足报文X排出队列,网络设备121可以为报文X发放第一令牌桶的令牌,并将报文X排出队列。如果第一令牌桶的剩余令牌的数量不满足报文X排出队列,网络设备121可以判断第二令牌桶的剩余令牌的数量是否满足报文X排出队列。如果第二令牌桶的剩余令牌的数量满足报文X排出队列,网络设备121可以为报文X发放第二令牌桶的令牌,并将报文X排出队列。
在报文X排出队列之后,可以根据报文X被发放的令牌对报文X进行处理。例如,在图1所示的应用场景中,如果报文X为设备111向设备112发送的报文,网络设备121可以根据报文X的令牌确定报文的转发路径。如果报文X被发放了第一令牌桶的令牌,网络设备121可以通过路径“网络设备121→网络设备122”发送报文X;如果报文X被发放了 第二令牌桶对应的令牌,网络设备121可以通过路径“网络设备121→网络设备123→网络设备124→网络设备125”发送报文X。
这样,如果网络设备121在较短时间内接收到了较多的报文,网络设备121可以通过两条不同的路径转发报文。如此,可以减少流量突发对网络系统的影响。可以理解的是,即使网络设备121不通过两条不同的路径转发报文,网络设备121仍然可以对被发放第一令牌桶的令牌的报文和被发放第二令牌桶的令牌的报文进行差异化处理,实现流量整形的目标。
但是,传统的采用多个令牌桶进行报文调度的方法可能存在令牌桶切换频繁的问题,进而可能导致令牌桶对应的处理方法被调用的较为频繁,对网络系统产生影响。具体地,如果不同令牌桶对应不同的转发路径,那么频繁的令牌桶切换可能导致路径的实际利用率低于路径能够达到的利用率。
例如,在上述应用场景中,网络设备121可以在报文入队的速率大于令牌注入第一令牌桶的速率的情况下,通过第二令牌桶调度报文出队;在报文入队的速率小于或等于令牌注入第一令牌桶的速率的情况下,通过第一令牌桶调度报文出队。那么,如果报文入队的速率在令牌注入第一令牌桶的速率上下波动,网络设备121在为报文发放令牌桶的令牌时会反复在第一令牌桶和第二令牌桶之间进行多次切换。如果第一令牌桶的令牌和第二令牌桶的令牌分别对应不同的路径,可能导致网络设备121对路径的利用率低于该路径实际能够达到的利用率。
假设令牌以速率V注入网络设备121的第一令牌桶;在t1时刻到t2时刻之间,报文以0.5V的速率加入网络设备121的队列;在t2时刻到t3时刻之间,报文以1.5V的速率加入网络设备121的队列;在t3时刻到t4时刻之间,报文以0.5V的速率加入网络设备121的队列。那么相应地,在t1时刻到t2时刻之间,网络设备121可以通过路径“网络设备121→网络设备122”转发报文,且转发报文的速率与0.5V相对应;在t2时刻到t3时刻之间,网络设备121可以通过路径“网络设备121→网络设备122”转发部分报文,并通过路径“网络设备121→网络设备123→网络设备124→网络设备125”转发剩余的部分报文,且通过路径“网络设备121→网络设备122”转发报文的速率与V相对应,通过路径“网络设备121→网络设备123→网络设备124→网络设备125”转发报文的速率与0.5V相对应;在t3时刻到t4时刻之间,网络设备121可以通过路径“网络设备121→网络设备122”转发报文,且转发报文的速率与0.5V相对应。
可见,由于t1时刻到t2时刻之间,以及t3时刻到t4时刻之间,网络设备121通过路径“网络设备121→网络设备122”转发报文的速率与0.5V相对应,在t1时刻到t4时刻的过程中,网络设备121通过路径“网络设备121→网络设备122”转发报文的平均速率低于V对应的速率。也就是说,在t1时刻到t4时刻的过程中,网络设备121对路径“网络设备121→网络设备122”的利用率并未达到100%,但是网络设备121却可能通过其他路径转发报文,导致网络设备121对路径“网络设备121→网络设备122”的实际利用率低于最大利用率。
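上述示例中主路径的平均利用率可以用一段简单的计算核验(Python;将 V 归一化为 1,并假设三个时段等长):

```python
V = 1.0  # 令牌注入第一令牌桶的速率(归一化)
# 三个等长时段内,路径“网络设备121→网络设备122”的转发速率
primary_rates = [0.5 * V, 1.0 * V, 0.5 * V]
avg = sum(primary_rates) / len(primary_rates)
assert avg < V  # 平均速率为 2V/3,低于该路径能够达到的速率 V
```

即该路径在 t1 到 t4 期间的平均利用率仅约 67%,与正文结论一致:主路径尚未被充分利用时,部分报文已被切换到其他路径转发。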
可以理解的是,即使不同令牌桶对应相同的转发路径,如果不同令牌桶对应的处理方式不同,传统的报文调度方法仍然可能存在令牌桶频繁切换的问题,可能导致部分令牌桶对应的处理方式的实际利用率低于能够达到的理论利用率。
为了解决上述提及的令牌桶切换频繁问题,本申请实施例提供了一种报文调度的方法及装置,旨在降低令牌桶切换的频率,进而提高令牌桶对应的处理方式的利用率。
本申请实施例提供的报文调度的方法可以应用于图1所示的系统。具体地,可以由图1所示实施例中网络设备121、网络设备122、网络设备123、网络设备124和网络设备125中任意一个或多个网络设备执行。其中,所述网络设备可以是具有转发功能的设备,比如:路由器(router)、交换机(switch)等转发设备,还可以是服务器或者终端设备等具有转发功能的设备。具体地,假设本申请实施例提供的报文调度方法由网络设备121执行,该方法例如可以用于对网络设备121通过网络端口A1接收的报文进行调度。可选地,所述设备111和设备112可以是终端设备,也可以是服务器或其他设备。
在一些可能的实现方式中,本申请实施例提供的报文调度方法也可以由接入设备执行。例如,该方法可以由具有BRAS功能的接入设备执行,用于对接入设备接收的、来自终端设备的报文进行调度。
参见图2,该图为本申请实施例提供的报文调度的方法的一种流程示意图,包括:
S201:当第一设备的第一令牌桶的剩余令牌的数量不满足第一报文排出队列,第一设备确定队列中缓存的报文长度是否小于第一阈值。
其中,第一设备可以是网络系统中的网络设备或接入设备。例如,在图1所示的实施例中,第一设备可以是网络设备121、网络设备122、网络设备123、网络设备124和网络设备125中的任意一个。在本申请实施例中,第一设备至少具有第一令牌桶和第二令牌桶两个令牌桶,还具有用于缓存报文的队列。下面分别对令牌桶和队列进行介绍。
第一令牌桶为第一设备所具有的令牌桶,可以用于存储令牌。可选地,第一令牌桶所存储的令牌可以被称为第一令牌。第一设备可以根据第一预设速率向第一令牌桶注入第一令牌。可选地,第一令牌桶可以具有第一门限,表示第一令牌桶能够容纳的令牌的最大数量。在第一令牌桶的剩余令牌的数量达到第一门限之后,第一令牌桶的剩余令牌的数量不再继续增加。
与第一令牌桶类似,第二令牌桶同样为第一设备具有的、用于存储令牌的令牌桶。第二令牌桶所存储的令牌可以被称为第二令牌,第二令牌桶能够容纳的令牌的最大数量可以被称为第二门限,第一设备向第二令牌桶注入第二令牌的速率可以被称为第二预设速率。可以理解的是,假设不同的令牌桶可以对应不同的转发优先等级,那么第二令牌桶的转发优先等级可以低于第一令牌桶的转发优先等级。在调度报文排出队列的过程中,第一设备可以优先通过第一令牌桶调度报文排出队列。
可选地,在一些可能的实现方式中,第一令牌桶可以被称为C令牌桶,第二令牌桶可以被称为P令牌桶。
需要说明的是,本申请实施例中涉及的“令牌桶”和“令牌”等技术特征可以是虚拟 概念,并不代表实体的桶或令牌。例如,在一些可能的实现方式中,可以通过一个浮点数型或整数型的变量表示第一令牌桶的剩余令牌的数量或第二令牌桶的剩余令牌的数量。可选地,前述第一门限和第二门限可以相同,也可以不同;前述第一预设速率和第二预设速率可以相同,也可以不同。
第一设备中缓存报文的队列又可以被称为缓存队列。在第一设备通过网络接口接收到其他设备发送的报文之后,可以先将报文加入缓存队列。缓存队列中存储的报文可以通过第一令牌桶或第二令牌桶排出队列。可选地,缓存队列可以具有队列上限。当缓存队列中缓存的报文量达到队列上限之后,第一设备不再将新接收的报文加入缓存队列。例如第一设备可以将缓存队列中缓存的报文量达到队列上限之后接收的报文丢弃。其中,队列上限可以由第一设备中为缓存队列分配的存储空间决定,也可以由技术人员根据实际应用情况进行划分。报文量可以表示报文的数量或报文的字节总数。
可选地,在接收到新的报文之后,第一设备可以将报文添加在队列的尾部。而在调度报文排出队列的过程中,第一设备可以优先对位于队列头部的报文进行调度。也就是说,第一设备可以根据报文加入队列的时间对报文进行调度,报文加入队列的时间越早,报文被排出队列的时间越早。相应地,后文所述的“第一报文”可以是队列中位于队列头部的首个报文。
在第一设备确定满足判断触发条件之后,第一设备可以判断第一令牌桶的剩余令牌的数量是否满足第一报文排出队列。其中,判断触发条件表示存在将第一报文排出队列的可能。具体地,判断触发条件可以包括第一令牌桶的剩余令牌的数量增加、第二令牌桶的剩余令牌的数量增加、网络设备将新的报文加入队列和网络设备从队列中排出报文中的任意一种或多种。可选地,第一令牌桶的剩余令牌的数量满足第一报文排出队列可以包括第一令牌桶的剩余令牌的数量大于或等于第一报文的字节数。
在本申请实施例中,第一设备可以根据判断结果执行相应的操作,下面分别对两种判断结果对应的两种可能的实现方式进行介绍。
在第一种可能的实现方式中,第一设备确定第一令牌桶的剩余令牌的数量不能够满足第一报文排出队列的条件。那么第一设备可以判断队列中缓存的报文长度是否小于第一阈值,并根据判断结果对第一报文进行调度。其中,第一令牌桶的剩余令牌的数量不能够满足第一报文排出队列的条件可以包括第一令牌桶的剩余令牌的数量小于第一报文的字节数。
在本申请实施例中,第一阈值可以是启用第二令牌桶的阈值,在使用第二令牌桶调度报文排出队列之前,第一设备可以先确定队列中缓存的报文长度大于第一阈值。可选地,队列中缓存的报文长度可以包括队列中缓存的全部报文的字节总数,那么第一阈值的单位可以为字节(Byte)。在一些可能的实现方式中,队列中缓存的报文长度也可以包括队列中缓存的报文的总数量。
在第二种可能的实现方式中,第一设备确定第一令牌桶的剩余令牌的数量能够满足第一报文排出队列的条件。那么第一设备可以为第一报文发放第一令牌桶的令牌,并将第一令牌桶排出队列。其中,为第一报文发放第一令牌桶的令牌,是为了在后续处理过程中采 用与第一令牌桶相对应的处理方式对第一报文进行处理。可选地,为第一报文发放第一令牌桶的令牌可以包括为第一报文添加第一令牌桶对应的标记,也可以包括将第一报文记录为通过第一令牌桶调度出队的报文。关于第一令牌桶的令牌对应的处理方法的介绍可以参见后文,这里不再赘述。
可选地,如果为第一报文发放了第一令牌桶的令牌,第一设备可以在第一报文被排出队列后,根据第一报文调整第一令牌桶的剩余令牌的数量。例如,第一设备可以从第一令牌桶中移除部分或全部令牌,且被移除的令牌的数量可以等于目标报文的字节数。假设令牌的数量表示能够被调度的字节数,那么在调度字节数为i的报文出队之后,第一设备可以从第一令牌桶中移除i个令牌。
S202:响应于队列中缓存的报文长度不小于第一阈值,第一设备为第一报文发放第二令牌桶的令牌,并将第一报文排出队列。
如果第一设备确定队列中缓存的报文长度不小于第一阈值,第一设备可以为第一报文发放第二令牌桶的令牌,并将第一报文排出队列。其中,第一阈值是第一设备启用第二令牌桶进行报文调度的门限值。如果队列中缓存的报文长度大于或等于第一阈值,第一设备可以启用第二令牌桶调度队列中缓存的报文出队。可选地,第一阈值可以表示可能导致网络拥塞的报文长度,例如可以取前述队列门限的50%。或者,所述第一阈值可以根据第一令牌桶对应的处理方式能够容忍的突发限度确定。也就是说,第一令牌桶对应的处理方式能够容忍存在小于或等于第一阈值的报文未能得到调度。
可选地,队列中缓存的报文长度可以指队列中缓存的报文的数量,也可以指队列中缓存的报文的字节总数。
在本申请实施例中,第一设备为第一报文发放第二令牌桶的令牌,是为了在后续处理过程中采用与第二令牌桶相对应的处理方式对第一报文进行处理。可选地,为第一报文发放第二令牌桶的令牌可以包括为第一报文添加第二令牌桶对应的标记,也可以包括将第一报文记录为通过第二令牌桶调度出队的报文。关于第二令牌桶的令牌对应的处理方法的介绍可以参见后文,这里不再赘述。
与第一令牌桶的令牌类似,如果为第一报文发放了第二令牌桶的令牌,第一设备可以在第一报文被排出队列后,根据第一报文调整第二令牌桶的剩余令牌的数量。
在一些可能的情况中,第二令牌桶的剩余令牌的数量可能不满足第一报文排出队列,例如第二令牌桶的剩余的第二令牌的数量可能小于第一报文的字节数。那么对于这种情况,第一设备可以不将第一报文排出队列。也就是说,在第一令牌桶的剩余令牌的数量和第二令牌桶的剩余令牌的数量均不满足第一报文排出队列的情况下,第一设备可以将第一报文保持在队列中。由于令牌桶的剩余令牌的数量可以随着时间逐渐增加,第一设备可以等待第一令牌桶(或第二令牌桶)的剩余令牌的数量累积到能够满足第一报文排出队列的条件之后,再将第一报文排出队列。
在本申请实施例中,第一阈值是第一设备启用第二令牌桶的门限值。那么,如果第一设备的队列中缓存的报文长度低于第一阈值,说明队列未达到第一设备启用第二令牌桶的门限值,第一设备可以不通过第二令牌桶调度报文出队。可以理解的是,即使第一令牌桶 的剩余令牌的数量不满足第一报文排出队列的条件,如果队列中缓存的报文长度低于第一阈值,第一设备可以不调度第一报文排出队列,将第一报文保持在缓存队列中。也就是说,除了第一令牌桶的状态以外,第一设备还可以根据队列中缓存的报文长度,判断是否采用第二令牌桶调度第一报文出队。如此,通过判断队列中缓存的报文长度是否小于第一阈值,可以避免第一设备在第一令牌桶的剩余令牌的数量不足的情况下直接通过第二令牌桶调度报文排出队列,避免第一令牌桶和第二令牌桶的频繁切换,从而提高第一令牌桶对应的处理方式的利用率。
举例说明,假设报文加入队列的速率在以第一预设速率为基础进行上下波动。当报文加入队列的速率高于第一预设速率时,第一令牌桶的剩余令牌的数量可能不满足报文排出队列,那么第一设备可以将入队但无法得到调度的报文保持在队列中。在报文加入队列的速率低于第一预设速率时,加入队列的报文无法充分消耗注入第一令牌桶的令牌,那么第一令牌桶的剩余令牌可以被用于调度队列中缓存的报文出队,从而减少队列中缓存的报文量。这样,相当于利用报文加入队列的速率较低时注入第一令牌桶的令牌,对报文加入队列的速率较高时,队列中无法得到调度的报文进行调度。如此,通过队列作为缓冲,避免了第一令牌桶和第二令牌桶之间的频繁切换,实现了对报文流的流量整形。
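图2所示的判断逻辑(优先检查第一令牌桶,令牌不足时再比较队列长度与第一阈值)可以概括为如下草图(Python;函数名、参数名与返回值均为示意性假设):

```python
def schedule_first_packet(c_tokens, p_tokens, pkt_len, queue_len, threshold):
    """返回为队首报文发放哪个令牌桶的令牌:'C'、'P' 或 None(保持在队列中)。

    c_tokens / p_tokens:第一 / 第二令牌桶的剩余令牌数
    pkt_len:第一报文的字节数;queue_len:队列中缓存的报文长度
    threshold:启用第二令牌桶的第一阈值
    """
    if c_tokens >= pkt_len:       # 优先使用第一令牌桶调度报文出队
        return 'C'
    if queue_len >= threshold and p_tokens >= pkt_len:
        return 'P'                # 队列积压达到门限,才启用第二令牌桶
    return None                   # 否则将报文保持在队列中,等待令牌累积
```

例如,队列积压 3000 字节、阈值 4000 字节时,即使第一令牌桶令牌不足,也不会切换到第二令牌桶,报文被保持在队列中,从而避免两个令牌桶之间的频繁切换。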
上面介绍了第一设备调度第一报文排出队列的方法,下面介绍第一报文被排出队列之后,第一设备根据第一报文被发放的令牌对第一报文进行处理的方法。
在本申请实施例中,第一设备可以根据第一报文被发放的令牌确定第一报文的转发路径,或者可以根据第一报文被发放的令牌确定第一报文的转发优先等级。下面分别对这两种实现方式进行介绍。
在第一种可能的实现方式中,第一设备根据第一报文被发放的令牌确定第一报文的转发路径。也就是说,第一令牌桶的令牌对应一条转发路径,而第二令牌桶的令牌可能对应另一条转发路径。在本申请实施例中,第一令牌桶的令牌对应的转发路径可以被称为第二路径,第二令牌桶的令牌对应的转发路径可以被称为第一路径。
根据前文介绍可知,在报文加入队列的速率较低的情况下,第一设备可以优先通过第一令牌桶调度报文出队。也就是说,网络设备可以优先选择第二路径转发报文。那么,与第一令牌桶对应的第二路径可以是链路质量较高的路径。即,第二路径的链路质量可以高于第一路径的链路质量。
相应地,在将第一报文排出队列之后,第一设备可以根据第一报文被发放的令牌从第一路径和第二路径中确定第一报文的转发路径,进而确定发送第一报文的出接口,并从对应的出接口发送第一报文。如果第一报文被发放了第一令牌桶的令牌,第一设备可以通过第二路径对应的出接口转发第一报文。如果第一报文被发放了第二令牌桶的令牌,第一设备可以通过第一路径对应的出接口转发第一报文。
下面以图1为例进行说明。假设第一设备为图1中的网络设备121,第一报文为设备111向设备112发送的报文X。从图1所示的系统结构图可知,网络设备121可以通过网络接口A1接收报文X,并通过网络接口A2或网络接口A3转发报文。相应地,本申请实施例提供的方法可以应用于网络设备121的入接口,即网络接口A1,用于在网络设备121接收到报文X后根据令牌桶和队列的情况转发报文X。
可选地,由于路径“网络设备121→网络设备123→网络设备124→网络设备125”经过的网络设备较多,可能导致路径“网络设备121→网络设备123→网络设备124→网络设备125”的链路质量较低,例如该路径的时延值可能较高。因此,可以将路径“网络设备121→网络设备123→网络设备124→网络设备125”确定为第一路径,将路径“网络设备121→网络设备122”确定为第二路径。
在将报文X排出队列以后,如果报文X被发放了第一令牌桶的令牌,网络设备121可以通过网络接口A2调度报文X出队,以使报文X通过路径“网络设备121→网络设备122”转发;如果报文X被发放了第二令牌桶的令牌,网络设备121可以通过网络接口A3调度报文X出队,以使报文X通过路径“网络设备121→网络设备123→网络设备124→网络设备125”转发。如此,在报文入队的速率较低的情况下,网络设备121可以通过链路质量较高的路径“网络设备121→网络设备122”转发报文。随着报文入队速率的提高,网络设备121可以通过路径“网络设备121→网络设备123→网络设备124→网络设备125”分担路径“网络设备121→网络设备122”的压力。另外,通过队列和第一阈值进行缓冲,还可以避免第一令牌桶和第二令牌桶之间的频繁切换,实现对报文流的流量整形。
在一些可能的实现方式中,第一路径所在的接入网络的网络类型与第二路径所在的接入网络的网络类型可能不同。也就是说,第一设备可以根据报文入队的情况选择不同的接入网络转发报文。那么,第二路径所在的接入网络在性能上可能优于第一路径所在的接入网络。第一设备优先通过第二路径转发第一报文,可以提升第一报文的QoS参数,或降低报文的转发成本。
例如,对于混合接入(Hybrid Access,HA)场景下的混合接入汇聚节点(Hybrid Access Aggregation Point,HAAP)可以支持用户绑定数字用户线路(Digital Subscriber Line,DSL)和/或长期演进(Long Term Evolution,LTE)两种接入网络。对于用户的部分业务而言,报文可以优先通过DSL对应的路径传输,在DSL带宽不足的情况下再通过到LTE对应的路径传输。那么,前述第一路径可以是LTE对应的路径,第二路径可以是DSL对应的路径。
在一些其他可能的实现方式中,第一路径所在网络的网络类型与第二路径所在网络的网络类型可能相同,但是通过第一路径所在网络转发报文的成本与通过第二路径所在网络转发报文的成本可能不同。例如,第一路径的带宽可以大于第二路径的带宽。这样,在转发报文的过程中,第一设备可以优先利用带宽较小的第二路径转发报文,并在报文入队速率较高且第二路径的带宽占用较多的情况下再利用第一队列转发报文。如此,可以提高第二路径的利用率,从而减小报文传输的成本。
在第二种可能的实现方式中,第一设备根据第一报文被发放的令牌确定第一报文的转发优先等级。其中,转发优先等级用于指示转发第一报文的网络设备对第一报文进行转发。第一令牌桶和第二令牌桶分别对应不同的转发优先等级。随着第一报文被发放的令牌不同,第一设备为第一报文设置的转发优先等级也不同。在本申请实施例中,第一令牌桶的令牌对应的转发优先等级被称为第二优先级,第二令牌桶的令牌对应的转发优先等级被称为第 一优先级。可选地,第一优先级的转发优先等级低于第二优先级的转发优先等级。即,在转发报文的过程中,设备可以优先转发第一优先级的报文。
在确定第一报文的转发优先等级之后,第一设备可以根据第一报文的转发优先等级对第一报文进行转发。可选地,第一设备可以在第一报文中添加用于指示转发优先等级的标记,接着再向下一跳设备发送标记后的第一报文。这样,下一跳设备可以根据第一报文中的标记确定第一报文的转发优先等级,从而选择转发优先等级对应的转发方式对第一报文进行转发。
举例说明。假设本申请实施例提供的方法应用于图1中的网络设备121,第一报文为设备111向设备112发送的报文Y。网络设备121在调度报文Y排出队列之后,可以根据报文Y被发放的令牌确定报文Y的转发优先等级,并为报文Y添加对应的标记。假设报文Y被发放了第一令牌桶的令牌,网络设备121可以确定报文Y的转发优先等级为第二优先级,并为报文Y添加第二优先级对应的标记,例如网络设备可以将报文Y标记为绿色(Green)状态;假设报文Y被发放了第二令牌桶的令牌,网络设备121可以确定报文Y的转发优先等级为第一优先级,并为报文Y添加第一优先级对应的标记,例如网络设备可以将报文Y标记为黄色(Yellow)状态。
在为报文Y添加标记之后,网络设备121可以向网络设备122发送报文Y。网络设备122上可以部署有单速率三颜色标记(Single Rate Three Color Marker,SRTCM)算法或双速率三颜色标记(Two Rate Three Color Marker,TRTCM)算法。通过SRTCM算法或TRTCM算法,网络设备122可以根据报文Y中的标记确定报文Y的调度方式,从而根据转发优先等级对报文Y进行转发。
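上述打标记的过程可以概括为如下草图(Python;字段名与取值均为示意性假设,并非 SRTCM/TRTCM 算法本身的实现):

```python
# 第一令牌桶的令牌对应绿色(第二优先级,较高),第二令牌桶的令牌对应黄色(第一优先级,较低)
COLOR_BY_BUCKET = {'C': 'green', 'P': 'yellow'}

def mark_packet(packet, bucket):
    """返回带颜色标记的报文副本,下一跳设备可据此确定调度方式。"""
    marked = dict(packet)
    marked['color'] = COLOR_BY_BUCKET[bucket]
    return marked

def forwarding_priority(packet):
    # 绿色报文的转发优先等级高于黄色报文
    return 'high' if packet['color'] == 'green' else 'low'
```

例如,被发放第一令牌桶令牌的报文Y被标记为绿色并以较高优先等级转发;被发放第二令牌桶令牌的报文Y被标记为黄色,下一跳设备按较低优先等级对其进行调度。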
在上述介绍中,第一设备可以根据第一报文被发放的令牌确定第一报文的转发路径,或者可以根据第一报文被发放的令牌确定第一报文的转发优先等级。在一些可能的实现方式中,第一设备可以根据第一报文被发放的令牌确定第一报文的转发路径和转发优先等级。
为便于理解,下面以网络设备121用于接收来自设备111的报文流,且报文流中各个报文的目标设备为设备112为例进行说明。其中,第一令牌桶的令牌与第一路径相对应,第二令牌桶的令牌与第二路径相对应。
参见图3,图3为本申请实施例提供的报文调度的方法的一种方法流程图,包括:
S301:网络设备121接收报文M。
如图1所示,网络设备121可以通过网络接口A1接收设备111发送的报文M。在接收到报文M之后,网络设备121可以作为第一设备执行本申请实施例提供的报文调度的方法。
S302:网络设备121判断队列中缓存的报文长度是否不小于门限值。
在接收到报文M之后,网络设备121可以判断队列中缓存的报文长度是否不小于门限值。其中,队列是用于存储待调度的报文的队列,门限值可以是队列能够容纳的报文长度的最大值,或者是队列能够容纳的报文长度的最大值与报文M的长度之差。可选地,所述门限值可以通过报文个数或报文字节总数表示。
如果队列中缓存的报文长度大于或等于门限值,说明队列无法继续被添加新的报文,网络设备121可以执行步骤S303。如果队列中缓存的报文长度小于门限值,网络设备121可以执行步骤S304。
可选地,如果门限值为队列能够容纳的报文长度的最大值与报文M的长度之差,那么网络设备121也可以在队列中缓存的报文长度等于门限值时执行步骤S304。
S303:网络设备121丢弃报文M。
如果队列中缓存的报文长度不小于门限值,说明队列中缓存的报文等于或接近队列能够容纳的最大限度,队列无法继续容纳其他报文,导致报文M无法加入队列。那么网络设备121可以丢弃报文M。
可选地,在一些可能的实现方式中,网络设备121也可以在队列中缓存的报文长度大于或等于门限值的情况下不丢弃报文M,例如网络设备121可以将报文M存储在异于队列以外的其他存储位置,以待满足条件的情况下对报文M进行调度。
S304:网络设备121将报文M加入队列。
如果队列中缓存的报文长度小于门限值,说明队列中缓存的报文未达到队列能够容纳的最大限度,队列可以继续容纳其他报文。那么网络设备121可以将报文M加入队列,并继续执行步骤S305。可选地,网络设备121可以将报文M添加到队列的尾部。
S305:网络设备121判断第一令牌桶的剩余令牌是否满足目标报文排出队列。
在报文M被加入队列之后,网络设备121可以判断第一令牌桶的剩余令牌是否满足目标报文排出队列。其中,目标报文为队列中缓存的、位于队列头部的报文。即,目标报文为待调度的报文中首个需要被调度的报文。可选地,在目标报文被排出队列之前,队列中其他报文保持在队列中不被排出队列。可以理解的是,随着目标报文被排出队列,网络设备121可以将目标报文出队后位于队列头部的报文确定为新的目标报文,即目标报文始终为队列中首个待调度的报文。
可选地,网络设备121可以比较第一令牌桶的剩余令牌的数量是否大于或等于目标报文的字节数。如果第一令牌桶的剩余令牌的数量大于或等于目标报文的字节数,网络设备可以确定第一令牌桶的剩余令牌满足目标报文排出队列,执行步骤S306;如果第一令牌桶的剩余令牌的数量小于目标报文的字节数,网络设备可以确定第一令牌桶的剩余令牌不满足目标报文排出队列,执行步骤S307。
关于网络设备121判断第一令牌桶的剩余令牌是否满足目标报文排出队列的具体过程可以参见图2对应实施例的描述,这里不再赘述。
S306:网络设备121通过第二路径转发目标报文。
若确定第一令牌桶的剩余令牌满足目标报文排出队列,网络设备121可以为目标报文发放第一令牌桶的令牌,调度目标报文排出队列。接着,网络设备121可以通过第一令牌桶的令牌对应的第二路径转发目标报文。其中,在图1所示的应用场景中,第二路径可以是路径“网络设备121→网络设备122”。关于网络设备121确定转发路径的介绍可以参见前文,这里不再赘述。
在调度目标报文排出队列之后,网络设备121可以调整第一令牌桶的剩余令牌的数量。 例如可以从第一令牌桶中移除部分或全部令牌,且被移除的令牌的数量可以等于目标报文的字节数。
在调度目标报文排出队列之后,网络设备121可以将位于队列头部的报文确定为新的目标报文,并返回执行步骤S305。
S307:网络设备121判断队列中缓存的报文长度是否不小于第一阈值。
若确定第一令牌桶的剩余令牌不满足目标报文排出队列,网络设备121可以进一步判断队列中缓存的报文长度是否小于第一阈值。如果队列中缓存的报文长度不小于第一阈值,网络设备121可以执行步骤S308;如果队列中缓存的报文长度小于第一阈值,网络设备121可以执行步骤S310。
S308:网络设备121判断第二令牌桶的剩余令牌是否满足目标报文排出队列。
根据前文介绍可知,本申请实施例中的“第一阈值”为启用第二令牌桶进行报文调度的门限值。如果队列中缓存的报文长度大于或等于第一阈值,说明网络设备121可以启用第二令牌桶对报文进行调度。在利用第二令牌桶对报文进行调度的过程中,网络设备121可以先判断第二令牌桶的剩余令牌是否满足目标报文排出队列。关于网络设备121判断第二令牌桶的剩余令牌是否满足目标报文排出队列的具体过程可以参见前述对应实施例的描述,这里不再赘述。
如果第二令牌桶的剩余令牌满足目标报文排出队列,说明第二令牌桶允许网络设备121根据第二令牌桶将目标报文排出队列,网络设备121可以执行步骤S309;如果第二令牌桶的剩余令牌不满足目标报文排出队列,说明网络设备121无法通过第二令牌桶调度目标报文排出队列,那么网络设备121可以执行步骤S310。
S309:网络设备121通过第一路径转发目标报文。
若确定第二令牌桶的剩余令牌满足目标报文排出队列,网络设备121可以为目标报文发放第二令牌桶的令牌,调度目标报文排出队列。接着,网络设备121可以通过第二令牌桶的令牌对应的第一路径转发目标报文。其中,在图1所示的应用场景中,第一路径可以是路径“网络设备121→网络设备123→网络设备124→网络设备125”。关于网络设备121确定转发路径的介绍可以参见前文,这里不再赘述。
与步骤S306类似,网络设备121可以在调度目标报文排出队列之后调整第二令牌桶的剩余令牌的数量。例如可以从第二令牌桶中移除部分或全部令牌,且被移除的令牌的数量可以等于目标报文的字节数。另外,在调度目标报文排出队列之后,网络设备121可以将位于队列头部的报文确定为新的目标报文,并返回执行步骤S305。
S310:网络设备121将目标报文保持在队列中。
若队列中缓存的报文长度小于第一阈值,或者第二令牌桶的剩余令牌不满足目标报文排出队列,网络设备121可以将目标报文保持在队列中,不调度目标报文排出队列。具体地,如果队列中缓存的报文长度小于第一阈值,网络设备121不启用第二令牌桶对报文进行调度。那么由于第一令牌桶的剩余令牌不满足目标报文排出队列,网络设备121可以将目标报文保持在队列中,不调度目标报文排出队列。如果第二令牌桶的剩余令牌不满足目标报文排出队列,那么即使队列中缓存的报文长度大于或等于第一阈值,第二令牌桶都不足以将目标报文排出队列。那么网络设备121可以将目标报文保持在队列中,不调度目标报文排出队列。
可以理解的是,在一些可能的情况中,随着新的报文入队,队列中缓存的报文长度从小于第一阈值增加至第一阈值或第一阈值以上。那么,在确定队列中缓存的报文长度增加至第一阈值或第一阈值以上之后,网络设备121可以启用第二令牌桶对目标报文进行调度。
S311:网络设备121调整第一令牌桶的剩余令牌的数量,和/或,调整第二令牌桶的剩余令牌的数量。
在调度报文的过程中,网络设备121可以根据第一预设速率调整第一令牌桶的剩余令牌的数量,和/或,网络设备121可以根据第二预设速率调整第二令牌桶的剩余令牌的数量。可选地,网络设备可以周期性地向第一令牌桶和/或第二令牌桶中注入令牌。
在向第一令牌桶或第二令牌桶中注入令牌之后,网络设备121可以返回执行步骤S305,判断令牌桶是否满足报文排出队列的条件。
需要说明的是,为了便于说明,在图3给出的流程图中,步骤S311在步骤S310之后执行。事实上令牌桶中令牌的注入与目标报文被保持在队列中不具有顺序关系,即步骤S311可以在任意时刻执行。在实际应用场景中,令牌桶中剩余令牌数量的调整与报文是否排出队列无关。
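图3的流程(S302~S304的入队检查、S305~S310的双令牌桶调度、S311的令牌注入)可以串联为如下可运行的草图(Python;类名、参数与数值均为示意性假设,并非本申请的实际实现):

```python
from collections import deque

class DualBucketScheduler:
    def __init__(self, c_rate, p_rate, bucket_cap, queue_cap, threshold):
        self.c_tokens = bucket_cap   # 第一令牌桶(对应第二路径/较高优先级)
        self.p_tokens = bucket_cap   # 第二令牌桶(对应第一路径/较低优先级)
        self.c_rate, self.p_rate = c_rate, p_rate
        self.cap = bucket_cap
        self.queue = deque()         # 缓存队列,元素为报文字节数
        self.queue_bytes = 0
        self.queue_cap = queue_cap   # 队列门限值(S302)
        self.threshold = threshold   # 启用第二令牌桶的第一阈值(S307)

    def enqueue(self, pkt_len):
        # S302~S304:队列已达门限则丢弃报文M,否则加入队尾
        if self.queue_bytes + pkt_len > self.queue_cap:
            return False
        self.queue.append(pkt_len)
        self.queue_bytes += pkt_len
        return True

    def refill(self, elapsed=1.0):
        # S311:按预设速率向两个令牌桶注入令牌,不超过桶的容量
        self.c_tokens = min(self.cap, self.c_tokens + self.c_rate * elapsed)
        self.p_tokens = min(self.cap, self.p_tokens + self.p_rate * elapsed)

    def dequeue(self):
        # S305~S310:调度队首的目标报文,返回所走“路径”,无法调度时返回 None
        if not self.queue:
            return None
        pkt = self.queue[0]
        if self.c_tokens >= pkt:                                   # S305→S306
            self.c_tokens -= pkt
            path = 'path2'                                         # 第一令牌桶对应第二路径
        elif self.queue_bytes >= self.threshold and self.p_tokens >= pkt:
            self.p_tokens -= pkt                                   # S307→S308→S309
            path = 'path1'
        else:
            return None                                            # S310:保持在队列中
        self.queue.popleft()
        self.queue_bytes -= pkt
        return path
```

例如,向上述调度器连续加入 4 个 1500 字节的报文(积压 6000 字节、阈值 4000 字节):第一个报文消耗第一令牌桶走第二路径;积压仍不小于阈值时,第二个报文启用第二令牌桶走第一路径;积压降到阈值以下后,报文被保持在队列中,直到第一令牌桶的令牌重新累积。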
参见图4,本申请实施例还提供了一种报文调度的装置400,该报文调度的装置400可以实现图2或图3所示实施例中第一设备的功能。该报文调度的装置400包括判断单元410和调度单元420。其中,判断单元410用于实现图2所示实施例中的S201,调度单元420用于实现图2所示实施例中的S202。
具体的,判断单元410,用于当第一令牌桶的剩余令牌的数量不满足第一报文排出队列,确定所述队列中缓存的报文长度是否小于第一阈值。
调度单元420,用于响应于所述队列中缓存的报文长度不小于所述第一阈值,为所述第一报文发放第二令牌桶的令牌,并将所述第一报文排出所述队列。
具体执行过程请参考上述图2或图3所示实施例中相应步骤的详细描述,这里不再一一赘述。
需要说明的是,本申请实施例中对单元的划分是示意性的,仅仅为一种逻辑功能划分,实际实现时可以有另外的划分方式。本申请实施例中的各功能单元可以集成在一个处理单元中,也可以是各个单元单独物理存在,也可以两个或两个以上单元集成在一个单元中。例如,上述实施例中,调度单元和判断单元可以是同一个单元,也可以是不同的单元。上述集成的单元既可以采用硬件的形式实现,也可以采用软件功能单元的形式实现。
图5是本申请实施例提供的一种设备500的结构示意图。上文中的报文调度装置400可以通过图5所示的设备来实现。参见图5,该设备500包括至少一个处理器501,通信总线502以及至少一个网络接口504,可选地,该设备500还可以包括存储器503。
处理器501可以是一个通用中央处理器(Central Processing Unit,CPU)、特定应用集成电路(Application-specific Integrated Circuit,ASIC)或一个或多个用于控制本申请方案程序执行的集成电路(Integrated Circuit,IC)。处理器可以用于对报文和令牌桶进行处理,以实现本申请实施例中提供的报文调度的方法。
比如,当图2中的第一设备通过图5所示的设备来实现时,该处理器可以用于,当第一令牌桶的剩余令牌的数量不满足第一报文排出队列,确定所述队列中缓存的报文长度是否小于第一阈值;响应于所述队列中缓存的报文长度不小于所述第一阈值,为所述第一报文发放第二令牌桶的令牌,并将所述第一报文排出所述队列。
通信总线502用于在处理器501、网络接口504和存储器503之间传送信息。
存储器503可以是只读存储器(Read-only Memory,ROM)或可存储静态信息和指令的其它类型的静态存储设备,存储器503还可以是随机存取存储器(Random Access Memory,RAM)或者可存储信息和指令的其它类型的动态存储设备,也可以是只读光盘(Compact Disc Read-only Memory,CD-ROM)或其它光盘存储、光碟存储(包括压缩光碟、激光碟、光碟、数字通用光碟、蓝光光碟等)、磁盘存储介质或者其它磁存储设备、或者能够用于携带或存储具有指令或数据结构形式的期望的程序代码并能够由计算机存取的任何其它介质,但不限于此。存储器503可以是独立存在,通过通信总线502与处理器501相连接。存储器503也可以和处理器501集成在一起。
可选地,存储器503用于存储执行本申请实施例提供的技术方案的程序代码或指令,并由处理器501来控制执行。处理器501用于执行存储器503中存储的程序代码或指令。程序代码中可以包括一个或多个软件模块。可选地,处理器501也可以存储执行本申请实施例提供的技术方案的程序代码或指令,在这种情况下处理器501不需要到存储器503中读取程序代码或指令。
网络接口504可以为收发器一类的装置,用于与其它设备或通信网络通信,通信网络可以为以太网、无线接入网(RAN)或无线局域网(Wireless Local Area Networks,WLAN)等。在本申请实施例中,网络接口504可以用于接收分段路由网络中的其他节点发送的报文,也可以向分段路由网络中的其他节点发送报文。网络接口504可以为以太接口(Ethernet)接口、快速以太(Fast Ethernet,FE)接口或千兆以太(Gigabit Ethernet,GE)接口等。
在具体实现中,作为一种实施例,设备500可以包括多个处理器,例如图5中所示的处理器501和处理器505。这些处理器中的每一个可以是一个单核(single-CPU)处理器,也可以是一个多核(multi-CPU)处理器。这里的处理器可以指一个或多个设备、电路、和/或用于处理数据(例如计算机程序指令)的处理核。
图6是本申请实施例提供的一种设备600的结构示意图。图2或图3中的第一设备可以通过图6所示的设备来实现。参见图6所示的设备结构示意图,设备600包括主控板和一个或多个接口板。主控板与接口板通信连接。主控板也称为主处理单元(Main Processing Unit,MPU)或路由处理卡(Route Processor Card),主控板包括CPU和存储器,主控板负 责对设备600中各个组件的控制和管理,包括路由计算、设备管理和维护功能。接口板也称为线处理单元(Line Processing Unit,LPU)或线卡(Line Card),用于接收和发送报文。在一些实施例中,主控板与接口板之间或接口板与接口板之间通过总线通信。在一些实施例中,接口板之间通过交换网板通信,在这种情况下设备600也包括交换网板,交换网板与主控板、接口板通信连接,交换网板用于转发接口板之间的数据,交换网板也可以称为交换网板单元(Switch Fabric Unit,SFU)。接口板包括CPU、存储器、转发引擎和接口卡(Interface Card,IC),其中接口卡可以包括一个或多个网络接口。网络接口可以为Ethernet接口、FE接口或GE接口等。CPU与存储器、转发引擎和接口卡分别通信连接。存储器用于存储转发表。转发引擎用于基于存储器中保存的转发表转发接收到的报文,如果接收到的报文的目的地址为设备600的IP地址,则将该报文发送给主控板或接口板的CPU进行处理;如果接收到的报文的目的地址不是设备600的IP地址,则根据该目的地查转发表,如果从转发表中查找到该目的地址对应的下一跳和出接口,将该报文转发到该目的地址对应的出接口。转发引擎可以是网络处理器(Network Processor,NP)。接口卡也称为子卡,可安装在接口板上,负责将光电信号转换为数据帧,并对数据帧进行合法性检查后转发给转发引擎处理或接口板CPU。在一些实施例中,CPU也可执行转发引擎的功能,比如基于通用CPU实现软转发,从而接口板中不需要转发引擎。在一些实施例中,转发引擎可以通过ASIC或现场可编程门阵列(Field Programmable Gate Array,FPGA)实现。在一些实施例中,存储转发表的存储器也可以集成到转发引擎中,作为转发引擎的一部分。
本申请实施例还提供一种芯片系统,包括:处理器,所述处理器与存储器耦合,所述存储器用于存储程序或指令,当所述程序或指令被所述处理器执行时,使得该芯片系统实现上述图2所示实施例中第一设备执行的报文调度的方法。
可选地,该芯片系统中的处理器可以为一个或多个。该处理器可以通过硬件实现也可以通过软件实现。当通过硬件实现时,该处理器可以是逻辑电路、集成电路等。当通过软件实现时,该处理器可以是一个通用处理器,通过读取存储器中存储的软件代码来实现。
可选地,该芯片系统中的存储器也可以为一个或多个。该存储器可以与处理器集成在一起,也可以和处理器分离设置,本申请并不限定。示例性的,存储器可以是非瞬时性处理器,例如只读存储器ROM,其可以与处理器集成在同一块芯片上,也可以分别设置在不同的芯片上,本申请对存储器的类型,以及存储器与处理器的设置方式不作具体限定。
示例性的,该芯片系统可以是FPGA,可以是ASIC,还可以是系统芯片(System on Chip,SoC),还可以是CPU,还可以是NP,还可以是数字信号处理电路(Digital Signal Processor,DSP),还可以是微控制器(Micro Controller Unit,MCU),还可以是可编程控制器(Programmable Logic Device,PLD)或其他集成芯片。
应理解,上述方法实施例中的各步骤可以通过处理器中的硬件的集成逻辑电路或者软件形式的指令完成。结合本申请实施例所公开的方法步骤可以直接体现为硬件处理器执行完成,或者用处理器中的硬件及软件模块组合执行完成。
本申请实施例还提供了一种计算机可读存储介质,包括指令,当其在计算机上运行时,使得计算机执行以上方法实施例提供的、由第一设备执行的报文调度的方法。
本申请实施例还提供了一种包含指令的计算机程序产品,当其在计算机上运行时,使得计算机执行以上方法实施例提供的、由第一设备执行的报文调度的方法。
本申请的说明书和权利要求书及上述附图中的术语“第一”、“第二”、“第三”、“第四”等(如果存在)是用于区别类似的对象,而不必用于描述特定的顺序或先后次序。应该理解这样使用的数据在适当情况下可以互换,以便这里描述的实施例能够以除了在这里图示或描述的内容以外的顺序实施。此外,术语“包括”和“具有”以及他们的任何变形,意图在于覆盖不排他的包含,例如,包含了一系列步骤或单元的过程、方法、系统、产品或设备不必限于清楚地列出的那些步骤或单元,而是可包括没有清楚地列出的或对于这些过程、方法、产品或设备固有的其它步骤或单元。
所属领域的技术人员可以清楚地了解到,为描述的方便和简洁,上述描述的系统,装置和单元的具体工作过程,可以参考前述方法实施例中的对应过程,在此不再赘述。
在本申请所提供的几个实施例中,应该理解到,所揭露的系统,装置和方法,可以通过其它的方式实现。例如,以上所描述的装置实施例仅仅是示意性的,例如,所述单元的划分,仅仅为一种逻辑模块划分,实际实现时可以有另外的划分方式,例如多个单元或组件可以结合或者可以集成到另一个系统,或一些特征可以忽略,或不执行。另一点,所显示或讨论的相互之间的耦合或直接耦合或通信连接可以是通过一些接口,装置或单元的间接耦合或通信连接,可以是电性,机械或其它的形式。
所述作为分离部件说明的单元可以是或者也可以不是物理上分开的,作为单元显示的部件可以是或者也可以不是物理单元,即可以位于一个地方,或者也可以分布到多个网络单元上。可以根据实际的需要获取其中的部分或者全部单元来实现本实施例方案的目的。
另外,在本申请各个实施例中的各模块单元可以集成在一个处理单元中,也可以是各个单元单独物理存在,也可以两个或两个以上单元集成在一个单元中。上述集成的单元既可以采用硬件的形式实现,也可以采用软件模块单元的形式实现。
所述集成的单元如果以软件模块单元的形式实现并作为独立的产品销售或使用时,可以存储在一个计算机可读取存储介质中。基于这样的理解,本申请的技术方案本质上或者说对现有技术做出贡献的部分或者该技术方案的全部或部分可以以软件产品的形式体现出来,该计算机软件产品存储在一个存储介质中,包括若干指令用以使得一台计算机设备(可以是个人计算机,服务器,或者网络设备等)执行本申请各个实施例所述方法的全部或部分步骤。而前述的存储介质包括:U盘、移动硬盘、只读存储器(ROM,Read-Only Memory)、随机存取存储器(RAM,Random Access Memory)、磁碟或者光盘等各种可以存储程序代码的介质。
本领域技术人员应该可以意识到,在上述一个或多个示例中,本发明所描述的功能可以用硬件、软件、固件或它们的任意组合来实现。当使用软件实现时,可以将这些功能存储在计算机可读介质中或者作为计算机可读介质上的一个或多个指令或代码进行传输。计算机可读介质包括计算机存储介质和通信介质,其中通信介质包括便于从一个地方向另一个地方传送计算机程序的任何介质。存储介质可以是通用或专用计算机能够存取的任何可用介质。
以上所述的具体实施方式,对本发明的目的、技术方案和有益效果进行了进一步详细说明,所应理解的是,以上所述仅为本发明的具体实施方式而已。
以上所述,以上实施例仅用以说明本申请的技术方案,而非对其限制;尽管参照前述实施例对本申请进行了详细的说明,本领域的普通技术人员应当理解:其依然可以对前述各实施例所记载的技术方案进行修改,或者对其中部分技术特征进行等同替换;而这些修改或者替换,并不使相应技术方案的本质脱离本申请各实施例技术方案的范围。

Claims (21)

  1. 一种报文调度的方法,其特征在于,所述方法包括:
    当第一设备的第一令牌桶的剩余令牌的数量不满足第一报文排出队列,所述第一设备确定所述队列中缓存的报文长度是否小于第一阈值;
    响应于所述队列中缓存的报文长度不小于所述第一阈值,所述第一设备为所述第一报文发放第二令牌桶的令牌,并将所述第一报文排出所述队列。
  2. 根据权利要求1所述的方法,其特征在于,所述将所述第一报文排出所述队列,包括:
    所述第一设备通过第一路径发送所述第一报文,所述第一路径是所述第二令牌桶对应的转发路径,所述第一路径的链路质量低于第二路径的链路质量,所述第二路径是所述第一令牌桶对应的转发路径。
  3. 根据权利要求2所述的方法,其特征在于,所述第一路径所在的接入网络的网络类型与所述第二路径所在的接入网络的网络类型不同。
  4. 根据权利要求1所述的方法,其特征在于,所述将所述第一报文排出所述队列,包括:
    所述第一设备根据第一优先级发送所述第一报文,所述第一优先级是所述第二令牌桶对应的转发优先等级,所述第一优先级的转发优先等级低于第二优先级的转发优先等级,所述第二优先级的转发优先等级是所述第一令牌桶对应的转发优先等级。
  5. 根据权利要求1-4任一项所述的方法,其特征在于,所述方法还包括:
    当所述第一设备的所述第一令牌桶的剩余令牌的数量不满足第二报文排出队列,所述第一设备确定所述队列中缓存的报文长度是否小于所述第一阈值;
    响应于所述队列中缓存的报文长度小于所述第一阈值,所述第一设备在所述第一令牌桶的剩余令牌的数量能够满足所述第二报文排出所述队列之前,将所述第二报文保持在所述队列中。
  6. 根据权利要求5所述的方法,其特征在于,在所述将所述第二报文保持在所述队列中之后,所述方法还包括:
    所述第一设备确定所述第一令牌桶的剩余令牌的数量满足所述第二报文排出队列;
    所述第一设备为所述第二报文发放所述第一令牌桶的令牌,并将所述第二报文排出队列。
  7. 根据权利要求6所述的方法,其特征在于,所述将所述第二报文排出所述队列,包括:
    所述第一设备通过第二路径发送所述第二报文,所述第二路径是所述第一令牌桶对应的转发路径。
  8. 根据权利要求6所述的方法,其特征在于,所述将所述第二报文排出所述队列,包括:
    所述第一设备根据第二优先级发送所述第二报文,所述第二优先级是所述第一令牌桶对应的转发优先等级。
  9. 根据权利要求1-8任一项所述的方法,其特征在于,在所述第一设备确定队列中缓存的报文长度是否小于第一阈值之前,所述方法还包括:
    响应于所述第三报文加入队列,所述第一设备判断所述第一令牌桶的剩余令牌的数量是否满足第一报文排出队列。
  10. 一种报文调度的装置,其特征在于,所述装置应用于第一设备,包括:
    判断单元,用于当第一令牌桶的剩余令牌的数量不满足第一报文排出队列,确定所述队列中缓存的报文长度是否小于第一阈值;
    调度单元,用于响应于所述队列中缓存的报文长度不小于所述第一阈值,为所述第一报文发放第二令牌桶的令牌,并将所述第一报文排出所述队列。
  11. 根据权利要求10所述的装置,其特征在于,所述装置还包括发送单元,
    所述发送单元,用于通过第一路径发送所述第一报文,所述第一路径是所述第二令牌桶对应的转发路径,所述第一路径的链路质量低于第二路径的链路质量,所述第二路径是所述第一令牌桶对应的转发路径。
  12. 根据权利要求11所述的装置,其特征在于,所述第一路径所在的接入网络的网络类型与所述第二路径所在的接入网络的网络类型不同。
  13. 根据权利要求10所述的装置,其特征在于,所述装置还包括发送单元,
    所述发送单元,用于根据第一优先级发送所述第一报文,所述第一优先级是所述第二令牌桶对应的转发优先等级,所述第一优先级的转发优先等级低于第二优先级的转发优先等级,所述第二优先级的转发优先等级是所述第一令牌桶对应的转发优先等级。
  14. 根据权利要求10-13任一项所述的装置,其特征在于,
    所述判断单元,还用于当所述第一令牌桶的剩余令牌的数量不满足第二报文排出队列,确定所述队列中缓存的报文长度是否小于所述第一阈值;
    所述调度单元,还用于响应于所述队列中缓存的报文长度小于所述第一阈值,在所述第一令牌桶的剩余令牌的数量能够满足所述第二报文排出所述队列之前,将所述第二报文保持在所述队列中。
  15. 根据权利要求14所述的装置,其特征在于,
    所述判断单元,还用于确定所述第一令牌桶的剩余令牌的数量满足所述第二报文排出队列;
    所述调度单元,还用于为所述第二报文发放所述第一令牌桶的令牌,并将所述第二报文排出队列。
  16. 根据权利要求15所述的装置,其特征在于,所述装置还包括发送单元,
    所述发送单元,用于通过第二路径发送所述第二报文,所述第二路径是所述第一令牌桶对应的转发路径。
  17. 根据权利要求15所述的装置,其特征在于,所述装置还包括发送单元,
    所述发送单元,用于根据第二优先级发送所述第二报文,所述第二优先级是所述第一令牌桶对应的转发优先等级。
  18. 根据权利要求10-17任一项所述的装置,其特征在于,
    所述调度单元,还用于响应于所述第三报文加入队列,判断所述第一令牌桶的剩余令牌的数量是否满足第一报文排出队列。
  19. 一种第一设备,其特征在于,所述第一设备包括处理器和存储器,所述存储器用于存储指令或程序代码,所述处理器用于从存储器中调用并运行所述指令或程序代码,以执行如权利要求1-9任一项所述的报文调度的方法。
  20. 一种芯片,其特征在于,包括存储器和处理器,存储器用于存储指令或程序代码,处理器用于从存储器中调用并运行该指令或程序代码,以执行如权利要求1-9任一项所述的报文调度的方法。
  21. 一种计算机可读存储介质,其特征在于,包括指令、程序或代码,当其在计算机上执行时,使得所述计算机执行如权利要求1-9任一项所述的报文调度的方法。
PCT/CN2022/127533 2021-10-28 2022-10-26 一种报文调度的方法及装置 WO2023072112A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
EP22885989.8A EP4344155A4 (en) 2021-10-28 2022-10-26 PACKAGE PLANNING METHOD AND APPARATUS

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202111266385.2 2021-10-28
CN202111266385.2A CN116055407A (zh) 2021-10-28 2021-10-28 一种报文调度的方法及装置

Publications (1)

Publication Number Publication Date
WO2023072112A1 true WO2023072112A1 (zh) 2023-05-04

Family

ID=86126052

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/127533 WO2023072112A1 (zh) 2021-10-26 Packet scheduling method and apparatus

Country Status (3)

Country Link
EP (1) EP4344155A4 (zh)
CN (1) CN116055407A (zh)
WO (1) WO2023072112A1 (zh)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110243139A1 (en) * 2010-03-30 2011-10-06 Fujitsu Limited Band control apparatus, band control method, and storage medium
US8045456B1 (en) * 2006-11-27 2011-10-25 Marvell International Ltd. Hierarchical port-based rate limiting
CN109218215A (zh) * 2017-06-29 2019-01-15 华为技术有限公司 Packet transmission method and network device
CN110572329A (zh) * 2019-07-08 2019-12-13 紫光云技术有限公司 Network-adaptive traffic shaping method and system
WO2021098730A1 (zh) * 2019-11-20 2021-05-27 深圳市中兴微电子技术有限公司 Switching network congestion management method and apparatus, device, and storage medium
CN113422736A (zh) * 2021-06-16 2021-09-21 中移(杭州)信息技术有限公司 Token-bucket-based request management method, apparatus, device, and program product

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7609634B2 (en) * 2005-03-22 2009-10-27 Alcatel Lucent Communication traffic policing apparatus and methods


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
See also references of EP4344155A4
YANG, XU: "An SDN Architecture-based Dynamic Traffic Monitoring Method", Digital Communication World, no. 2, 28 February 2021, pages 78-81, XP009545932, ISSN: 1672-7274 *

Also Published As

Publication number Publication date
EP4344155A4 (en) 2024-05-22
CN116055407A (zh) 2023-05-02
EP4344155A1 (en) 2024-03-27

Similar Documents

Publication Publication Date Title
US9800513B2 (en) Mapped FIFO buffering
AU2002359740B2 (en) Methods and apparatus for network congestion control
US7391786B1 (en) Centralized memory based packet switching system and method
US8040889B2 (en) Packet forwarding device
US8274974B1 (en) Method and apparatus for providing quality of service across a switched backplane for multicast packets
WO2020192358A1 Packet forwarding method and network device
CN107948103B Prediction-based switch PFC control method and control system
US8310934B2 (en) Method and device for controlling information channel flow
WO2021178012A1 (en) Improving end-to-end congestion reaction using adaptive routing and congestion-hint based throttling for ip-routed datacenter networks
US10728156B2 (en) Scalable, low latency, deep buffered switch architecture
EP3188419B1 (en) Packet storing and forwarding method and circuit, and device
WO2016008399A1 (en) Flow control
US20080192633A1 (en) Apparatus and method for controlling data flow in communication system
JP2008166888A Priority bandwidth control method in a switch
US8908510B2 (en) Communication link with intra-packet flow control
US7554908B2 (en) Techniques to manage flow control
US7408876B1 (en) Method and apparatus for providing quality of service across a switched backplane between egress queue managers
JP4652314B2 Ethernet OAM switch device
CN102594669A Data packet processing method, apparatus, and device
US7203171B1 (en) Ingress discard in output buffered switching devices
CN102223311A Queue scheduling method and apparatus
WO2019232760A1 Data exchange method, data exchange node, and data center network
WO2023072112A1 Packet scheduling method and apparatus
US7599292B1 (en) Method and apparatus for providing quality of service across a switched backplane between egress and ingress queue managers
CN111434079B Data communication method and apparatus

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 22885989; Country of ref document: EP; Kind code of ref document: A1)
WWE Wipo information: entry into national phase (Ref document number: 2022885989; Country of ref document: EP)
ENP Entry into the national phase (Ref document number: 2022885989; Country of ref document: EP; Effective date: 20231220)