CN116264567A - Message scheduling method, network equipment and computer readable storage medium


Info

Publication number: CN116264567A
Application number: CN202111523786.1A
Authority: CN (China)
Prior art keywords: deadline, parameter, message, queue, priority
Legal status: Pending
Other languages: Chinese (zh)
Inventors: 彭少富, 谭斌
Current and Original Assignee: ZTE Corp
Priority and filing date: 2021-12-14
Application filed by ZTE Corp
Priority applications: CN202111523786.1A; PCT/CN2022/115920 (published as WO2023109188A1)
Publication: CN116264567A


Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 47/00: Traffic control in data switching networks
    • H04L 47/50: Queue scheduling
    • H04L 47/62: Queue scheduling characterised by scheduling criteria
    • H04L 47/625: Queue scheduling characterised by scheduling criteria for service slots or service orders
    • H04L 47/6255: Queue scheduling characterised by scheduling criteria for service slots or service orders; queue load conditions, e.g. longest queue first

Abstract

The invention provides a message scheduling method, network equipment and a computer readable storage medium. The message scheduling method comprises the following steps: acquiring a first message, and obtaining a corresponding first deadline parameter according to the first message; caching the first message into a corresponding deadline queue according to the first deadline parameter, wherein the priority of the deadline queue corresponds to the first deadline parameter; cyclically adjusting the priority of the deadline queue according to a preset time threshold; and scheduling and sending the first message when the priority of the deadline queue is the highest priority. According to the scheme of the embodiments of the invention, delay jitter can be controlled, traffic micro-bursts can be prevented, and the service quality of the network can be improved.

Description

Message scheduling method, network equipment and computer readable storage medium
Technical Field
Embodiments of the present invention relate to, but are not limited to, the field of data communications, and in particular, to a message scheduling method, a network device, and a computer readable storage medium.
Background
In the prior art, a network is formed by a plurality of nodes and the links interconnecting them. For the network to function properly, the functions the network requires need to be set up from the management plane, the control plane and the forwarding plane: the management plane covers the necessary configuration and management, the control plane covers inter-node network protocol operation and the generation of the corresponding forwarding entries, and the forwarding plane guides message forwarding according to those forwarding entries.
In a packet-multiplexing-based network, a node may receive or generate multiple messages and forward them to the same egress port, where the messages share the bandwidth resources of that port. In general, the forwarding plane matches certain fields in the message header to the corresponding priority of each message and then buffers the messages into queues of the corresponding priorities. The scheduling engine of the forwarding plane schedules the queues to send the buffered messages in order of priority, from high to low. However, in this priority-based queue scheduling mode, delay jitter cannot be controlled, and the forwarding acceleration of an intermediate node can leave a downstream node facing traffic micro-bursts, causing network congestion and degrading the service quality of the network.
Disclosure of Invention
The following is a summary of the subject matter described in detail herein. This summary is not intended to limit the scope of the claims.
The embodiment of the invention provides a message scheduling method, network equipment and a computer readable storage medium, which can control delay jitter, prevent traffic micro-burst and improve the service quality of a network.
In a first aspect, an embodiment of the present invention provides a method for scheduling a packet, including: acquiring a first message, and acquiring a corresponding first deadline parameter according to the first message; caching the first message into a corresponding deadline queue according to the first deadline parameter, wherein the priority of the deadline queue corresponds to the first deadline parameter; circularly adjusting the priority of the deadline queue according to a preset time threshold; and under the condition that the priority of the deadline queue is the highest priority, scheduling and sending the first message.
In a second aspect, an embodiment of the present invention further provides a network device, including: at least one processor; and at least one memory for storing at least one program; wherein the at least one program, when executed by the at least one processor, implements the message scheduling method described above.
In a third aspect, an embodiment of the present invention further provides a computer readable storage medium storing computer executable instructions for performing the packet scheduling method as described above.
The embodiment of the invention comprises the following steps: acquiring a first message, and acquiring a corresponding first deadline parameter according to the first message; according to the first deadline parameter, caching the first message into a corresponding deadline queue, wherein the priority of the deadline queue corresponds to the first deadline parameter; circularly adjusting the priority of the deadline queue according to a preset time threshold; and under the condition that the priority of the deadline queue is the highest priority, scheduling and sending the first message. According to the scheme provided by the embodiment of the invention, a first message is firstly obtained, and a corresponding first deadline parameter is obtained according to the first message; then according to the first deadline parameter, caching the first message into a corresponding deadline queue, wherein the priority of the deadline queue corresponds to the first deadline parameter; then, according to a preset time threshold, circularly adjusting the priority of the deadline queue; and finally, under the condition that the priority of the deadline queue is the highest priority, scheduling and sending the first message.
Additional features and advantages of the invention will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by practice of the invention. The objectives and other advantages of the invention will be realized and attained by the structure particularly pointed out in the written description and claims hereof as well as the appended drawings.
Drawings
The accompanying drawings are included to provide a further understanding of the invention and are incorporated in and constitute a part of this specification, illustrate and do not limit the invention.
FIG. 1 is a flow chart of a message scheduling method according to an embodiment of the present invention;
FIG. 2 is a specific flowchart of message buffering according to an embodiment of the present invention;
FIG. 3 is a specific flowchart of message buffering according to another embodiment of the present invention;
FIG. 4 is a specific flowchart of message buffering according to yet another embodiment of the present invention;
FIG. 5 is a specific flowchart of priority adjustment of a deadline queue according to an embodiment of the present invention;
FIG. 6 is a specific flowchart of priority adjustment of a deadline queue according to another embodiment of the present invention;
FIG. 7 is a specific flowchart of obtaining a first deadline parameter according to an embodiment of the present invention;
FIG. 8 is a specific flowchart of obtaining a first deadline parameter according to another embodiment of the present invention;
FIG. 9 is a specific flowchart of obtaining a first deadline parameter according to yet another embodiment of the present invention;
FIG. 10 is a flowchart of a message scheduling method according to another embodiment of the present invention;
FIG. 11 is a schematic diagram of a deadline queue provided by an embodiment of the invention;
FIG. 12 is a schematic diagram of a deadline queue provided by another embodiment of the invention;
FIG. 13 is a diagram illustrating buffering of messages to a deadline queue according to an embodiment of the present invention;
FIG. 14 is a diagram illustrating buffering of messages to a deadline queue according to another embodiment of the present invention;
FIG. 15 is a schematic diagram of a traffic engineering path according to an embodiment of the present invention;
FIG. 16 is a schematic diagram of an interior gateway protocol path according to an embodiment of the present invention;
FIG. 17 is a schematic structural diagram of a network device according to an embodiment of the present invention.
Detailed Description
The present invention will be described in further detail with reference to the drawings and examples, in order to make the objects, technical solutions and advantages of the present invention more apparent. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the invention.
It should be noted that although functional block division is performed in a device diagram and a logic sequence is shown in a flowchart, in some cases, the steps shown or described may be performed in a different order than the block division in the device, or in the flowchart. The terms first, second and the like in the description and in the claims and in the above-described figures, are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order.
The invention provides a message scheduling method, which comprises the following steps: acquiring a first message, and acquiring a corresponding first deadline parameter according to the first message; caching the first message into a corresponding deadline queue according to the first deadline parameter, wherein the priority of the deadline queue corresponds to the first deadline parameter; circularly adjusting the priority of the deadline queue according to a preset time threshold; and under the condition that the priority of the deadline queue is the highest priority, scheduling and sending the first message. According to the scheme provided by the embodiment of the invention, a first message is firstly obtained, and a corresponding first deadline parameter is obtained according to the first message; then according to the first deadline parameter, caching the first message into a corresponding deadline queue, wherein the priority of the deadline queue corresponds to the first deadline parameter; then, according to a preset time threshold, circularly adjusting the priority of the deadline queue; and finally, under the condition that the priority of the deadline queue is the highest priority, scheduling and sending the first message.
Embodiments of the present invention will be further described below with reference to the accompanying drawings.
As shown in fig. 1, fig. 1 is a flowchart of a message scheduling method according to an embodiment of the present invention. The message scheduling method includes, but is not limited to, step S100, step S200, step S300 and step S400:
step S100, a first message is obtained, and a corresponding first deadline parameter is obtained according to the first message;
step S200, caching the first message into a corresponding deadline queue according to a first deadline parameter, wherein the priority of the deadline queue corresponds to the first deadline parameter;
step S300, circularly adjusting the priority of the deadline queue according to a preset time threshold;
step S400, when the priority of the deadline queue is the highest priority, scheduling and sending the first message.
It should be noted that, first, a first message is acquired, and a corresponding first deadline parameter is obtained according to the first message; then according to the first deadline parameter, caching the first message into a corresponding deadline queue, wherein the priority of the deadline queue corresponds to the first deadline parameter; then, according to a preset time threshold, circularly adjusting the priority of the deadline queue; and finally, under the condition that the priority of the deadline queue is the highest priority, scheduling and sending the first message.
It should be noted that the term "first message" is used merely for the description of the following embodiments, and should not be taken to imply that the first message has a format different from that of other messages. A message is a data unit exchanged and transmitted in the network, i.e., the block of data a station sends in one operation; a message contains the complete data information to be sent, and messages vary in length, which is unlimited and variable. A message is also a unit of network transmission; during transmission it is successively encapsulated into packets, datagrams and frames, where encapsulation means adding certain information fields.
It should be noted that, the execution body of the embodiment of the present invention may be a network node in a network system, and the network node may be a router, a mobile phone, a computer, a server, etc., and all network devices with a data forwarding function may be considered as the execution node of the embodiment of the present invention, which belongs to the protection scope of the present invention. It is noted that the first message may be sent by one network node to another network node or generated by the network node itself.
It can be appreciated that a network node can acquire a plurality of messages at a time, each message corresponding to a first deadline parameter, and the first deadline parameter can be determined according to whether the information carried by the message is time-sensitive.
It should be noted that the priority of the deadline queue corresponds to the first deadline parameter. For example, the priority of a deadline queue may be set to the highest when its deadline is 0us; the messages buffered in the deadline queue are then scheduled and sent while its priority is the highest, so that messages are forwarded according to their deadline parameters, traffic micro-bursts are prevented, and the service quality of the network is improved. It is noted that other messages may continue to be buffered into a deadline queue whose priority is not the highest.
It should be noted that the scheduling engine may start a cycle timer to decrement the Deadlines of the deadline queues, i.e., each time the timer expires, the Deadline values of all queues are reduced by the timer's time interval. Note that the time interval of the timer must be greater than or equal to the grant (authorized) time of a deadline queue, and the priority of each deadline queue is adjusted every time the preset time threshold elapses; a deadline queue whose priority is the highest at one moment has the lowest priority at the next. As shown in FIG. 11, queues queue-1 to queue-61 are deadline queues, while the other queues are conventional non-deadline queues. Each deadline queue has its own Deadline attribute, and the preset maximum deadline is 60us. At the initial time (time T0), the Deadlines of all deadline queues are staggered: illustratively, the Deadline of queue-1 is 60us, that of queue-2 is 59us, and that of queue-3 is 58us. Only the Deadline of queue-61 is 0 at this moment, so it has the highest scheduling priority. Assuming the scheduling engine starts a cycle timer with a time threshold of 1us, the timer interval is subtracted from the Deadlines of all deadline queues after each timeout. As shown in the figure, at time T0+1us the timer expires, the Deadline of queue-1 becomes 59us, that of queue-2 is 58us, and that of queue-3 is 57us. At this point the Deadline of queue-61 is restored to the maximum deadline of 60us and it no longer has the highest scheduling priority, while the Deadline of queue-60 becomes 0 and it now has the highest scheduling priority. The grant time of a deadline queue may be set to coincide with the time interval of the cycle timer, i.e., also 1us. That is, when the Deadline of a deadline queue becomes 0, it has a 1us window to send the messages in the queue, during which buffering of new messages into it is prohibited; after the 1us elapses, the cycle timer times out again and the Deadline of another deadline queue becomes 0. Of course, setting the grant time smaller than the cycle timer interval is also entirely feasible.
Further, as shown in fig. 12, queues queue-1 to queue-61 are also deadline queues; the difference from fig. 11 is that the scheduling engine starts a cycle timer with a time threshold of 10us, and the timer interval is subtracted from the Deadlines of all deadline queues after each timeout. As shown in the figure, at time T0+10us the Deadline of queue-1 becomes 50us, that of queue-2 is 40us, and that of queue-3 is 30us. A network node may independently use different cycle timer intervals for different egress ports. The general principle is that if an egress port has a large bandwidth (e.g., 100Gbps), the cycle timer interval (and the grant time of the deadline queues) can be set small (e.g., 1us), because a high-bandwidth link can transmit the required number of bits even in a small time interval; if an egress port has a small bandwidth (e.g., 1Gbps), the cycle timer interval (and grant time) is set large (e.g., 10us), because a low-bandwidth link needs a larger time interval to transmit the required number of bits.
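To make the rotation above concrete, the following is a minimal Python sketch of the cyclic priority adjustment (the names DeadlineQueue and on_timer_tick and the list layout are illustrative assumptions; the patent does not prescribe an implementation): every timer timeout decrements each deadline queue's residual Deadline by the timer interval, the queue that reaches 0 becomes the highest-priority queue for one grant period, and the queue that was at 0 is restored to the maximum deadline.

```python
from collections import deque

MAX_DEADLINE_US = 60   # preset maximum deadline (60us in the example above)
TIMER_INTERVAL_US = 1  # cycle timer interval; >= the grant time of a queue

class DeadlineQueue:
    def __init__(self, deadline_us):
        self.deadline_us = deadline_us   # second deadline parameter
        self.packets = deque()

# At time T0 the Deadlines are staggered: queue-1 = 60us, queue-2 = 59us, ...
queues = [DeadlineQueue(MAX_DEADLINE_US - i) for i in range(MAX_DEADLINE_US + 1)]

def on_timer_tick(queues):
    """Called once per cycle timer timeout (steps S310/S320)."""
    for q in queues:
        if q.deadline_us == 0:
            # Previously highest-priority queue: restore the initial deadline,
            # which makes its priority the lowest of all deadline queues.
            q.deadline_us = MAX_DEADLINE_US
        else:
            # Not highest priority yet: decrement to raise priority step by step.
            q.deadline_us -= TIMER_INTERVAL_US
    # The single queue whose Deadline is now 0 has the highest priority.
    return next(q for q in queues if q.deadline_us == 0)
```

Because the Deadlines stay staggered, each tick hands the highest priority to exactly one queue, matching the fig. 11 rotation.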
Notably, an egress port of a network node can maintain multiple queues, which may include deadline queues and normal queues; a first message carrying a first deadline parameter can be buffered into the corresponding deadline queue according to that parameter, while a first message without a first deadline parameter can be buffered into a normal queue. As shown in fig. 13, the node successively receives 6 messages from three ingress ports, where message 1, message 2, message 3 and message 5 carry corresponding deadlines, and message 4 and message 6 are normal messages. These messages all need to be forwarded to the same egress port according to the forwarding entries. Assuming a forwarding delay of 5us inside the node, they reach the line card where the egress port is located at almost the same time, and the queue state of the egress port is as shown in the figure. Message 1 is allowed to queue within the node for 30-5=25us, so it is buffered in deadline queue-36 (with a Deadline of 25us); similarly, messages 2, 3 and 5 are buffered in deadline queue-46 (Deadline of 15us), queue-41 (Deadline of 20us) and queue-16 (Deadline of 45us), respectively. Messages 4 and 6 are placed in non-deadline queues in the conventional manner.
In addition, in an embodiment, as shown in fig. 2, the deadline queue carries a second deadline parameter, and the step S200 may further include, but is not limited to, step S210.
Step S210, caching the first message into a corresponding deadline queue according to the first deadline parameter and the second deadline parameter; there are a plurality of deadline queues, and different deadline queues correspond to different second deadline parameters.
It should be noted that, according to the first deadline parameter corresponding to the first packet and the second deadline parameter carried by the deadline queue, the first packet is cached in the corresponding deadline queue.
It should be noted that, in the embodiment of the present invention, the deadline of the deadline queue is the second deadline parameter, and the deadline of the message is the first deadline parameter.
Notably, multiple queues can be maintained at a given egress port of the same network node; there can be multiple deadline queues, and the second deadline parameters of different deadline queues differ, so that at each moment only one deadline queue has the highest priority and the message data in the corresponding deadline queue can be scheduled accurately and reliably. Illustratively, as shown in FIG. 11, queues queue-1 to queue-61 are deadline queues: the Deadline of queue-1 is 60us, that of queue-2 is 59us, and that of queue-3 is 58us, and at this moment only the Deadline of queue-61 is 0. With the cycle timer's time threshold set to 1us, after each 1us adjustment period the Deadline of exactly one deadline queue is 0, i.e., only one deadline queue has the highest priority.
In addition, in an embodiment, as shown in fig. 3, the step S210 may further include, but is not limited to, step S211 and step S212.
Step S211, performing difference processing on the first deadline parameter and a preset forwarding delay parameter to obtain a third deadline parameter;
step S212, the third deadline parameter and the second deadline parameter are matched, and the first message is cached into a deadline queue with the second deadline parameter identical to the third deadline parameter.
It should be noted that, first, difference processing is performed on the first deadline parameter and a preset forwarding delay parameter, so as to obtain a third deadline parameter; and then matching the third deadline parameter with the second deadline parameter, and finally caching the first message into a deadline queue with the second deadline parameter identical to the third deadline parameter so as to carry out subsequent dispatch and transmission processing.
Notably, the forwarding process of a message in a network node mainly comprises two parts: in the first part, when the message is received from an ingress port, the forwarding table is queried and the message is delivered to the line card where the egress port is located; in the second part, the message is buffered in a queue of the egress port to wait for transmission. Both parts take time: the first can be called the forwarding delay and corresponds to the forwarding delay parameter, the second can be called the queuing delay and corresponds to the third deadline parameter, and the delay of the message within the node (i.e., the first deadline parameter) is the sum of the forwarding delay and the queuing delay. The forwarding delay is related to the chip implementation and is generally constant, while the queuing delay is unstable.
After the first deadline parameter of the message to be scheduled and forwarded is obtained, the message needs to be delivered to the deadline queue with the corresponding Deadline. It should be noted that the first deadline parameter indicates the maximum residence time allowed for the message in the node, which actually comprises both the forwarding delay and the queuing delay, whereas the Deadline of a deadline queue relates only to the queuing delay. Therefore, when the message is to be buffered in a deadline queue, the forwarding delay parameter should be subtracted from its first deadline parameter, and the message buffered in the deadline queue whose Deadline matches the resulting difference (i.e., the queuing delay allowed for the message in the node).
It should be noted that the forwarding delay of a message in a node is generally a fixed value, so the simplest method is to statically configure a uniform forwarding delay value for the node. Alternatively, the node can dynamically record the time (time-1) at which the message is received from the ingress port and the time (time-2) at which it reaches the line card where the egress port is located; subtracting time-1 from time-2 yields the forwarding delay of the message.
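Steps S211 and S212 can then be sketched as follows (assumed names, reusing the DeadlineQueue structure from the earlier sketch): the third deadline parameter is the first deadline parameter minus the forwarding delay, and the message is buffered into the queue whose residual Deadline equals it.

```python
# Sketch of steps S211-S212 (assumed names): the allowed queuing delay
# (third deadline parameter) is the first deadline parameter minus the node's
# forwarding delay; the message goes into the deadline queue whose Deadline
# (second deadline parameter) matches that value.

FORWARDING_DELAY_US = 5  # statically configured per-node forwarding delay

def enqueue_by_deadline(queues, message, first_deadline_us,
                        forwarding_delay_us=FORWARDING_DELAY_US):
    third_deadline_us = first_deadline_us - forwarding_delay_us
    for q in queues:
        if q.deadline_us == third_deadline_us:  # match second deadline parameter
            q.packets.append(message)
            return q
    raise ValueError("no deadline queue matches the allowed queuing delay")
```

For the fig. 13 example, a message with a 30us deadline and a 5us forwarding delay would land in the queue whose Deadline is 25us.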
In addition, in an embodiment, as shown in fig. 4, the step S210 may further include, but is not limited to, step S213, step S214, and step S215.
Step S213, performing a difference processing on the first deadline parameter and a preset forwarding delay parameter to obtain a fourth deadline parameter;
step S214, performing a difference processing on the fourth deadline parameter and the time threshold to obtain a fifth deadline parameter;
step S215, the fifth deadline parameter and the second deadline parameter are matched, and the first message is cached into a deadline queue with the second deadline parameter identical to the fifth deadline parameter.
It should be noted that, first, performing a difference process on the first deadline parameter and a preset forwarding delay parameter to obtain a fourth deadline parameter; then, performing difference processing on the fourth deadline parameter and the time threshold value to obtain a fifth deadline parameter; and finally, carrying out matching processing on the fifth deadline parameter and the second deadline parameter, and caching the first message into a deadline queue of which the second deadline parameter is the same as the fifth deadline parameter.
It should be noted that the difference between the present embodiment and the above steps S211 and S212 is that after the first deadline parameter is subjected to the difference processing with the preset forwarding delay parameter, the difference processing with the time threshold is further required to be further performed, so as to obtain a fifth deadline parameter, then the fifth deadline parameter is matched with the second deadline parameter, and finally the first message is buffered in a deadline queue with the second deadline parameter being the same as the fifth deadline parameter.
It should be noted that, after the scheduler selects a queue for sending, the buffered messages are dequeued sequentially and sent to the outbound link, which still consumes a certain amount of time (denoted u). In particular, for a scheduling strategy pursuing low delay jitter, the actual residence time of a message at the node becomes the expected deadline of the message plus u, which may exceed the time limit required by the service. To address this, when a message enters the corresponding deadline queue according to its expected Deadline, one timer interval is deducted from its allowed in-node queuing delay, and the message is buffered into the deadline queue whose Deadline matches the updated value. Correspondingly, when the Deadline of a deadline queue is about to be decremented to 0, i.e., has been decremented to one timer interval, the scheduling priority of the queue is set to the highest, it immediately obtains the scheduling opportunity, and it is still allowed to buffer new messages, which are sent immediately to the egress port. In addition, once the Deadline of a deadline queue is decremented to 0, it is immediately restored to the preset maximum deadline and continues to decrement with time in the next round.
Illustratively, as shown in FIG. 14, queues queue-1 to queue-60 are deadline queues, and the other queues are conventional non-deadline queues. Each deadline queue has its Deadline attribute, and the preset maximum deadline is 60us. At the initial time (time T0), the Deadlines of all deadline queues are staggered: for example, queue-1 is 60us, queue-2 is 59us, queue-3 is 58us, and so on. At this point the Deadline of queue-60 is 1, so it has the highest scheduling priority. Assuming the scheduling engine starts a cycle timer with a time interval of 1us, the timer interval is subtracted from the Deadlines of all deadline queues after each timeout. As shown in the figure, at time T0+1us the Deadline of queue-1 becomes 59us, that of queue-2 is 58us, that of queue-3 is 57us, and so on. At this point the Deadline of queue-60 decrements to 0 but is immediately restored to the maximum deadline of 60us and no longer has the highest scheduling priority, while the Deadline of queue-59 becomes 1 and it now has the highest scheduling priority. That is, queue-60 is allowed to send messages during T0 to T0+1us, and queue-59 during T0+1us to T0+2us.
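Under the same assumptions as the earlier sketches, the variant of steps S213 to S215 only changes the matching value by additionally deducting one cycle timer interval:

```python
# Sketch of steps S213-S215 (assumed names): additionally deduct the cycle
# timer interval so that the dequeue-and-transmit time u does not push the
# actual residence time past the message's expected deadline.
def enqueue_by_deadline_v2(queues, message, first_deadline_us,
                           forwarding_delay_us=FORWARDING_DELAY_US,
                           timer_interval_us=TIMER_INTERVAL_US):
    fifth_deadline_us = (first_deadline_us - forwarding_delay_us
                         - timer_interval_us)
    for q in queues:
        if q.deadline_us == fifth_deadline_us:
            q.packets.append(message)
            return q
    raise ValueError("no deadline queue matches the adjusted queuing delay")
```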
In addition, in an embodiment, as shown in fig. 5, the step S300 may further include, but is not limited to, step S310.
In step S310, when the priority of the deadline queue at the previous time is not the highest, the second deadline parameter and the time threshold are subjected to a difference process to update the second deadline parameter, so as to gradually increase the priority of the deadline queue.
It should be noted that, during the cyclic adjustment, if the priority of the deadline queue at the previous moment is not the highest, difference processing is performed on the second deadline parameter and the time threshold to update the second deadline parameter, raising the priority of the deadline queue step by step.
In addition, in an embodiment, as shown in fig. 6, the step S300 may further include, but is not limited to, step S320.
In step S320, when the priority of the deadline queue at the previous moment is the highest, the second deadline parameter is updated to a preset initial deadline parameter, where the priority of the deadline queue corresponding to the initial deadline parameter is the lowest in all deadline queues.
It should be noted that, during the cyclic adjustment, if the priority of the deadline queue at the previous moment is the highest, the second deadline parameter needs to be updated to the preset initial deadline parameter, and the priority of the deadline queue corresponding to the initial deadline parameter is the lowest among all deadline queues.
Illustratively, as shown in FIG. 11, queues queue-1 to queue-61 are deadline queues, and the preset maximum deadline is 60us. At the initial time (time T0) the Deadlines of all deadline queues are staggered: illustratively, the Deadline of queue-1 is 60us, that of queue-2 is 59us, and that of queue-3 is 58us; only the Deadline of queue-61 is 0, so it has the highest scheduling priority. Assuming the scheduling engine starts a cycle timer with a time threshold of 1us, the timer interval is subtracted from the Deadlines of all deadline queues after each timeout: at time T0+1us the timer expires, the Deadline of queue-1 becomes 59us, that of queue-2 is 58us, and that of queue-3 is 57us. At this point the Deadline of queue-61 is restored to the maximum deadline of 60us and it no longer has the highest scheduling priority, while the Deadline of queue-60 becomes 0 and it now has the highest scheduling priority.
In addition, in an embodiment, as shown in fig. 7, the first message carries a first deadline parameter, and the step S100 may further include, but is not limited to, step S110.
Step S110, a first deadline parameter is obtained from the first message.
It should be noted that, when the first packet carries the first deadline parameter, the first deadline parameter can be directly obtained from the first packet, and the obtaining process is simple, convenient and quick.
It should be noted that, of all the traffic carried in the network, only part is time-sensitive. These time-sensitive traffic flows may use the message scheduling method of the embodiments of the present invention. Illustratively, the deadline parameter of a message may be obtained as follows: when the ingress node in the network encapsulates a deterministic traffic flow, it can explicitly insert the deadline into the encapsulated message according to the traffic SLA (Service Level Agreement), and an intermediate node can obtain the deadline directly from the message upon receiving it. In general, only a single deadline needs to be inserted, applicable to all nodes along the path; alternatively, a stack of multiple deadlines can be inserted, one for each node. Note that the deadline here is a time offset indicating the maximum residence duration of a message within a node, i.e., the maximum duration the message is allowed to stay in the node from the moment the node receives it from an ingress port or generates it locally. There is no limitation on which existing or new field of the message carries the deadline information; for example, one possible way is to carry it in the Hop-by-Hop extension header of an IPv6 message, or in the Source Address field of an IPv6 message.
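Since the patent leaves open which field carries the deadline, the following sketch assumes a hypothetical IPv6 Hop-by-Hop option; the option type 0x3E and the 2-byte microsecond encoding are illustrative assumptions, not a standardized format.

```python
import struct

HYPOTHETICAL_DEADLINE_OPT = 0x3E  # assumed option type, purely illustrative

def deadline_from_hbh_options(options: bytes):
    """Walk TLV-encoded IPv6 Hop-by-Hop options and return the deadline in
    microseconds, or None if the assumed deadline option is absent."""
    i = 0
    while i < len(options):
        opt_type = options[i]
        if opt_type == 0:              # Pad1: a single byte, no length field
            i += 1
            continue
        if i + 2 > len(options):       # truncated option header
            break
        opt_len = options[i + 1]
        value = options[i + 2 : i + 2 + opt_len]
        if (opt_type == HYPOTHETICAL_DEADLINE_OPT
                and opt_len == 2 and len(value) == 2):
            return struct.unpack("!H", value)[0]  # deadline as time offset, us
        i += 2 + opt_len
    return None
```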
In addition, in an embodiment, as shown in fig. 8, the step S100 may further include, but is not limited to, step S120.
Step S120, searching and matching to a preset first routing table item according to the first message, and acquiring a first deadline parameter from forwarding information carried by the first routing table item.
It should be noted that, according to the first message, a preset first routing table entry is found by lookup and matching, and the first deadline parameter is obtained from the forwarding information carried by that entry. Illustratively, each node in the network may maintain deterministic routing entries, and when a message hits a deterministic routing entry, the deadline is obtained from the forwarding information contained in that entry.
In addition, in an embodiment, as shown in fig. 9, the step S100 may further include, but is not limited to, step S130.
Step S130, searching and matching a preset first policy table entry according to the first message, and acquiring the first deadline parameter from information carried by the first policy table entry.
It should be noted that the first policy table entry is found by lookup and matching according to the first message, and the first deadline parameter is obtained from the information it carries. Illustratively, local policies are configured on each node in the network, and the corresponding deadline is then set according to the matched message characteristics, as in the sketch below.
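A minimal sketch of this policy-based lookup (the table layout, the DSCP/prefix match keys and the exact-match semantics are illustrative assumptions, not the patent's format):

```python
POLICY_TABLE = [  # illustrative entries only
    {"match": {"dscp": 46}, "deadline_us": 25},
    {"match": {"dst_prefix": "2001:db8::/32"}, "deadline_us": 30},
]

def deadline_from_policy(msg_fields):
    """Return the deadline of the first policy entry whose match fields all
    equal the message's fields (exact match is a simplification), else None."""
    for entry in POLICY_TABLE:
        if all(msg_fields.get(k) == v for k, v in entry["match"].items()):
            return entry["deadline_us"]
    return None
```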
In addition, in an embodiment, as shown in fig. 10, step S500 may be further included after step S400 described above, but is not limited thereto.
Step S500, when the scheduling and sending of the deadline queue where the first message is located has finished and the preset authorized time has not ended, performing scheduling and sending processing on the message data buffered in port queues of non-highest priority; wherein the authorized time represents the longest duration allowed for scheduling and sending message data, and the port queues are all the queues maintained by the same egress port.
It should be noted that, when the scheduling and sending of the deadline queue where the first message is located has finished and the preset authorized time has not ended, deadline queues of non-highest priority may participate in queue scheduling and their buffered message data is scheduled and sent; this is well suited to services pursuing low delay. Alternatively, under the same condition, deadline queues of non-highest priority do not participate in queue scheduling and sending their buffered message data is prohibited; this is well suited to services requiring low delay jitter. Note that all the queues maintained by the same egress port include deadline queues and non-deadline queues; whether non-highest-priority deadline queues participate in queue scheduling can be chosen according to a preset policy.
It is noted that the grant time, the preset time threshold of the timer, and the initial deadline parameter may all be set according to the actual capabilities of the network.
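The two policies in step S500 can be sketched as follows (assumed names; per-packet transmission-time accounting against the authorized time is omitted for brevity):

```python
# Sketch of step S500's policy choice (assumed names): after draining the
# highest-priority deadline queue within a grant window, either also schedule
# the other port queues (low-delay policy, deterministic upper bound) or leave
# them queued (low-jitter policy, deterministic delay).
def serve_grant_window(highest_q, other_port_queues, low_delay_policy):
    sent = []
    while highest_q.packets:                 # always drain the 0-Deadline queue
        sent.append(highest_q.packets.popleft())
    if low_delay_policy:                     # remaining grant time serves others
        for q in other_port_queues:
            while q.packets:
                sent.append(q.packets.popleft())
    # else: keep other queues buffered until their own Deadline reaches 0
    return sent
```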
In order to more clearly illustrate the flow of the message scheduling method provided by the embodiment of the present invention, a specific example is described below.
As shown in fig. 15, a traffic engineering path (TE path) is established in the network, passing through S-A-C-E-D, with a deterministic delay of 160us. Each node along the path adopts Deadline-based message scheduling to provide a deterministic delay target; this patent refers to such a TE path as a deterministic-delay TE path. The delay parameters of each link outside the nodes are marked in the figure; for example, the minimum delay outside the nodes of the link between nodes S and A is 20us. Then, of the 160us delay of the entire TE path, 60us is the accumulated link delay outside the nodes and 100us is the accumulated intra-node delay. The accumulated intra-node delay is shared evenly by nodes S, A, C and E, i.e., the deadline of the message in each node is 25us.
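As a quick check of this budget, with the values taken from fig. 15:

```python
# Budget check for the fig. 15 example: the per-link delays outside the nodes
# sum to 60us, leaving 100us of intra-node delay shared evenly by the four
# scheduling nodes S, A, C and E.
PATH_DELAY_US = 160
LINK_DELAYS_US = {"S-A": 20, "A-C": 10, "C-E": 10, "E-D": 20}
SCHEDULING_NODES = ["S", "A", "C", "E"]

in_node_budget_us = PATH_DELAY_US - sum(LINK_DELAYS_US.values())   # 100us
per_node_deadline_us = in_node_budget_us // len(SCHEDULING_NODES)  # 25us each
```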
The TE path described above may be instantiated as an SR policy, or SR tunnel, or RSVP-TE tunnel.
When encapsulating the message forwarded along the TE path, the head node S may carry a single deadline of 25us in the message, which is used for Deadline-based queue scheduling at every node along the TE path. Assume the forwarding delay of each node is 5us. Since the TE path is used to provide a deterministic delay target, nodes along the path adopt the policy that a deadline queue whose Deadline has not decremented to 0 does not participate in queue scheduling.
At the initial time T0, node S encapsulates the message, then spends 5us forwarding it to the line card where the egress port (link-S-A) is located, and buffers it into the deadline queue with a Deadline of 20us. After 20us the message is sent out from the egress port (link-S-A), and after a link transmission delay of 20us it reaches node A.
At time T0+45us, node A receives the message, then spends 5us forwarding it to the line card where the egress port (link-A-C) is located, and buffers it into the deadline queue with a Deadline of 20us. After 20us the message is sent out from the egress port (link-A-C), and after a link transmission delay of 10us it reaches node C.
At time T0+80us, node C receives the message, then spends 5us forwarding it to the line card where the egress port (link-C-E) is located, and buffers it into the deadline queue with a Deadline of 20us. After 20us the message is sent out from the egress port (link-C-E), and after a link transmission delay of 10us it reaches node E.
At time T0+115us, node E receives the message, then spends 5us forwarding it to the line card where the egress port (link-E-D) is located, and buffers it into the deadline queue with a Deadline of 20us. After 20us the message is sent out from the egress port (link-E-D), and after a link transmission delay of 20us it reaches node D.
At time T0+160us, node D receives the message.
In another embodiment, as shown in FIG. 15, a traffic engineering path (TE path) passing through S-A-C-E-D is established, with a deterministic delay upper bound of 160us. The difference from the previous embodiment is that the actual forwarding of a message in this example may take less than 160us. Since the TE path is used to provide a deterministic delay upper-bound target, nodes along the path adopt the policy that a deadline queue whose Deadline has not decremented to 0 may participate in queue scheduling.
At the initial time T0, node S encapsulates the message, then spends 5us forwarding it to the line card where the egress port (link-S-A) is located, and buffers it into the deadline queue with a Deadline of 20us. The message is then sent out from the egress port (link-S-A) after at most 20us, and reaches node A after a link transmission delay of 20us.
At some moment between T0+25us and T0+45us, node A receives the message, then spends 5us forwarding it to the line card where the egress port (link-A-C) is located, and buffers it into the deadline queue with a Deadline of 20us. The message is then sent out from the egress port (link-A-C) after at most 20us, and reaches node C after a link transmission delay of 10us.
At some moment between T0+40us and T0+80us, node C receives the message, then spends 5us forwarding it to the line card where the egress port (link-C-E) is located, and buffers it into the deadline queue with a Deadline of 20us. The message is then sent out from the egress port (link-C-E) after at most 20us, and reaches node E after a link transmission delay of 10us.
At some moment between T0+55us and T0+115us, node E receives the message, then spends 5us forwarding it to the line card where the egress port (link-E-D) is located, and buffers it into the deadline queue with a Deadline of 20us. The message is then sent out from the egress port (link-E-D) after at most 20us, and reaches node D after a link transmission delay of 20us.
At some moment between T0+80us and T0+160us, node D receives the message.
As shown in fig. 16, the IGP (Interior Gateway Protocol) domain contains 5 nodes interconnected by bidirectional links. The delay parameters of each link outside the nodes are shown in the figure; for example, the minimum delay outside the nodes of the link between nodes R1 and R2 is 10us. Assume the links of all nodes in the network are configured with consistent deadline message scheduling parameters and have consistent intra-node delay and delay jitter properties, e.g., a configured intra-node delay of 30us and an intra-node delay jitter of 0. For destination node R5, each node in the network creates a deterministic routing forwarding entry to R5, and the egress port in the forwarding information it contains provides the message with the deadline information, i.e., 30us. In addition, since the delay jitter is 0, each node may adopt the policy that a queue whose Deadline has not decremented to 0 does not participate in queue scheduling. The specific process is not described again.
In addition, as shown in fig. 17, an embodiment of the present invention further provides a network device 600, where the network device 600 includes: memory 620, processor 610, and computer programs stored on memory 620 and executable on processor 610.
The processor 610 and the memory 620 may be connected by a bus or other means.
It should be noted that, the network device 600 in this embodiment and the packet scheduling method in the foregoing embodiments belong to the same inventive concept, so that these embodiments have the same implementation principle and technical effects, and will not be described in detail herein.
The non-transitory software program and instructions required to implement the message scheduling method of the above embodiments are stored in the memory 620, and when executed by the processor 610, the message scheduling method of the above embodiments is performed, for example, the method steps S100 to S400 in fig. 1, the method step S210 in fig. 2, the method steps S211 to S212 in fig. 3, the method steps S213 to S215 in fig. 4, the method step S310 in fig. 5, the method step S320 in fig. 6, the method step S110 in fig. 7, the method step S120 in fig. 8, the method step S130 in fig. 9, and the method step S500 in fig. 10 described above are performed.
Furthermore, an embodiment of the present invention provides a computer readable storage medium storing computer executable instructions, where the computer executable instructions are executed by a processor 610, for example, by the processor 610 in the embodiment of the network device 600, where the processor 610 is enabled to perform the packet scheduling method in the embodiment, for example, the method steps S100 to S400 in fig. 1, the method steps S210 in fig. 2, the method steps S211 to S212 in fig. 3, the method steps S213 to S215 in fig. 4, the method step S310 in fig. 5, the method step S320 in fig. 6, the method step S110 in fig. 7, the method step S120 in fig. 8, the method step S130 in fig. 9, and the method step S500 in fig. 10 described above.
Those of ordinary skill in the art will appreciate that all or some of the steps, systems, and methods disclosed above may be implemented as software, firmware, hardware, and suitable combinations thereof. Some or all of the physical components may be implemented as software executed by a processor, such as a central processing unit, digital signal processor, or microprocessor, or as hardware, or as an integrated circuit, such as an application specific integrated circuit. Such software may be distributed on computer readable media, which may include computer storage media (or non-transitory media) and communication media (or transitory media). The term computer storage media includes both volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information such as computer readable instructions, data structures, program modules or other data, as known to those skilled in the art. Computer storage media includes, but is not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital Versatile Disks (DVD) or other optical disk storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to store the desired information and which can be accessed by a computer. Furthermore, as is well known to those of ordinary skill in the art, communication media typically embodies computer readable instructions, data structures, program modules or other data in a modulated data signal such as a carrier wave or other transport mechanism and includes any information delivery media.
While the preferred embodiment of the present invention has been described in detail, the present invention is not limited to the above embodiment, and various equivalent modifications and substitutions can be made by those skilled in the art without departing from the spirit of the present invention, and these equivalent modifications and substitutions are intended to be included in the scope of the present invention as defined in the appended claims.

Claims (10)

1. A message scheduling method comprises the following steps:
acquiring a first message, and acquiring a corresponding first deadline parameter according to the first message;
caching the first message into a corresponding deadline queue according to the first deadline parameter, wherein the priority of the deadline queue corresponds to the first deadline parameter;
circularly adjusting the priority of the deadline queue according to a preset time threshold;
and under the condition that the priority of the deadline queue is the highest priority, scheduling and sending the first message.
2. The message scheduling method according to claim 1, wherein the deadline queue carries a second deadline parameter, and the caching the first message into the corresponding deadline queue according to the first deadline parameter comprises:
according to the first deadline parameter and the second deadline parameter, caching the first message into a corresponding deadline queue; wherein there are a plurality of deadline queues, and different deadline queues correspond to different second deadline parameters.
3. The message scheduling method according to claim 2, wherein the caching the first message into the corresponding deadline queue according to the first deadline parameter and the second deadline parameter comprises:
performing difference processing on the first deadline parameter and a preset forwarding delay parameter to obtain a third deadline parameter;
and carrying out matching processing on the third deadline parameter and the second deadline parameter, and caching the first message into the deadline queue with the second deadline parameter identical to the third deadline parameter.
4. The message scheduling method according to claim 2, wherein the caching the first message into the corresponding deadline queue according to the first deadline parameter and the second deadline parameter comprises:
performing difference processing on the first deadline parameter and a preset forwarding delay parameter to obtain a fourth deadline parameter;
performing difference processing on the fourth deadline parameter and the time threshold to obtain a fifth deadline parameter;
and carrying out matching processing on the fifth deadline parameter and the second deadline parameter, and caching the first message into the deadline queue with the second deadline parameter identical to the fifth deadline parameter.
5. The message scheduling method according to claim 2, wherein the cyclically adjusting the priority of the deadline queue according to a preset time threshold comprises one of:
when the priority of the deadline queue is not the highest at the previous moment, performing difference processing on the second deadline parameter and the time threshold to update the second deadline parameter so as to gradually increase the priority of the deadline queue;
and under the condition that the priority of the deadline queue is highest at the previous moment, updating the second deadline parameter into a preset initial deadline parameter, wherein the priority of the deadline queue corresponding to the initial deadline parameter is lowest in all deadline queues.
6. The message scheduling method according to claim 1, wherein the first message carries the first deadline parameter, and the obtaining the corresponding first deadline parameter according to the first message comprises:
and acquiring the first deadline parameter from the first message.
7. The message scheduling method according to claim 1, wherein the obtaining the corresponding first deadline parameter according to the first message comprises:
searching and matching a preset first routing table item according to the first message, and acquiring the first deadline parameter from forwarding information carried by the first routing table item;
or searching and matching a preset first policy table according to the first message, and acquiring the first deadline parameter from information carried by the first policy table.
8. The message scheduling method according to claim 1, wherein, when the priority of the deadline queue is the highest priority, the method further comprises:
under the condition that the dispatching and sending of the deadline queue of the first message is finished and the preset authorized time is not finished, dispatching and sending processing is carried out on the message data cached in the port queue of the non-highest priority;
wherein the authorized time represents the longest duration allowed for scheduling and sending message data; and the port queues are all the queues maintained by the same egress port.
9. A network device, comprising:
at least one processor;
at least one memory for storing at least one program;
the message scheduling method according to any one of claims 1 to 8, when at least one of said programs is executed by at least one of said processors.
10. A computer readable storage medium storing computer executable instructions for performing the message scheduling method of any one of claims 1 to 8.
CN202111523786.1A 2021-12-14 2021-12-14 Message scheduling method, network equipment and computer readable storage medium Pending CN116264567A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202111523786.1A CN116264567A (en) 2021-12-14 2021-12-14 Message scheduling method, network equipment and computer readable storage medium
PCT/CN2022/115920 WO2023109188A1 (en) 2021-12-14 2022-08-30 Message scheduling method, and network device and computer-readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111523786.1A CN116264567A (en) 2021-12-14 2021-12-14 Message scheduling method, network equipment and computer readable storage medium

Publications (1)

Publication Number Publication Date
CN116264567A

Family

ID=86723455

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111523786.1A Pending CN116264567A (en) 2021-12-14 2021-12-14 Message scheduling method, network equipment and computer readable storage medium

Country Status (2)

Country Link
CN (1) CN116264567A (en)
WO (1) WO2023109188A1 (en)

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140185628A1 (en) * 2012-12-28 2014-07-03 Broadcom Corporation Deadline aware queue management
CN106559354A (en) * 2015-09-28 2017-04-05 中兴通讯股份有限公司 A kind of method and device for preventing CPU packet congestions
US10715437B2 (en) * 2018-07-27 2020-07-14 Intel Corporation Deadline driven packet prioritization for IP networks
CN109088829B (en) * 2018-09-20 2022-09-20 南方科技大学 Data scheduling method, device, storage medium and equipment
CN113382442B (en) * 2020-03-09 2023-01-13 中国移动通信有限公司研究院 Message transmission method, device, network node and storage medium

Also Published As

Publication number Publication date
WO2023109188A1 (en) 2023-06-22

Similar Documents

Publication Publication Date Title
US6876952B1 (en) Methods and apparatus for maintaining queues
US8638664B2 (en) Shared weighted fair queuing (WFQ) shaper
US9185047B2 (en) Hierarchical profiled scheduling and shaping
US7613114B2 (en) Packet scheduling apparatus
US6839767B1 (en) Admission control for aggregate data flows based on a threshold adjusted according to the frequency of traffic congestion notification
US7161907B2 (en) System and method for dynamic rate flow control
US20210083970A1 (en) Packet Processing Method and Apparatus
WO2019157867A1 (en) Method for controlling traffic in packet network, and device
EP1345365A2 (en) Weighted fair queuing (WFQ) shaper
US7843825B2 (en) Method and system for packet rate shaping
WO2020104005A1 (en) Signalling of dejittering buffer capabilities for tsn integration
JP2009253768A (en) Packet relaying apparatus, packet relaying method, and packet relaying program
US20030099250A1 (en) Queue scheduling mechanism in a data packet transmission system
US8660001B2 (en) Method and apparatus for providing per-subscriber-aware-flow QoS
Dong et al. High-precision end-to-end latency guarantees using packet wash
CN116264567A (en) Message scheduling method, network equipment and computer readable storage medium
CN113765796B (en) Flow forwarding control method and device
CN114500520A (en) Data transmission method, device and communication node
WO2023130744A1 (en) Message scheduling method, network device, storage medium, and computer program product
US20070280685A1 (en) Method of Optimising Connection Set-Up Times Between Nodes in a Centrally Controlled Network
WO2024055675A1 (en) Message scheduling method, network device, storage medium, and computer program product
US11558310B2 (en) Low-latency delivery of in-band telemetry data
Liu et al. Deployment of Asynchronous Traffic Shapers in Data Center Networks
CN115865810A (en) Credit value flow scheduling system and method in time-sensitive network
CN115883457A (en) Communication method and routing equipment

Legal Events

Date Code Title Description
PB01 Publication