WO2024036476A1 - Message forwarding method and apparatus (一种报文转发方法及装置) - Google Patents

Message forwarding method and apparatus (一种报文转发方法及装置)

Info

Publication number
WO2024036476A1
Authority
WO
WIPO (PCT)
Prior art keywords
queue
scheduling
message
sequence
rate
Prior art date
Application number
PCT/CN2022/112753
Other languages
English (en)
French (fr)
Inventor
王玮
Original Assignee
New H3C Technologies Co., Ltd. (新华三技术有限公司)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by New H3C Technologies Co., Ltd. (新华三技术有限公司)
Priority to PCT/CN2022/112753 priority Critical patent/WO2024036476A1/zh
Priority to CN202280002800.XA priority patent/CN117897936A/zh
Publication of WO2024036476A1 publication Critical patent/WO2024036476A1/zh

Links

Images

Classifications

    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04L - TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 12/00 - Data switching networks
    • H04L 12/66 - Arrangements for connecting between networks having differing types of switching systems, e.g. gateways
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04L - TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 47/00 - Traffic control in data switching networks
    • H04L 47/50 - Queue scheduling
    • H04L 47/56 - Queue scheduling implementing delay-aware scheduling

Definitions

  • the present application relates to the field of network communication technology, and in particular, to a message forwarding method and device.
  • a deterministic network refers to a network that provides deterministic service guarantee capabilities for the services it carries. It can guarantee deterministic delay, delay jitter, packet loss rate and other indicators of a service. Deterministic network technology is a new type of Quality of Service (QoS) guarantee technology.
  • QoS Quality of Service
  • deterministic networks can be implemented based on the Cyclic Specific Queuing and Forwarding (CSQF) mechanism, and the Software Defined Network (SDN) controller can plan the forwarding paths of deterministic service packets in the deterministic network and specify the CSQF forwarding resources of each hop network device, so that each network device forwards deterministic service messages according to the specified CSQF forwarding resources.
  • CSQF Cyclic Specific Queuing and Forwarding
  • SDN Software Defined Network
  • the current forwarding technology of deterministic networks is not mature enough, and the transmission rate requirements of deterministic services span a wide range.
  • the minimum transmission rate requirement can be less than 100Mbps, while the maximum transmission rate requirement can be greater than 100Gbps.
  • the current CSQF mechanism cannot forward deterministic service packets whose transmission rates span such a wide range.
  • the purpose of the embodiments of this application is to provide a message forwarding method and device to realize forwarding of deterministic service messages whose transmission rates span a wide range.
  • the specific technical solutions are as follows:
  • embodiments of the present application provide a message forwarding method, the method is applied to a first network device, and the method includes:
  • the first queue sequence includes a first number of periodically consecutive scheduling queues, where the first number is the ratio between the outbound interface rate of the first network device and the minimum rate of the inbound interfaces of the first network device, and the outbound interface rate is the rate of the outbound interface used to forward the first message;
  • embodiments of the present application provide a message forwarding device, the device is applied to a first network device, and the device includes:
  • a receiving module configured to receive the first message from the user-side device
  • a caching module configured to cache the first message into a first scheduling queue, in a first queue sequence, corresponding to the deterministic flow to which the first message belongs, where the first queue sequence includes a first number of periodically consecutive scheduling queues, the first number is the ratio between the outbound interface rate of the first network device and the minimum rate of the inbound interfaces of the first network device, and the outbound interface rate is the rate of the outbound interface used to forward the first message;
  • a forwarding module configured to forward messages in the first scheduling queue to the second network device according to the scheduling cycle of the first scheduling queue.
  • embodiments of the present application provide a network device, where the network device includes:
  • a processor and a machine-readable storage medium that stores machine-executable instructions executable by the processor; the machine-executable instructions cause the processor to perform the following steps:
  • the first queue sequence includes a first number of periodically consecutive scheduling queues, where the first number is the ratio between the outbound interface rate of the first network device and the minimum rate of the inbound interfaces of the first network device, and the outbound interface rate is the rate of the outbound interface used to forward the first message;
  • the transceiver forwards the message in the first scheduling queue to the second network device according to the scheduling cycle of the first scheduling queue.
  • embodiments of the present application provide a machine-readable storage medium storing machine-executable instructions that, when called and executed by a processor, cause the processor to implement the method steps described in the first aspect above.
  • embodiments of the present application provide a computer program product which, when executed, causes a processor to implement the method steps described in the first aspect.
  • the first network device can cache the first message into the first scheduling queue, in the first queue sequence, corresponding to the deterministic flow to which the first message belongs, and forward the message in the first scheduling queue to the second network device according to the scheduling cycle of the first scheduling queue.
  • the first queue sequence includes a first number of periodically consecutive scheduling queues, where the first number is the ratio of the rate of the outbound interface of the first network device used to forward the first message to the minimum rate of the inbound interfaces of the first network device; that is, the number of scheduling queues depends on the minimum rate of the inbound interfaces.
  • this allows each inbound interface of the first network device to simultaneously receive packets of each deterministic flow while a corresponding scheduling queue is available to cache them.
  • the messages of each deterministic flow can be cached in the scheduling queue corresponding to that flow and sent out in the scheduling cycle of the respective scheduling queue; deterministic transmission of each deterministic flow can therefore be achieved even when the transmission rates of the deterministic flows span a wide range, which better meets the needs of deterministic services.
  • Figure 1 is a schematic diagram of a deterministic flow forwarding mechanism provided by an embodiment of the present application
  • Figure 2 is a schematic diagram of the network architecture of a deterministic network provided by an embodiment of the present application
  • Figure 3 is a schematic flow chart of a message forwarding method provided by an embodiment of the present application.
  • Figure 4 is a schematic diagram of deterministic stream transmission provided by an embodiment of the present application.
  • Figure 5 is a schematic flow chart of another message forwarding method provided by an embodiment of the present application.
  • Figure 6 is a schematic structural diagram of a message forwarding device provided by an embodiment of the present application.
  • Figure 7 is a schematic structural diagram of a network device provided by an embodiment of the present application.
  • Each network device in the deterministic network divides each cycle T into multiple consecutive small cycles of the same duration. For example, T is divided into 4 small cycles, namely Cycle 0, Cycle 1, Cycle 2 and Cycle 3. A given deterministic flow is forwarded only within its specified small cycle. For example, deterministic flow 0 is forwarded in Cycle 0 of each cycle T, deterministic flow 1 is forwarded in Cycle 1 of each cycle T, deterministic flow 2 is forwarded in Cycle 2 of each cycle T, and deterministic flow 3 is forwarded in Cycle 3 of each cycle T, so that the delay jitter of the network device is limited to T, thereby achieving bounded delay jitter.
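The small-cycle mapping above can be sketched as follows; the fixed flow-to-cycle assignment and the 4-cycle split are illustrative assumptions, not part of the claimed method.

```python
# Sketch of the cyclic small-cycle forwarding rule: cycle T is divided
# into 4 equal small cycles, and each deterministic flow is forwarded
# only within its assigned small cycle.
NUM_SMALL_CYCLES = 4

def small_cycle_for(flow_id: int) -> int:
    """Flow i is forwarded in Cycle i (illustrative fixed mapping)."""
    return flow_id % NUM_SMALL_CYCLES

def may_forward(flow_id: int, time_in_cycle: float, T: float) -> bool:
    """True if the current offset into cycle T falls in the flow's small cycle."""
    small = T / NUM_SMALL_CYCLES
    return int(time_in_cycle // small) == small_cycle_for(flow_id)
```

With T = 4.0, flow 1 may send only during the second quarter of each cycle.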
  • the jitter of each network device on the forwarding path of a deterministic flow will not increase the delay jitter at the next network device; that is, the jitter is independent of the number of network devices on the forwarding path. However, an increase in the number of network devices increases the total delay of packets on the forwarding path.
  • After receiving a message, each network device sends the message within the preset period corresponding to the deterministic flow to which the message belongs; that is, each network device is pre-configured with a mapping between deterministic flows and periods.
  • X device sends a packet in period 0, and the packet is transmitted on the link between X device and Y device.
  • After receiving the message, the Y device sends the message in cycle 2, and the message is then transmitted on the link between the Y device and the Z device.
  • After receiving the message, the Z device sends the message within its own cycle 1, and the message is then transmitted on the link between the Z device and the W device, where W can receive it.
  • Figure 1 exemplarily shows a situation where the clocks between network devices are slightly different.
  • Figure 2 is a schematic diagram of the network architecture of a deterministic network.
  • HMI Human Machine Interface
  • robot mechanical equipment
  • P Provider equipment
  • the HMI and robot in Figure 2 are both user-side devices.
  • Figure 2 exemplarily shows PE1, PE2, and P1 to P4, and the number of each device in actual implementation is not limited to this.
  • PE equipment is used to implement packet forwarding between user-side equipment and network-side equipment in a deterministic network.
  • the SDN controller can pre-plan the SRv6 forwarding path on the PE device for packets entering the deterministic network from user-side devices, and plan forwarding resources for each hop network device on the forwarding path, so that each network device forwards deterministic messages according to the designated resources.
  • the packets entering the deterministic network from the user-side device refer to the IP packets with deterministic business requirements encapsulated through Ethernet and sent by the user-side device, that is, the IP packets of deterministic flow.
  • Deterministic flows are delay-sensitive service flows, while non-deterministic flows are non-delay-sensitive service flows.
  • non-deterministic flows can be flows that are suitable for the best effort (Best Effort) forwarding policy.
  • the PE device has a dedicated user-side interface for deterministic flows, which is not mixed with non-deterministic flows. If the user-side interface needs to be used by a mixture of deterministic flows and non-deterministic flows, the deterministic flows and non-deterministic flows can be distinguished through Time Sensitive Network (TSN) technology.
  • TSN Time Sensitive Network
  • After the PE device receives a message sent by a user-side device through the user-side interface, it forwards the message to the next network device through the network-side interface according to the SRv6 forwarding path planned by the SDN controller and the specified scheduling period, so that the message is transported in the deterministic network.
  • SRv6 messages encapsulated over Ethernet are transmitted in the deterministic network under a time synchronization mechanism. That is to say, after the PE device receives a message sent by a user-side device, it encapsulates the message into an SRv6 message, and the time of each network device in the deterministic network is synchronized.
  • the network-side interface of the PE device can also receive SRv6 packets forwarded by P devices. The PE device transmits a received SRv6 packet to the destination user-side interface according to the SRv6 forwarding path, and the user-side interface sends the message to the user-side device according to a first-in, first-out scheduling mechanism.
  • the P device is mainly responsible for forwarding packets between network-side devices.
  • the P device forwards packets received on its incoming interface according to their SRv6 forwarding paths, using a pre-specified scheduling cycle.
  • the first network device may be a PE device in the deterministic network; as shown in Figure 3, the method includes:
  • the first queue sequence includes a first number of periodically consecutive scheduling queues, the first number is the ratio between the outbound interface rate of the first network device and the minimum rate of the inbound interfaces of the first network device, and the outbound interface rate is the rate of the outbound interface that forwards the first packet.
  • the period corresponding to the first queue sequence may be T, and the scheduling period of each scheduling queue included in the first queue sequence is a small period included in the period T.
  • the outbound interface of the first network device refers to the network-side outbound interface of the first network device.
  • the inbound interface of the first network device refers to the user-side inbound interface of the first network device; that is, the first network device receives messages from user-side devices on the user-side inbound interface and can forward them through the network-side outbound interface.
  • the first network device may have multiple inbound interfaces, and the access rate of each inbound interface may be different.
  • the minimum rate of the inbound interfaces is the minimum rate among the access rates of the multiple inbound interfaces of the first network device.
  • the first network device is a device in a deterministic network.
  • the deterministic flow forwarding mechanism of the deterministic network can be used to send randomly arriving messages of each deterministic flow in a fixed scheduling period. Therefore, the number of scheduling queues in the first queue sequence needs to meet the following condition: when messages of all deterministic flows that the first network device needs to forward arrive at the same time, the first queue sequence must contain enough scheduling queues to cache the messages of each deterministic flow, ensuring that the messages of each deterministic flow can each be cached in a scheduling queue. To satisfy this condition, the number of scheduling queues is the ratio of the outbound interface rate of the first network device to the minimum rate of its inbound interfaces.
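The condition above fixes the queue count as the ratio of the outbound rate to the minimum inbound rate. A minimal sketch, with rates in bits per second; rounding up for non-integer ratios is an assumption, not stated in the source:

```python
import math

def num_scheduling_queues(out_rate_bps: float, min_in_rate_bps: float) -> int:
    """First number: outbound interface rate divided by the minimum
    inbound interface rate (ceiling assumed for non-integer ratios)."""
    return math.ceil(out_rate_bps / min_in_rate_bps)
```

For example, a 1 Gbps outbound interface with a 100 Mbps minimum inbound rate yields 10 scheduling queues per queue sequence.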
  • the four line segments in Figure 4 respectively represent the scheduling resources of outbound interfaces with four rates. Between every two dots in Figure 4 is a queue sequence, and each diamond represents a scheduling queue.
  • Assume that the transmission rates of deterministic flow 1, deterministic flow 2 and other deterministic flows (not shown in Figure 4) are all 100Mbps, each deterministic flow consists of continuously sent packets, and the length of each packet is 1.5KB. If the packets of these deterministic flows arrive at the PE device at the same time, the PE device needs to reserve a scheduling queue for each deterministic flow.
  • messages of deterministic flow 1 can be cached in the first scheduling queue of each queue sequence, and messages of deterministic flow 2 can be cached in the second scheduling queue of each queue sequence.
  • each scheduling queue is 1.5KB; then for an outbound interface with a rate of 1GE, the scheduling cycle of each scheduling queue is 15us, and for outbound interfaces with rates of 10GE, 100GE and 1T, the scheduling cycles of each scheduling queue are 1.5us, 150ns and 15ns respectively.
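The cycles quoted above scale inversely with the outbound interface rate. A sketch that reproduces the stated figures, taking the document's 15 us cycle at 1 GE as the base value (an assumption; the source does not give this as a formula):

```python
BASE_RATE_BPS = 1e9   # 1 GE outbound interface
BASE_CYCLE_S = 15e-6  # 15 us scheduling cycle at 1 GE, per the text

def scheduling_cycle_s(out_rate_bps: float) -> float:
    """Scheduling cycle of one 1.5KB queue, inversely proportional to rate."""
    return BASE_CYCLE_S * BASE_RATE_BPS / out_rate_bps
```

This reproduces 1.5 us at 10GE, 150 ns at 100GE and 15 ns at 1T.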
  • the first queue sequence includes a first number of consecutive scheduling queues, each scheduling queue corresponds to a scheduling cycle, and the first network device forwards the messages buffered in the first queue sequence to the second network device according to the scheduling cycle of each scheduling queue.
  • the second network device is the next-hop device connected to the first network device in the deterministic network. For example, if the forwarding path of the first message in the deterministic network is PE1-P1-P2-PE2, then the first network device can be PE1 and the second network device is P1.
  • the first network device can cache the first message into the first scheduling queue, in the first queue sequence, corresponding to the deterministic flow to which the first message belongs, and forward the message in the first scheduling queue to the second network device according to the scheduling cycle of the first scheduling queue. Because the first queue sequence includes a first number of periodically consecutive scheduling queues, where the first number is the ratio of the rate of the outbound interface of the first network device used to forward the first message to the minimum rate of the inbound interfaces of the first network device, the number of scheduling queues depends on the minimum rate of the inbound interfaces.
  • this allows each inbound interface of the first network device to simultaneously receive packets of each deterministic flow while a corresponding scheduling queue is available to cache them.
  • the messages of each deterministic flow can be cached in the scheduling queue corresponding to that flow and sent out in the scheduling cycle of the respective scheduling queue; deterministic transmission of each deterministic flow can therefore be achieved even when the transmission rates of the deterministic flows span a wide range, which better meets the needs of deterministic services.
  • each scheduling queue in the first queue sequence is configured with at least one Maximum Transmission Unit (MTU), and the size of each MTU is 1.5KB.
  • MTU Maximum Transmission Unit
  • when the packet length span of deterministic services is large, for example from 64B to 1.5KB, the deterministic flow forwarding mechanism of the deterministic network may be unable to support this packet length span.
  • the size of the scheduling queue needs to be at least able to meet the MTU size of the deterministic service, so as to ensure that the first network device has the deterministic forwarding capability for realizing the deterministic service.
  • the size of each scheduling queue is at least 1.5KB; that is, the scheduling queue can accommodate messages of any length in the range of 64B to 1.5KB, and caching and forwarding of messages whose lengths span a wide range can be realized.
  • the worst-case bandwidth utilization when forwarding packets in the scheduling queue is about 50%. For example, if a 751B packet and a 750B packet of a deterministic flow are received consecutively, then after the scheduling queue caches the first 751B packet, the remaining space is not enough to cache the 750B packet; only the 751B message in the scheduling queue is sent during the queue's scheduling cycle, and the bandwidth utilization is about 50%.
  • if each scheduling queue is configured with 2 MTUs, the worst-case bandwidth utilization is about 66%; if each scheduling queue is configured with 3 MTUs, the worst-case bandwidth utilization is about 75%.
  • each scheduling queue can be configured with 1 MTU. For requirements that increase bandwidth utilization to about 66%, each scheduling queue can be configured with 2 MTUs; for requirements that increase bandwidth utilization to about 75%, each scheduling queue can be configured with 3 MTUs.
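The worst-case figures above (about 50%, 66% and 75% for 1, 2 and 3 MTUs) follow the pattern n/(n+1): with a queue of n MTUs, adversarial packets each slightly larger than MTU·n/(n+1) fill only n/(n+1) of the queue before the next packet no longer fits. A sketch of that relation (the closed form is inferred from the examples, not stated in the source):

```python
def worst_case_utilization(n_mtus: int) -> float:
    """Worst-case bandwidth utilization of a scheduling queue holding
    n MTUs: n packets of just over MTU * n/(n+1) bytes each fit, the
    (n+1)-th does not, so n/(n+1) of the queue is used."""
    return n_mtus / (n_mtus + 1)
```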
  • each scheduling queue is configured with 1 MTU
  • the outbound interface rate of the first network device is 10Gbps
  • the outbound interface rate of the first network device is 100Gbps
  • the outbound interface rate of the first network device is 1Tbps
  • Table 2 shows the number of scheduling queues included in the first queue sequence and the length of the first queue sequence.
  • the scheduling cycle duration of the first queue sequence is the ratio of the length of the first queue sequence to the outbound interface rate. Since each scheduling queue included in the first queue sequence has the same length, the scheduling cycle duration of each scheduling queue is the ratio of the scheduling cycle duration of the first queue sequence to the number of scheduling queues it includes, and the scheduling cycle duration of every scheduling queue in the first queue sequence is the same.
  • the scheduling cycle duration T of the first queue sequence can be understood as the duration required for all messages in the first queue sequence to be forwarded.
  • period T of the first queue sequence = frame length × 8 × number of MTUs included in the first queue sequence / outbound interface rate.
  • Each frame includes an MTU, an Internet Protocol Version 4 (IPv4) header or Internet Protocol Version 6 (IPv6) header, an Ethernet destination MAC (Destination MAC, DMAC) address, an Ethernet source MAC (Source MAC, SMAC) address, a type field, an Ethernet MAC cyclic redundancy check code (Ethernet MAC Cyclic Redundancy Check, Ethernet MAC CRC), an inter-frame gap and a preamble.
  • IPv4 Internet Protocol Version 4
  • IPv6 Internet Protocol Version 6
  • frame length = 1.5KB (MTU length) + 20 bytes (IPv4 header length) or 40 bytes (IPv6 header length) + 14 bytes (6-byte DMAC + 6-byte SMAC + 2-byte Type) + 4 bytes (Ethernet MAC CRC length) + 12 bytes (inter-frame gap) + 8 bytes (preamble length).
  • Taking each scheduling queue configured with 1 MTU as an example, when the minimum rate of the inbound interface of the first network device is 100Mbps, the length of the first queue sequence corresponding to each outbound interface rate and the scheduling period of the first queue sequence are as shown in Table 3.
  • period of the first queue sequence = (1.5×1024+40+14+4+12+8)×8×10/1G ≈ 126.56us.
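The frame-length and period formulas above can be computed directly, as in the sketch below (IPv6 header assumed, 1.5KB counted as 1536 bytes). Note that a naive evaluation of the stated formula gives about 129.1 us rather than the quoted 126.56 us, so the exact byte accounting in the source may differ slightly:

```python
MTU = 1536       # 1.5KB, counted here as 1.5 * 1024 bytes
IPV6_HDR = 40    # bytes (an IPv4 header would be 20)
ETH_HDR = 14     # 6B DMAC + 6B SMAC + 2B Type
CRC = 4          # Ethernet MAC CRC
IFG = 12         # inter-frame gap
PREAMBLE = 8

def frame_length_bytes() -> int:
    """Frame length per the components listed in the text."""
    return MTU + IPV6_HDR + ETH_HDR + CRC + IFG + PREAMBLE

def sequence_period_s(n_mtus: int, out_rate_bps: float) -> float:
    """Period T = frame length * 8 * number of MTUs / outbound rate."""
    return frame_length_bytes() * 8 * n_mtus / out_rate_bps
```

For 10 MTUs on a 1 Gbps outbound interface this evaluates to about 129.1 us.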
  • Taking each scheduling queue configured with 1 MTU as an example, when the minimum rate of the inbound interface of the first network device is 1000Mbps, the length of the first queue sequence corresponding to each outbound interface rate and the scheduling period of the first queue sequence are as shown in Table 4.
  • Taking each scheduling queue configured with 2 MTUs as an example, when the minimum rate of the inbound interface of the first network device is 100Mbps, the length of the first queue sequence corresponding to each outbound interface rate and the scheduling period of the first queue sequence are as shown in Table 5.
  • Taking each scheduling queue configured with 2 MTUs as an example, when the minimum rate of the inbound interface of the first network device is 1000Mbps, the length of the first queue sequence corresponding to each outbound interface rate and the scheduling period of the first queue sequence are as shown in Table 6.
  • the scheduling queue in the first queue sequence needs to be reserved in advance for each deterministic flow.
  • the sending rates of different deterministic flows that the PE device is responsible for forwarding can be different. The faster the sending rate, the greater the buffer space required for the scheduling queue.
  • a designated scheduling queue in the first queue sequence is used to cache the packets of the deterministic flow.
  • if the sending rate of the deterministic flow is less than or equal to the minimum rate of the ingress interfaces, the sending rate of the deterministic flow is relatively slow.
  • one scheduling queue is enough to cache the messages of the deterministic flow, so a single designated scheduling queue is allocated for the deterministic flow, improving resource utilization.
  • the deterministic flow corresponds to a designated scheduling queue in the first queue sequence.
  • the above S302 can be implemented as:
  • a second number of periodically consecutive scheduling queues in the first queue sequence are used to cache messages of the deterministic flow.
  • the second number is the ratio of the sending rate to the minimum rate of the incoming interface, rounded up.
  • if the sending rate of the deterministic flow is greater than the minimum rate of the ingress interfaces, the sending rate of the deterministic flow is relatively fast.
  • one scheduling queue is not enough to cache the messages of the deterministic flow, so more than one scheduling queue is allocated to the flow, to prevent its packets from being lost and affecting services.
  • the deterministic flow corresponds to the two scheduling queues in the first queue sequence.
  • the first message is cached in one of the second number of periodically consecutive scheduling queues corresponding to the deterministic flow in the first queue sequence.
  • when the first network device receives messages of the deterministic flow, it caches them into the second number of consecutive scheduling queues in the order in which they are received, so the second number of periodically consecutive scheduling queues are occupied in sequence. When the first network device receives the first message, it caches it into a scheduling queue, among the second number of periodically consecutive scheduling queues, that is not yet fully occupied and has enough remaining space to cache the first message.
  • deterministic flows with lower and higher sending rates each correspond to a number of scheduling queues matching their sending rate. This ensures deterministic forwarding of deterministic flows with lower sending rates as well as deterministic forwarding of deterministic flows with higher sending rates, which can meet deterministic business requirements at various sending rates.
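The queue-reservation rule in the preceding points can be sketched as follows (rates in bits per second; the function name is illustrative):

```python
import math

def queues_for_flow(send_rate_bps: float, min_in_rate_bps: float) -> int:
    """One scheduling queue if the flow's sending rate does not exceed the
    minimum inbound interface rate; otherwise the 'second number', i.e.
    the ratio of sending rate to minimum inbound rate, rounded up."""
    if send_rate_bps <= min_in_rate_bps:
        return 1
    return math.ceil(send_rate_bps / min_in_rate_bps)
```

For example, with a 100 Mbps minimum inbound rate, a 50 Mbps flow reserves one queue while a 250 Mbps flow reserves three consecutive queues.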
  • the transmission of deterministic streams may also involve micro-bursts.
  • Micro-bursts refer to situations where a very large amount of burst data is received in a short period of time, and the instantaneous burst rate far exceeds the average rate.
  • when the bandwidth of the incoming interface of the first network device is 100Mbps and the outgoing interface rate is 1GE, micro-bursts may occur; the instantaneous rate of deterministic flow packets continuously arriving at the incoming interface may exceed 100Mbps, and there may not be enough space in the first queue sequence to cache these packets, which will result in packet loss, so the deterministic delay of the deterministic flow cannot be guaranteed.
  • multiple queue sequences can be set in the first network device in the embodiment of the present application to cache micro-burst messages.
  • the messages in each queue sequence need to be sent in sequence, so the more queue sequences there are, the more micro-burst messages can be cached, but the delay of micro-burst messages will also be greater, which affects the delay jitter of deterministic flows.
  • two queue sequences can be set, namely a first queue sequence and a second queue sequence. For packets that arrive at the incoming interface in advance, the first network device may cache the packets in the scheduling queue corresponding to the deterministic flow to which the packets belong in the second queue sequence.
  • the method includes:
  • S501 is the same as S301. Please refer to the relevant description in S301.
  • S502. Determine whether the remaining buffer space of the first scheduling queue is less than the length of the first message.
  • S503-S504 are the same as S302-S303. Please refer to the relevant descriptions in S302-S303.
  • the second queue sequence includes a first number of scheduling queues with continuous cycles, the first queue sequence and the second queue sequence have continuous cycles, and the information of the scheduling queues included in the first queue sequence and the second queue sequence is the same.
  • each scheduling queue in the second queue sequence is configured with at least one MTU, and the size of each MTU is 1.5KB.
  • the first queue sequence and the second queue sequence include the same number of scheduling queues, and the size and scheduling cycle length of the scheduling queues included in the first queue sequence and the second queue sequence are the same.
  • the scheduling cycle duration of the first queue sequence and the second queue sequence is the same.
  • the scheduling queues at the same position in the first queue sequence and the second queue sequence are used to cache messages of the same deterministic flow.
  • After the first network device receives the first message, if the remaining buffer space of the first scheduling queue is less than the length of the first message, a micro-burst has occurred, and the first network device can cache the first message into the second scheduling queue in the second queue sequence.
  • the position of the second scheduling queue in the second queue sequence is the same as the position of the first scheduling queue in the first queue sequence.
  • the scheduling period duration of the first queue sequence and the second queue sequence is both T.
  • the first network device first forwards the messages in the first queue sequence to the second network device during the period T of the first queue sequence, and then forwards the messages in the second queue sequence to the second network device during the period T of the second queue sequence.
  • the total duration of the scheduling cycle of the first queue sequence and the second queue sequence is 2T.
  • the scheduling queues in the second queue sequence can be used to cache the burst messages, which can alleviate the micro-burst problem.
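The fallback from the first to the second queue sequence described above can be sketched as follows; the data model (free-space counters per queue) is an assumption for illustration:

```python
def choose_sequence(packet_len: int, q1_free: int, q2_free: int):
    """Cache in the flow's queue in the first sequence if the packet fits;
    otherwise fall back to the same-position queue in the second sequence
    (micro-burst case); if neither fits, the packet is dropped (None)."""
    if packet_len <= q1_free:
        return 1
    if packet_len <= q2_free:
        return 2
    return None
```

A 300-byte packet arriving when only 200 bytes remain in the first-sequence queue would thus be cached in the second sequence instead of being dropped.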
  • the embodiments of the present application also need to consider the requirement of delay jitter.
  • the maximum delay jitter is the cycle length of two queue sequences, that is, 2T.
  • the period of one queue sequence is T ⁇ 150us
  • the period of two queue sequences is 2T ⁇ 300us, that is, the maximum delay jitter is 300us.
  • the period of one queue sequence is T ⁇ 300us
  • the period of two queue sequences is 2T ⁇ 600us, that is, the maximum delay jitter is 600us.
  • multiple subsequences can also be set for the first queue sequence, and each subsequence includes multiple scheduling queues.
  • scheduling queues in multiple subsequences can be assigned to the deterministic flow in advance, thereby increasing the sending frequency of the deterministic flow and reducing delay jitter.
  • the scheduling period of each subsequence in the first queue sequence can be 30us.
  • the scheduling period of a single sequence cannot be too short; otherwise it would be impossible to send one MTU-sized message within it.
  • the sacrifice of bandwidth utilization can be balanced against the sacrifice of delay jitter.
  • the number of sequences included in each queue sequence can be set to not exceed 10; when the queue sequence includes subsequences, the achievable minimum ingress-interface rate is 100M and the minimum egress-interface rate is 10GE.
  • the second queue sequence may also include multiple subsequences.
  • the sending rate of the deterministic flow is less than or equal to the minimum rate of the ingress interface, then a designated scheduling queue in each subsequence is used to cache the messages of the deterministic flow.
  • the above S302 can be implemented as:
  • Each subsequence in the first queue sequence includes a designated scheduling queue corresponding to the deterministic flow.
  • the packets of this deterministic flow occupy the designated scheduling queues in the subsequences of the first queue sequence in turn. For example, if the first queue sequence includes 5 subsequences and the designated scheduling queues in the first three subsequences are currently fully occupied, the first message can be cached in the designated scheduling queue of the fourth subsequence.
  • the second number of consecutive scheduling queues in each subsequence is used to cache the messages of the deterministic flow.
  • the second number is the ratio of the sending rate to the minimum ingress-interface rate, rounded up to the nearest integer.
  • the first message is cached into one of the second number of consecutive-period scheduling queues corresponding to the deterministic flow in one subsequence of the first queue sequence.
  • each subsequence in the first queue sequence includes a second number of consecutive-period scheduling queues corresponding to the deterministic flow.
  • the second number is the ratio of the sending rate to the minimum rate of the incoming interface, rounded up.
  • when the first network device receives a message of the deterministic flow, it caches the message, in the order received, into the multiple subsequences allocated for the deterministic flow in the first queue sequence; the messages of the deterministic flow sequentially occupy the scheduling queues corresponding to the flow in the multiple subsequences.
  • if the first queue sequence includes 5 subsequences and two consecutive scheduling queues at the same position in each subsequence have been assigned to the deterministic flow, suppose the two consecutive scheduling queues in the first two subsequences are currently fully occupied and, in the third subsequence, the first scheduling queue allocated to the flow is fully occupied while the second is not; the first message is then cached into the second of the 2 consecutive-period scheduling queues allocated to the flow in the third subsequence.
  • the message sending frequency of a deterministic flow ranges from once every few microseconds to once every few milliseconds.
  • the sending-frequency span is large.
  • by setting subsequences in the queue sequence, scheduling queues can be allocated to a deterministic flow according to the sending-frequency requirement of its messages.
  • for a deterministic flow with strict delay-jitter requirements, the scheduling queue in every subsequence can be allocated to the flow; for a deterministic flow with loose delay-jitter requirements, the scheduling queues in a few subsequences at fixed intervals can be allocated, or even only one scheduling queue in the entire queue sequence. In this way, the wide span of sending frequencies of deterministic-flow messages can be accommodated.
  • each scheduling queue is configured with 1 MTU, and the minimum rate of the incoming interface of the first network device is 100Mbps
  • the queue sequence period corresponding to each egress-interface rate, the number of subsequences included in each queue sequence, the minimum delay jitter, and the number of queue sequences are shown in Table 7.
  • each scheduling queue is configured with 2 MTUs, and the minimum rate of the incoming interface of the first network device is 100Mbps
  • the queue sequence period corresponding to each egress-interface rate, the number of subsequences included in each queue sequence, the minimum delay jitter, and the number of queue sequences are shown in Table 8.
  • each scheduling queue included in the first queue sequence may be a physical queue.
  • each scheduling queue can also be a virtual queue.
  • each sub-sequence included in the first queue sequence is a physical queue in the first network device, and each scheduling queue included in each sub-sequence is a virtual queue.
  • each virtual queue in the embodiment of this application may be a traffic shaping leaky bucket queue. That is, the leaky bucket algorithm of Credit-based Traffic Shaping can be used to implement each virtual queue, and multiple traffic shaping leaky bucket queues share one physical queue.
  • each scheduling queue is configured with 1 MTU, and the minimum rate of the inbound interface of the first network device is 100Mbps
  • each scheduling queue is configured with 2 MTUs, and the minimum rate of the inbound interface of the first network device is 100Mbps
  • the queue sequence period corresponding to each egress-interface rate, the number of subsequences included in each queue sequence, the minimum delay jitter, the number of queue sequences, the number of leaky buckets included in each queue sequence, the number of leaky buckets included in each subsequence, the total number of leaky buckets in the two queue sequences, and the total buffer demand are as shown in Table 10.
  • the first network device in the embodiment of this application may be a PE device, and the second network device may be a P device.
  • when the P device receives a message forwarded by the PE device based on the deterministic-flow forwarding mechanism of this embodiment, it does not need to use the complex queues introduced in the above embodiments to forward the message.
  • after the ingress interface of the P device receives the message, the message can be sent out on the egress interface after an offset of a few time slots; it only needs to meet the maximum-delay requirement of the deterministic flow. In other words, the P device can forward packets at a higher rate with a smaller buffer.
  • the P device can cache received packets according to a cycle T.
  • the outbound interface rate of the P device is high, it can even cache received packets using a cycle T/2.
  • if the outbound interface rate is 1T and the maximum delay requirement is 150us, a 15MB buffer is required.
  • the demand for Buffer is controllable.
  • on the basis of ensuring the realizability of the system and chips, the forwarding mechanism for deterministic flows in the deterministic network can support the various access rates required by deterministic services (for example, from 100Mbps to 100Gbps).
  • various message sending frequencies (for example, from microsecond level to second level).
  • various message lengths (for example, from 64B to 1.5KB).
  • embodiments of the present application also provide a message forwarding device, which is applied to the first network device.
  • the device includes:
  • the receiving module 601 is used to receive the first message from the user-side device
  • the caching module 602 is configured to cache the first message into the first scheduling queue corresponding to the deterministic flow to which the first message belongs in the first queue sequence, where the first queue sequence includes a first number of consecutive-period scheduling queues;
  • the first number is the ratio between the outbound interface rate of the first network device and the minimum rate of the inbound interface of the first network device, and the outbound interface rate is the rate of the outbound interface used to forward the first packet;
  • the forwarding module 603 is configured to forward the packets in the first scheduling queue to the second network device according to the scheduling cycle of the first scheduling queue.
  • each scheduling queue in the first queue sequence is configured with at least one maximum transmission unit MTU, and the size of each MTU is 1.5KB.
  • the cache module 602 is specifically used to:
  • the cache module 602 is specifically used to:
  • cache the first message into one of the second number of consecutive-period scheduling queues corresponding to the deterministic flow in the first queue sequence; the second number is the ratio of the sending rate to the minimum ingress-interface rate, rounded up.
  • the first queue sequence includes multiple subsequences
  • the cache module 602 is specifically used to:
  • each subsequence in the first queue sequence includes a designated scheduling queue corresponding to the deterministic flow ;
  • the cache module 602 is specifically used to:
  • each subsequence in the first queue sequence includes a second number of consecutive-period scheduling queues corresponding to the deterministic flow.
  • the second number is the ratio of the sending rate to the minimum rate of the incoming interface, rounded up.
  • the minimum rate of the ingress interface is 100M; the minimum egress-interface rate is 10GE.
  • the scheduling cycle duration of the first queue sequence is the ratio of the length of the first queue sequence to the outbound interface rate, and the scheduling cycle duration of each scheduling queue included in the first queue sequence is the same.
  • each subsequence included in the first queue sequence is a physical queue in the first network device
  • Each scheduling queue included in each subsequence is a virtual queue.
  • the virtual queue is a traffic shaping leaky bucket queue.
  • each scheduling queue included in the first queue sequence is a physical queue in the first network device.
  • the caching module 602 is further configured to: if the remaining buffer space of the first scheduling queue is less than the length of the first message, cache the first message into the second scheduling queue corresponding to the deterministic flow to which the first message belongs in the second queue sequence;
  • the second queue sequence includes a first number of consecutive-period scheduling queues, the periods of the first queue sequence and the second queue sequence are consecutive, and the information of the scheduling queues included in the first queue sequence and the second queue sequence is the same;
  • the forwarding module 603 is also configured to forward the packets in the second scheduling queue to the second network device according to the scheduling cycle of the second scheduling queue.
  • the embodiment of this application also provides a network device, as shown in Figure 7.
  • the network device includes:
  • transceiver 704
  • the machine-readable storage medium 702 stores machine-executable instructions that can be executed by the processor 701; the machine-executable instructions cause the processor 701 to perform the following steps:
  • the first queue sequence includes a first number of consecutive-period scheduling queues, and the first number is the ratio between the egress-interface rate of the first network device and the minimum ingress-interface rate of the first network device, where the egress-interface rate is the rate of the egress interface used to forward the first message;
  • the transceiver 704 forwards the message in the first scheduling queue to the second network device according to the scheduling cycle of the first scheduling queue.
  • each scheduling queue in the first queue sequence is configured with at least one maximum transmission unit MTU, and the size of each MTU is 1.5KB.
  • when the sending rate of the deterministic flow is less than or equal to the minimum ingress-interface rate, the machine-executable instructions further cause the processor 701 to perform the following steps:
  • when the sending rate of the deterministic flow is greater than the minimum ingress-interface rate, the machine-executable instructions further cause the processor 701 to perform the following steps:
  • cache the first message into one of the second number of consecutive-period scheduling queues corresponding to the deterministic flow in the first queue sequence; the second number is the ratio of the sending rate to the minimum ingress-interface rate, rounded up.
  • the first queue sequence includes multiple subsequences
  • when the sending rate of the deterministic flow is less than or equal to the minimum ingress-interface rate, the machine-executable instructions further cause the processor 701 to perform the following steps:
  • each subsequence in the first queue sequence includes a designated scheduling queue corresponding to the deterministic flow;
  • when the sending rate of the deterministic flow is greater than the minimum ingress-interface rate, the machine-executable instructions further cause the processor 701 to perform the following steps:
  • each subsequence in the first queue sequence includes a second number of consecutive-period scheduling queues corresponding to the deterministic flow.
  • the second number is the ratio of the sending rate to the minimum ingress-interface rate, rounded up.
  • the minimum rate of the ingress interface is 100M; the minimum egress-interface rate is 10GE.
  • the scheduling cycle duration of the first queue sequence is the ratio of the length of the first queue sequence to the outbound interface rate, and the scheduling cycle duration of each scheduling queue included in the first queue sequence is the same.
  • each subsequence included in the first queue sequence is a physical queue in the first network device
  • Each scheduling queue included in each subsequence is a virtual queue.
  • the virtual queue is a traffic shaping leaky bucket queue.
  • each scheduling queue included in the first queue sequence is a physical queue in the first network device.
  • machine executable instructions also cause the processor 701 to perform the following steps:
  • the first message is cached into the second scheduling queue corresponding to the deterministic flow to which the first message belongs in the second queue sequence;
  • the second queue sequence includes a first number of consecutive-period scheduling queues, the periods of the first queue sequence and the second queue sequence are consecutive, and the information of the scheduling queues included in the first queue sequence and the second queue sequence is the same;
  • the transceiver 704 forwards the message in the second scheduling queue to the second network device according to the scheduling cycle of the second scheduling queue.
  • the network device may also include a communication bus 703.
  • the processor 701, the machine-readable storage medium 702 and the transceiver 704 complete communication with each other through the communication bus 703.
  • the communication bus 703 can be a Peripheral Component Interconnect (PCI) bus or an Extended Industry Standard Architecture (EISA) bus, etc.
  • PCI peripheral component interconnect standard
  • EISA Extended Industry Standard Architecture
  • the communication bus 703 can be divided into an address bus, a data bus, a control bus, etc.
  • the transceiver 704 may be a wireless communication module. Under the control of the processor 701, the transceiver 704 performs data interaction with other devices.
  • the machine-readable storage medium 702 may include random access memory (Random Access Memory, RAM) or non-volatile memory (Non-Volatile Memory, NVM), such as at least one disk storage.
  • RAM Random Access Memory
  • NVM Non-Volatile Memory
  • the machine-readable storage medium 702 may also be at least one storage device located remotely from the aforementioned processor.
  • the processor 701 can be a general-purpose processor, including a Central Processing Unit (CPU), a Network Processor (NP), etc.; it can also be a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component.
  • CPU central processing unit
  • NP Network Processor
  • DSP Digital Signal Processing
  • ASIC Application Specific Integrated Circuit
  • FPGA Field-Programmable Gate Array
  • the embodiments of the present application also provide a machine-readable storage medium.
  • the machine-readable storage medium stores machine-executable instructions that can be executed by the processor.
  • the processor is caused by machine-executable instructions to implement the steps of any of the above message forwarding methods.
  • a computer program product containing instructions is also provided, which, when run on a computer, causes the computer to perform the steps of any of the message forwarding methods in the above embodiments.
  • the computer program product includes one or more computer instructions.
  • the computer may be a general purpose computer, a special purpose computer, a computer network, or other programmable device.
  • the computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another; for example, the computer instructions may be transmitted from a website, computer, server, or data center to another website, computer, server, or data center by wired (such as coaxial cable, optical fiber, or digital subscriber line (DSL)) or wireless (such as infrared, radio, or microwave) means.
  • the computer-readable storage medium may be any available medium that can be accessed by a computer or a data storage device such as a server, data center, etc. that contains one or more available media integrated.
  • the available media may be magnetic media (eg, floppy disk, hard disk, magnetic tape), optical media (eg, DVD), or semiconductor media (eg, Solid State Disk (SSD)), etc.

Abstract

This application provides a message forwarding method and device, relating to the field of network communication technology. The solution of this application includes: receiving a first message from a user-side device; caching the first message into a first scheduling queue corresponding to the deterministic flow to which the first message belongs in a first queue sequence, where the first queue sequence includes a first number of consecutive-period scheduling queues, the first number is the ratio between the egress-interface rate of the first network device and the minimum ingress-interface rate of the first network device, and the egress-interface rate is the rate of the egress interface used to forward the first message; and forwarding the messages in the first scheduling queue to a second network device according to the scheduling period of the first scheduling queue. Even when the transmission rates of deterministic flows span a wide range, deterministic transmission of each deterministic flow can be achieved, better meeting the needs of deterministic services.

Description

Message Forwarding Method and Device — Technical Field
This application relates to the field of network communication technology, and in particular to a message forwarding method and device.
Background
With the development of the industrial Internet and the metaverse, remote interactive services impose stricter requirements on network delay, jitter, and packet loss, and deterministic networking has become the wide-area-network solution that meets these requirements.
A deterministic network is a network that provides deterministic service guarantees for the services it carries; it can guarantee indicators such as deterministic delay, delay jitter, and packet loss rate. Deterministic networking is a new type of Quality of Service (QoS) guarantee technology.
Currently, a deterministic network can be implemented based on the Cyclic Specific Queuing and Forwarding (CSQF) mechanism. A Software Defined Network (SDN) controller can plan the forwarding path of deterministic service messages in the deterministic network and specify the CSQF forwarding resources of each hop network device, so that the network devices forward the deterministic service messages according to the specified CSQF forwarding resources.
However, current deterministic-network forwarding technology is not mature, and the transmission-rate requirements of deterministic services span a wide range; for example, the minimum transmission-rate requirement may be less than 100Mbps while the maximum may exceed 100Gbps. The current CSQF mechanism cannot forward deterministic service messages whose transmission rates span such a wide range.
Summary
The purpose of the embodiments of this application is to provide a message forwarding method and device that can forward deterministic service messages whose transmission rates span a wide range. The specific technical solutions are as follows:
In a first aspect, an embodiment of this application provides a message forwarding method applied to a first network device. The method includes:
receiving a first message from a user-side device;
caching the first message into a first scheduling queue corresponding to the deterministic flow to which the first message belongs in a first queue sequence, where the first queue sequence includes a first number of consecutive-period scheduling queues, the first number is the ratio between the egress-interface rate of the first network device and the minimum ingress-interface rate of the first network device, and the egress-interface rate is the rate of the egress interface used to forward the first message;
forwarding the messages in the first scheduling queue to a second network device according to the scheduling period of the first scheduling queue.
In a second aspect, an embodiment of this application provides a message forwarding device applied to a first network device. The device includes:
a receiving module, configured to receive a first message from a user-side device;
a caching module, configured to cache the first message into a first scheduling queue corresponding to the deterministic flow to which the first message belongs in a first queue sequence, where the first queue sequence includes a first number of consecutive-period scheduling queues, the first number is the ratio between the egress-interface rate of the first network device and the minimum ingress-interface rate of the first network device, and the egress-interface rate is the rate of the egress interface used to forward the first message;
a forwarding module, configured to forward the messages in the first scheduling queue to a second network device according to the scheduling period of the first scheduling queue.
In a third aspect, an embodiment of this application provides a network device, which includes:
a processor;
a transceiver;
a machine-readable storage medium storing machine-executable instructions executable by the processor; the machine-executable instructions cause the processor to perform the following steps:
receiving, through the transceiver, a first message from a user-side device;
caching the first message into a first scheduling queue corresponding to the deterministic flow to which the first message belongs in a first queue sequence, where the first queue sequence includes a first number of consecutive-period scheduling queues, the first number is the ratio between the egress-interface rate of the first network device and the minimum ingress-interface rate of the first network device, and the egress-interface rate is the rate of the egress interface used to forward the first message;
forwarding, through the transceiver, the messages in the first scheduling queue to a second network device according to the scheduling period of the first scheduling queue.
In a fourth aspect, an embodiment of this application provides a machine-readable storage medium storing machine-executable instructions that, when invoked and executed by a processor, cause the processor to implement the method steps of the first aspect.
In a fifth aspect, an embodiment of this application provides a computer program product that causes a processor to implement the method steps of the first aspect.
With the above technical solution, after receiving the first message sent by the user-side device, the first network device can cache the first message into the first scheduling queue corresponding to the deterministic flow to which the first message belongs in the first queue sequence, and forward the messages in the first scheduling queue to the second network device according to the scheduling period of the first scheduling queue. Because the first queue sequence includes a first number of consecutive-period scheduling queues, and the first number is the ratio of the rate of the egress interface used by the first network device to forward the first message to the minimum ingress-interface rate of the first network device, the number of scheduling queues depends on the minimum ingress-interface rate. In this way, a message of every deterministic flow received simultaneously on the ingress interfaces of the first network device has a corresponding scheduling queue to cache it. Thus, even if the transmission rates of the deterministic flows span a wide range, the messages of each deterministic flow can be cached into the scheduling queue corresponding to that flow and sent out in the scheduling period of its own scheduling queue, achieving deterministic transmission of each deterministic flow and better meeting the needs of deterministic services.
Brief Description of the Drawings
To explain the technical solutions of the embodiments of this application and of the prior art more clearly, the drawings needed for the embodiments and the prior art are briefly introduced below. Obviously, the drawings described below are only some embodiments of this application; for those of ordinary skill in the art, other embodiments can be obtained from these drawings without creative effort.
Figure 1 is a schematic diagram of a forwarding mechanism for deterministic flows provided by an embodiment of this application;
Figure 2 is a schematic diagram of the network architecture of a deterministic network provided by an embodiment of this application;
Figure 3 is a schematic flowchart of a message forwarding method provided by an embodiment of this application;
Figure 4 is a schematic diagram of deterministic flow transmission provided by an embodiment of this application;
Figure 5 is a schematic flowchart of another message forwarding method provided by an embodiment of this application;
Figure 6 is a schematic structural diagram of a message forwarding device provided by an embodiment of this application;
Figure 7 is a schematic structural diagram of a network device provided by an embodiment of this application.
Detailed Description
To make the purpose, technical solutions, and advantages of this application clearer, this application is further described in detail below with reference to the drawings and embodiments. Obviously, the described embodiments are only some of the embodiments of this application, not all of them. Based on the embodiments of this application, all other embodiments obtained by those of ordinary skill in the art without creative effort fall within the scope of protection of this application.
For ease of understanding, the related concepts involved in the embodiments of this application are first explained.
Each network device in a deterministic network divides each period T into multiple consecutive sub-periods of equal duration; for example, T is divided into 4 sub-periods, Cycle 0, Cycle 1, Cycle 2, and Cycle 3, and a particular deterministic flow is forwarded only within its designated sub-period. For example, deterministic flow 0 is forwarded in Cycle 0 of each period T, deterministic flow 1 in Cycle 1, deterministic flow 2 in Cycle 2, and deterministic flow 3 in Cycle 3. In this way, the delay jitter of a network device is bounded by T, achieving bounded delay jitter.
The jitter of each network device on the forwarding path of a deterministic flow does not increase the delay jitter of the next network device on the path; that is, the jitter is independent of the number of network devices on the forwarding path. However, increasing the number of network devices increases the total delay of a message along the forwarding path.
The above period T is the predefined time-slot width of the scheduling queues of deterministic flows, so the delay-jitter range of the entire forwarding path is 0 to 2T. For example, if T=10us, the worst-case delay jitter is 20us, regardless of the length of the forwarding path or the number of network devices.
As shown in Figure 1, assume that X, Y, Z, and W are four consecutive network devices on a forwarding path, and the cyclic forwarding period T of each device consists of four sub-periods 0, 1, 2, and 3, each lasting 10us. After receiving a message, each network device sends it within the preset sub-period corresponding to the deterministic flow to which the message belongs; that is, each network device is preconfigured with a mapping between deterministic flows and sub-periods.
For example, device X sends a message in sub-period 0, and the message is transmitted on the link between device X and device Y.
After receiving the message, device Y sends it in sub-period 2, and the message is then transmitted on the link between device Y and device Z.
After receiving the message, device Z sends it in its own sub-period 1, and the message is then transmitted on the link between device Z and device W, where W can receive it.
In the above process, constrained by the stable cyclic mapping, once the sending sub-period of the message at X is determined, the receiving sub-period of the message at W is also determined, and the delay jitter of each transmission of the deterministic flow's messages from X to W can be kept within 10us.
It should be noted that the clocks of the network devices in the deterministic network can be synchronized; Figure 1 illustrates, by way of example, a case where the clocks of the devices differ slightly.
Figure 2 is a schematic diagram of the network architecture of a deterministic network, taking the deterministic network between a Human Machine Interface (HMI) and a robotic device as an example. The deterministic network includes service Provider Edge (PE) devices and P devices, where a P (Provider) device is a network-side core device. Both the HMI and the robotic device in Figure 2 are user-side devices.
Figure 2 shows PE1, PE2, and P1 to P4 by way of example; the number of devices in an actual implementation is not limited to this.
The PE devices implement message forwarding between user-side devices and network-side devices in the deterministic network.
The SDN controller can plan in advance, for a PE device, the SRv6 forwarding path of messages entering the deterministic network from user-side devices, and plan forwarding resources for each hop network device on the path, so that the devices forward the messages of deterministic flows with the specified resources.
A message entering the deterministic network from a user-side device is an Ethernet-encapsulated IP message with deterministic service requirements sent by the user-side device, i.e., an IP message of a deterministic flow.
Deterministic flows are delay-sensitive service traffic; non-deterministic flows are non-delay-sensitive service traffic. For example, a non-deterministic flow can be one to which a Best Effort forwarding policy applies.
A PE device has user-side interfaces dedicated to deterministic flows, which are not shared with non-deterministic flows. If a user-side interface must carry both deterministic and non-deterministic flows, Time Sensitive Network (TSN) technology can be used to separate them.
After the PE device receives a message from a user-side device through a user-side interface, it forwards the message through a network-side interface to the next network device according to the SRv6 forwarding path planned by the SDN controller and the specified scheduling period, so that the message is transmitted in the deterministic network. What is transmitted in the deterministic network are SRv6 messages encapsulated over Ethernet with a time-synchronization mechanism; that is, after receiving a message from a user-side device, the PE device encapsulates it as an SRv6 message, and the clocks of the network devices in the deterministic network are synchronized.
A network-side interface of a PE device can also receive SRv6 messages forwarded by P devices, transmit the received SRv6 messages to the destination user-side interface according to the SRv6 forwarding path, and send the messages to the user-side device on the user-side interface with a first-come-first-served scheduling mechanism.
A P device is mainly responsible for message forwarding between network-side devices; the ingress interface of a P device forwards received messages with the pre-specified scheduling period, according to the SRv6 forwarding path of the received messages.
To make the deterministic network better match service needs, an embodiment of this application provides a message forwarding method applied to a first network device, which may be a PE device in the deterministic network. As shown in Figure 3, the method includes:
S301: Receive a first message from a user-side device.
S302: Cache the first message into a first scheduling queue corresponding to the deterministic flow to which the first message belongs in a first queue sequence.
The first queue sequence includes a first number of consecutive-period scheduling queues; the first number is the ratio between the egress-interface rate of the first network device and the minimum ingress-interface rate of the first network device, where the egress-interface rate is the rate of the egress interface used to forward the first message.
The period corresponding to the first queue sequence may be T, and the scheduling period of each scheduling queue in the first queue sequence is one sub-period of the period T.
The egress interface of the first network device refers to its network-side egress interface, and the ingress interface refers to its user-side ingress interface; that is, a message received from a user-side device on a user-side ingress interface can be forwarded through a network-side egress interface.
The first network device may have multiple ingress interfaces, and the access rate of each ingress interface may differ; the minimum ingress-interface rate is the smallest of the access rates of the first network device's ingress interfaces.
In the embodiments of this application, the first network device is a device in a deterministic network, and the deterministic-flow forwarding mechanism of the deterministic network can send out the randomly arriving messages of each deterministic flow in fixed scheduling periods. Therefore, the number of scheduling queues in the first queue sequence must satisfy the following condition: when the messages of all deterministic flows that the first network device must forward arrive simultaneously, the first queue sequence must contain a scheduling queue to cache the messages of each deterministic flow, ensuring that the messages of every deterministic flow can each be cached into one scheduling queue. To satisfy this condition, the number of scheduling queues is the ratio of the egress-interface rate of the first network device to its minimum ingress-interface rate.
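The sizing rule above can be sketched in a few lines. This is an illustrative sketch only (the helper names and the sample rates are not from the patent); it reproduces the queue counts and sequence lengths of Table 1 for a 100Mbps minimum ingress rate.

```python
# Sketch of the queue-sequence sizing rule: the "first number" of scheduling
# queues is the ratio of the egress-interface rate to the minimum ingress rate,
# so every simultaneously arriving minimum-rate flow gets its own queue.
MTU_BYTES = 1536  # 1.5 KB, the configured size of one MTU

def first_number(egress_bps: int, min_ingress_bps: int) -> int:
    """Number of consecutive-period scheduling queues in the queue sequence."""
    return egress_bps // min_ingress_bps

def sequence_length_bytes(egress_bps: int, min_ingress_bps: int,
                          mtus_per_queue: int = 1) -> int:
    """Total buffer of the sequence: queues x MTUs per queue x MTU size."""
    return first_number(egress_bps, min_ingress_bps) * mtus_per_queue * MTU_BYTES

# With a 100 Mbps minimum ingress rate (as in Table 1):
print(first_number(1_000_000_000, 100_000_000))            # 10 queues for 1GE
print(sequence_length_bytes(10_000_000_000, 100_000_000))  # 100 queues -> 150 KB
```

The same helpers reproduce the other rows: a 1Tbps egress interface yields 10000 queues and a 15MB sequence.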
For example, as shown in Figure 4, the four line segments in Figure 4 represent the scheduling resources of egress interfaces at four rates; the span between every two dots is one queue sequence, and each diamond represents one scheduling queue.
Assume that the transmission rates of deterministic flow 1, deterministic flow 2, and the other deterministic flows (not shown in Figure 4) are all 100Mbps, each flow consists of continuously sent messages, and each message is 1.5KB long. If the messages of these deterministic flows arrive at the PE device simultaneously, the PE device must reserve one scheduling queue for each deterministic flow.
In Figure 4, the messages of deterministic flow 1 can be cached in the first scheduling queue of each queue sequence, and the messages of deterministic flow 2 in the second scheduling queue of each queue sequence.
If the size of each scheduling queue is 1.5KB, then for a 1GE egress interface the scheduling period of each scheduling queue is 15us; for 10GE, 100GE, and 1T egress interfaces, the scheduling periods are 1.5us, 150ns, and 15ns respectively.
S303: Forward the messages in the first scheduling queue to the second network device according to the scheduling period of the first scheduling queue.
In this embodiment, the first queue sequence includes a first number of consecutive scheduling queues, each corresponding to one scheduling period, and the first network device forwards the messages cached in the first queue sequence to the second network device according to the scheduling period of each scheduling queue.
The second network device is the next-hop device connected to the first network device in the deterministic network. For example, if the forwarding path of the first message in the deterministic network is PE1-P1-P2-PE2, the first network device may be PE1 and the second network device P1.
With this embodiment, after receiving the first message sent by the user-side device, the first network device can cache the first message into the first scheduling queue corresponding to the deterministic flow to which the first message belongs in the first queue sequence, and forward the messages in the first scheduling queue to the second network device according to the scheduling period of the first scheduling queue. Because the first queue sequence includes a first number of consecutive-period scheduling queues, and the first number is the ratio of the rate of the egress interface used to forward the first message to the minimum ingress-interface rate of the first network device, the number of scheduling queues depends on the minimum ingress-interface rate. In this way, a message of every deterministic flow received simultaneously on the ingress interfaces of the first network device has a corresponding scheduling queue to cache it. Thus, even if the transmission rates of the deterministic flows span a wide range, the messages of each deterministic flow can be cached into the flow's corresponding scheduling queue and sent out in the scheduling period of its own scheduling queue, achieving deterministic transmission of each deterministic flow and better meeting the needs of deterministic services.
Optionally, each scheduling queue in the first queue sequence is configured with at least one Maximum Transmission Unit (MTU), and the size of each MTU is 1.5KB.
Because the message lengths of deterministic services span a wide range, for example from 64B to 1.5KB, existing deterministic-flow forwarding mechanisms in deterministic networks cannot support this span of message lengths. To implement deterministic services, the size of a scheduling queue must at least accommodate the MTU size of the deterministic service, ensuring that the first network device has the deterministic forwarding capability required by deterministic services.
In this embodiment, the size of each scheduling queue is at least 1.5KB, i.e., the scheduling queue can hold a message of any length in the range of 64B to 1.5KB, enabling caching and forwarding of messages whose lengths span a wide range.
When each scheduling queue is configured with one MTU, the worst-case bandwidth utilization when forwarding the messages in the queue is about 50%. For example, if, for one deterministic flow, a 751B message and then a 750B message are received consecutively, after the scheduling queue caches the first 751B message its remaining space is insufficient for the 750B message, so only the 751B message is sent in the queue's scheduling period, giving a bandwidth utilization of about 50%.
Similarly, if each scheduling queue is configured with 2 MTUs, the worst-case bandwidth utilization is about 66%; with 3 MTUs, about 75%.
Therefore, if the service can tolerate 50% bandwidth utilization, each scheduling queue can be configured with 1 MTU; to raise the utilization to 66%, each scheduling queue can be configured with 2 MTUs; to raise it to 75%, each scheduling queue can be configured with 3 MTUs.
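The worst-case figures above follow a simple pattern, which a short sketch makes explicit (this derivation is an assumption consistent with the examples in the text, not stated in the patent): with k MTUs per queue, the adversarial case is messages just over MTU·k/(k+1) bytes long, so k of them fit and the (k+1)-th is stranded, giving a utilization of roughly k/(k+1).

```python
# Worst-case bandwidth utilization of a scheduling queue holding k MTUs
# (illustrative sketch): k messages of just over capacity/(k+1) bytes fit,
# the next does not, so roughly k/(k+1) of the queue's capacity is used.
def worst_case_utilization(mtus_per_queue: int) -> float:
    k = mtus_per_queue
    return k / (k + 1)

for k in (1, 2, 3):
    print(f"{k} MTU(s): ~{worst_case_utilization(k):.0%}")  # ~50%, ~67%, ~75%
```

This matches the 50% / 66% / 75% figures quoted above (2/3 printed as 67% due to rounding).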
As an example, with each scheduling queue configured with 1 MTU and a minimum ingress-interface rate of 100Mbps for the first network device, as shown in Table 1: if the egress-interface rate of the first network device is 1Gbps, the first queue sequence includes 1Gbps/100Mbps=10 scheduling queues, and the length of the first queue sequence is 10 MTU=15KB.
If the egress-interface rate of the first network device is 10Gbps, the first queue sequence includes 10Gbps/100Mbps=100 scheduling queues, and its length is 100 MTU=150KB.
If the egress-interface rate of the first network device is 100Gbps, the first queue sequence includes 100Gbps/100Mbps=1000 scheduling queues, and its length is 1000 MTU=1.5MB.
If the egress-interface rate of the first network device is 1Tbps, the first queue sequence includes 1Tbps/100Mbps=10000 scheduling queues, and its length is 10000 MTU=15MB.
Table 1
Figure PCTCN2022112753-appb-000001
Figure PCTCN2022112753-appb-000002
Similarly, with each scheduling queue configured with 1 MTU and a minimum ingress-interface rate of 1000Mbps for the first network device, the number of scheduling queues in the first queue sequence and the length of the first queue sequence are shown in Table 2.
Table 2
Figure PCTCN2022112753-appb-000003
The scheduling period of the first queue sequence is explained below.
In the embodiments of this application, the scheduling-period duration of the first queue sequence is the ratio of the length of the first queue sequence to the egress-interface rate. Since the scheduling queues in the first queue sequence all have the same length, the scheduling-period duration of each scheduling queue in the first queue sequence is the ratio of the scheduling-period duration of the first queue sequence to the number of scheduling queues it includes, and every scheduling queue in the first queue sequence has the same scheduling-period duration.
The scheduling-period duration T of the first queue sequence can be understood as the time required for all messages in the first queue sequence to be forwarded.
The specific formula is:
Period of the first queue sequence T = frame length × 8 × number of MTUs in the first queue sequence / egress-interface rate.
Network devices can transmit data between each other in units of frames, so the first network device can encapsulate each message into a frame. Each frame includes the MTU, an Internet Protocol Version 4 (IPv4) or Internet Protocol Version 6 (IPv6) header, the Ethernet Destination MAC (DMAC) address, the Ethernet Source MAC (SMAC) address, the type field, the Ethernet MAC Cyclic Redundancy Check (Ethernet MAC CRC), the inter-frame gap, and the preamble.
Thus, frame length = 1.5KB (MTU length) + 20 bytes (IPv4 header length) or 40 bytes (IPv6 header length) + 14 bytes (6-byte DMAC + 6-byte SMAC + 2-byte Type) + 4 bytes (Ethernet MAC CRC length) + 12 bytes (inter-frame gap) + 8 bytes (preamble length).
Taking one MTU per scheduling queue as an example, with a minimum ingress-interface rate of 100Mbps for the first network device, the length and scheduling period of the first queue sequence corresponding to each egress-interface rate are shown in Table 3.
For example, at an egress-interface rate of 1G, the period of the first queue sequence = (1.5*1024+40+14+4+12+8)*8*10/1G ≈ 126.56us.
Table 3
Figure PCTCN2022112753-appb-000004
Taking one MTU per scheduling queue as an example, with a minimum ingress-interface rate of 1000Mbps for the first network device, the length and scheduling period of the first queue sequence corresponding to each egress-interface rate are shown in Table 4.
Table 4
Figure PCTCN2022112753-appb-000005
Understandably, if the number of MTUs configured per scheduling queue increases, the scheduling-period length of the first queue sequence also increases.
Taking two MTUs per scheduling queue as an example, with a minimum ingress-interface rate of 100Mbps for the first network device, the length and scheduling period of the first queue sequence corresponding to each egress-interface rate are shown in Table 5.
Table 5
Figure PCTCN2022112753-appb-000006
Taking two MTUs per scheduling queue as an example, with a minimum ingress-interface rate of 1000Mbps for the first network device, the length and scheduling period of the first queue sequence corresponding to each egress-interface rate are shown in Table 6.
Table 6
Figure PCTCN2022112753-appb-000007
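The period formula above can be sketched as follows. This is an illustrative sketch: with the MTU taken as 1.5×1024 bytes and the IPv6 header variant, the formula yields ≈129us rather than the ≈126.56us quoted with the tables, which suggests the original tables used slightly different byte constants; treat the exact figures as approximate. The sketch also shows an invariant implied by the formula: since the MTU count scales with the egress rate, the sequence period depends only on the minimum ingress rate and the MTUs per queue.

```python
# Sketch of: period T = frame_length * 8 * (number of MTUs) / egress_rate,
# where the MTU count is (egress_rate / min_ingress_rate) * mtus_per_queue.
MTU = int(1.5 * 1024)                        # 1536 bytes
FRAME_BYTES = MTU + 40 + 14 + 4 + 12 + 8     # IPv6 header variant

def sequence_period_us(egress_bps: int, min_ingress_bps: int,
                       mtus_per_queue: int = 1) -> float:
    n_mtus = (egress_bps // min_ingress_bps) * mtus_per_queue
    return FRAME_BYTES * 8 * n_mtus / egress_bps * 1e6

# The egress rate cancels out, so 1GE and 100GE give the same period:
print(sequence_period_us(1_000_000_000, 100_000_000))
print(sequence_period_us(100_000_000_000, 100_000_000))
```

This cancellation is why the tables list one sequence period per minimum ingress rate and MTU count, independent of the egress-interface rate.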
Understandably, to achieve a deterministic delay for the forwarding of each deterministic flow, scheduling queues in the first queue sequence must be reserved for each deterministic flow in advance. The different deterministic flows that a PE device forwards may have different sending rates; the faster the sending rate, the more scheduling-queue buffer space is needed.
Based on this, in this embodiment, on the one hand, if the sending rate of a deterministic flow is less than or equal to the minimum ingress-interface rate, one designated scheduling queue in the first queue sequence is used to cache the messages of that deterministic flow.
If the sending rate of the deterministic flow is less than or equal to the minimum ingress-interface rate, the flow's sending rate is relatively slow and one scheduling queue is sufficient to cache its messages, so one designated scheduling queue is allocated to the flow to improve resource utilization.
For example, if the minimum ingress-interface rate is 100Mbps and the sending rate of the deterministic flow is 90Mbps, the flow corresponds to one designated scheduling queue in the first queue sequence.
Accordingly, when the sending rate of the deterministic flow is less than or equal to the minimum ingress-interface rate, the above S302 can be implemented as:
caching the first message into the one designated scheduling queue corresponding to the deterministic flow in the first queue sequence.
On the other hand, if the sending rate of the deterministic flow is greater than the minimum ingress-interface rate, a second number of consecutive-period scheduling queues in the first queue sequence are used to cache the messages of that flow, where the second number is the ratio of the sending rate to the minimum ingress-interface rate, rounded up.
If the sending rate of the deterministic flow is greater than the minimum ingress-interface rate, the flow's sending rate is relatively fast and one scheduling queue is insufficient to cache its messages, so more than one scheduling queue is allocated to the flow, to avoid packet loss of the flow's messages affecting the service.
For example, if the minimum ingress-interface rate is 100Mbps and the sending rate of the deterministic flow is 150Mbps, the flow corresponds to 2 scheduling queues in the first queue sequence.
Accordingly, when the sending rate of the deterministic flow is greater than the minimum ingress-interface rate, the above S302 can be implemented as:
caching the first message into one of the second number of consecutive-period scheduling queues corresponding to the deterministic flow in the first queue sequence.
It should be noted that when the first network device receives the messages of this deterministic flow, it must cache them, in the order received, into the second number of consecutive-period scheduling queues, which are occupied in turn. When the first network device receives the first message, it caches it into the one of the second number of consecutive-period scheduling queues that is not yet fully occupied and whose remaining space is sufficient to cache the first message.
Through the above two aspects, on the basis of improving resource utilization as much as possible, deterministic flows with both lower and higher sending rates correspond to a number of scheduling queues matching their sending rate, which guarantees deterministic forwarding of both lower-rate and higher-rate deterministic flows and can meet the needs of deterministic services at all kinds of sending rates.
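The per-flow allocation and in-order caching described above can be sketched as follows. The helper names are illustrative (not from the patent), and the queue capacity assumes one 1.5KB MTU per scheduling queue.

```python
# Sketch of the allocation rule: a flow at or below the minimum ingress rate
# gets one designated queue; a faster flow gets ceil(rate / min_ingress)
# consecutive-period queues. An arriving message goes into the first of the
# flow's allocated queues that still has enough remaining space.
import math

QUEUE_CAPACITY = 1536  # bytes: one 1.5 KB MTU per scheduling queue

def queues_for_flow(send_rate_bps: int, min_ingress_bps: int) -> int:
    """The 'second number' of consecutive-period queues reserved for a flow."""
    return max(1, math.ceil(send_rate_bps / min_ingress_bps))

def enqueue(allocated_queues: list, msg_len: int) -> bool:
    """Cache a message into the first allocated queue with room; False = drop."""
    for queue in allocated_queues:
        if QUEUE_CAPACITY - sum(queue) >= msg_len:
            queue.append(msg_len)
            return True
    return False

print(queues_for_flow(150_000_000, 100_000_000))  # 150 Mbps flow -> 2 queues
flow_queues = [[], []]
enqueue(flow_queues, 1400)
enqueue(flow_queues, 1400)
print(flow_queues)  # each 1400 B message occupies its own queue in turn
```

A 90Mbps flow against a 100Mbps minimum ingress rate yields 1 queue, matching the designated-queue case above.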
Optionally, deterministic-flow transmission may also involve micro-bursts. A micro-burst is a situation where a very large amount of burst data is received within a short time, and the instantaneous burst rate far exceeds the average rate.
For example, if the bandwidth of an ingress interface of the first network device is 100Mbps and the egress-interface rate is 1GE, a micro-burst may occur: the instantaneous rate of the deterministic-flow messages arriving consecutively at that ingress interface may exceed 100Mbps, and if there is not enough space in the first queue sequence to cache these messages, packets will be lost and the deterministic delay of the flow cannot be guaranteed.
To solve this problem, multiple queue sequences can be configured in the first network device of this embodiment to cache micro-burst messages. However, the messages in each queue sequence must be sent in turn, so the more queue sequences there are, the more micro-burst messages can be cached, but the larger the delay of the micro-burst messages becomes, which affects the delay jitter of the deterministic flows. In this embodiment, to balance the capacity for caching micro-burst messages against the delay jitter of deterministic flows, two queue sequences can be configured, a first queue sequence and a second queue sequence. For a message that arrives at the ingress interface early, the first network device can cache the message into the scheduling queue corresponding to the message's deterministic flow in the second queue sequence.
In the case where two queue sequences are configured in the first network device, as shown in Figure 5, the method includes:
S501: Receive a first message from a user-side device.
S501 is the same as S301; see the relevant description of S301.
S502: Determine whether the remaining buffer space of the first scheduling queue is less than the length of the first message.
If not, execute S503-S504; if so, execute S505-S506.
S503: Cache the first message into the first scheduling queue corresponding to the deterministic flow to which the first message belongs in the first queue sequence.
S504: Forward the messages in the first scheduling queue to the second network device according to the scheduling period of the first scheduling queue.
S503-S504 are the same as S302-S303; see the relevant descriptions of S302-S303.
S505: If the remaining buffer space of the first scheduling queue is less than the length of the first message, cache the first message into the second scheduling queue corresponding to the deterministic flow to which the first message belongs in the second queue sequence.
The second queue sequence includes a first number of consecutive-period scheduling queues, the periods of the first queue sequence and the second queue sequence are consecutive, and the information of the scheduling queues in the first and second queue sequences is the same.
That is, each scheduling queue in the second queue sequence is configured with at least one MTU, and the size of each MTU is 1.5KB.
The first and second queue sequences include the same number of scheduling queues, and the sizes and scheduling-period lengths of their scheduling queues are the same.
Moreover, the scheduling-period durations of the first and second queue sequences are the same.
In addition, the scheduling queues at the same position in the first and second queue sequences are used to cache the messages of the same deterministic flow.
After the first network device receives the first message, if the remaining buffer space of the first scheduling queue is less than the length of the first message, a micro-burst is currently occurring, and the first network device can cache the first message into the second scheduling queue in the second queue sequence.
S506: Forward the messages in the second scheduling queue to the second network device according to the scheduling period of the second scheduling queue.
The position of the second scheduling queue in the second queue sequence is the same as the position of the first scheduling queue in the first queue sequence.
The scheduling-period durations of the first and second queue sequences are both T. The first network device first forwards the messages in the first queue sequence to the second network device during the period T of the first queue sequence, and then forwards the messages in the second queue sequence to the second network device during the period T of the second queue sequence.
That is, the total scheduling-period duration of the first and second queue sequences is 2T.
In addition, the micro-burst problem can also be addressed through the number of MTUs configured per scheduling queue. For example, if the minimum ingress-interface rate of the first network device is 100Mbps, MTU=1.5KB, and each scheduling queue is configured with 2 MTUs, the scheduling-period duration of each queue sequence is 253us, and with two queue sequences a micro-burst of nearly 506us can be buffered.
With this method, after the first message is received, if the remaining buffer space of the first scheduling queue is sufficient to cache it, the first message is cached into the first scheduling queue of the first queue sequence; if not, the first message is cached into the second scheduling queue of the second queue sequence. Thus, when a micro-burst occurs, even if the first queue sequence cannot cache the burst messages, the scheduling queues in the second queue sequence can be used to cache them, which alleviates the micro-burst problem.
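The S502/S505 overflow decision above can be sketched in a few lines (illustrative names; the queue capacity assumes one 1.5KB MTU per scheduling queue): if the flow's queue in the first queue sequence lacks room, fall back to the queue at the same position in the second queue sequence, and only drop if both are full.

```python
# Sketch of the two-sequence micro-burst fallback: try the flow's queue in
# the first queue sequence, then the same-position queue in the second one.
QUEUE_CAPACITY = 1536  # bytes: one MTU per scheduling queue

def cache_message(first_seq: dict, second_seq: dict,
                  flow: str, msg_len: int) -> str:
    for name, seq in (("first", first_seq), ("second", second_seq)):
        queue = seq.setdefault(flow, [])
        if QUEUE_CAPACITY - sum(queue) >= msg_len:
            queue.append(msg_len)
            return name      # which queue sequence cached the message
    return "dropped"

first, second = {}, {}
print(cache_message(first, second, "flow1", 1000))  # lands in the first sequence
print(cache_message(first, second, "flow1", 1000))  # micro-burst -> second sequence
```

A third 1000-byte message in the same burst would be dropped, which is exactly the case the two-sequence design narrows to bursts longer than 2T.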
Building on the above embodiments, the embodiments of this application must also consider the delay-jitter requirement: the maximum delay jitter is the period length of the two queue sequences, i.e., 2T.
Assuming a minimum ingress-interface rate of 100Mbps and one MTU per scheduling queue, the period of one queue sequence is T≈150us and the period of two queue sequences is 2T≈300us, i.e., the maximum delay jitter is 300us. If each scheduling queue is configured with two MTUs, the period of one queue sequence is T≈300us and the period of two queue sequences is 2T≈600us, i.e., the maximum delay jitter is 600us.
If a service is sensitive to delay jitter, in another embodiment of this application, to make the queue sequence suitable for services more sensitive to delay jitter, multiple subsequences can also be set for the first queue sequence, each including multiple scheduling queues. For the deterministic flow of a delay-sensitive service, scheduling queues in multiple subsequences can be allocated to the flow in advance, increasing the flow's sending frequency and reducing delay jitter.
For example, if the maximum delay required by the service is about 60us, the scheduling period of each subsequence in the first queue sequence can be 30us.
It should be noted that, limited by the network-side uplink bandwidth and the MTU, the scheduling period of a single sequence cannot be too short; otherwise it would be impossible to send one MTU-sized message. In the embodiments of this application, the sacrifice of bandwidth utilization can be balanced against the sacrifice of delay jitter; the number of sequences in each queue sequence can be set to not exceed 10, and when the queue sequence includes subsequences, the achievable minimum ingress-interface rate is 100M and the minimum egress-interface rate is 10GE.
Likewise, the second queue sequence may also include multiple subsequences.
在第一队列序列包括子序列的情况下,一方面,若确定性流的发送速率小于等于入接 口最小速率,则每个子序列中的一个指定调度队列用于缓存该确定性流的报文。
相应地,当确定性流的发送速率小于等于入接口最小速率时,上述S302具体可以实现为:
将第一报文缓存到第一队列序列的一个子序列中的,与该确定性流对应的一个指定调度队列中。
其中,第一队列序列中的每个子序列中包括确定性流对应的一个指定调度队列。该确定性流的报文会依次占用第一队列序列中的每个子序列中的指定调度队列,例如,若第一队列序列中包括5个子序列,当前前3个子序列中的指定调度队列均已被完全占用,则可将第一报文缓存到第4个子序列中的指定调度队列中。
另一方面,若确定性流的发送速率大于入接口的最小速率,则每个子序列中第二数量个周期连续的调度队列用于缓存确定性流的报文,第二数量为发送速率与入接口最小速率的比值向上取整后的值。
相应地,当确定性流的发送速率大于入接口的最小速率时,上述S302具体可以实现为:
将第一报文缓存到第一队列序列的一个子序列中的,与该确定性流对应的第二数量个周期连续的调度队列中的一个调度队列中。
其中,第一队列中的每个子序列中包括确定性流对应的第二数量个周期连续的调度队列。第二数量为发送速率与入接口最小速率的比值向上取整后的值。
需要说明的是,第一网络设备接收到该确定性流的报文时,会按照接收到的报文的顺序,依次将报文缓存于第一队列序列中为该确定性流分配的多个子序列中,该确定性流的报文会依次占用多个子序列中与该确定性流对应的调度队列。
例如,若第一队列序列包括5个子序列,每个子序列中相同位置的连续2个调度队列已被分配给该确定性流,假设当前前两个子序列中的连续2个调度队列已被完全占用,第3个子序列中的为该确定性流分配的第1个调度队列已被完全占用,第2个调度队列暂未被完全占用,则将第一报文缓存在第3个子序列中的为该确定性流分配的2个周期连续的调度队列中的第2个调度队列中。
In the embodiments of the present application, the packet transmission frequency of deterministic flows ranges from once every few microseconds to once every few milliseconds, a wide span. By introducing subsequences into a queue sequence, scheduling queues can be allocated to a deterministic flow according to the transmission frequency its packets require. For example, a flow with strict jitter requirements may be allocated a scheduling queue in every subsequence; a flow with looser jitter requirements may be allocated scheduling queues in a few subsequences at fixed intervals, or even only a single scheduling queue in the whole queue sequence. In this way, the wide span of transmission frequencies of deterministic flows can be accommodated.
The following is illustrated with a specific example. If each scheduling queue is configured with 1 MTU and the minimum ingress rate of the first network device is 100 Mbps, Table 7 shows, for each egress rate, the queue-sequence cycle, the number of subsequences per queue sequence, the minimum delay jitter, and the number of queue sequences.
Table 7
Figure PCTCN2022112753-appb-000008
If each scheduling queue is configured with 2 MTUs and the minimum ingress rate of the first network device is 100 Mbps, Table 8 shows, for each egress rate, the queue-sequence cycle, the number of subsequences per queue sequence, the minimum delay jitter, and the number of queue sequences.
Table 8
Figure PCTCN2022112753-appb-000009
The implementation of the scheduling queues in the above embodiments is described below.
If the first queue sequence does not comprise subsequences, each scheduling queue in the first queue sequence may be a physical queue.
Alternatively, to avoid excessive consumption of queue resources, each scheduling queue may instead be a virtual queue.
If the first queue sequence comprises subsequences, each subsequence of the first queue sequence is a physical queue in the first network device, and each scheduling queue within a subsequence is a virtual queue.
Optionally, each virtual queue in the embodiments of the present application may be a traffic-shaping leaky-bucket queue. That is, each virtual queue may be implemented with a leaky-bucket algorithm based on credit-based traffic shaping (Credit-based Traffic Shaping), with multiple traffic-shaping leaky-bucket queues sharing one physical queue.
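A minimal credit-based shaping sketch (assumed semantics, not the patent's exact algorithm): each virtual queue accrues credit at its configured rate and may release a frame only once the accrued credit covers the frame length, so that many such buckets can share one physical queue.

```python
class LeakyBucketQueue:
    """One traffic-shaping leaky bucket modeling a virtual queue."""

    def __init__(self, rate_bps, burst_bytes):
        self.rate = rate_bps          # credit accrual rate, bits per second
        self.burst = burst_bytes      # credit cap, bytes
        self.credit = 0.0             # accumulated credit, bytes
        self.last = 0.0               # time of last update, seconds

    def try_send(self, now_s, frame_bytes):
        """Accrue credit up to `now_s`; release the frame if credit suffices."""
        self.credit = min(self.burst,
                          self.credit + (now_s - self.last) * self.rate / 8)
        self.last = now_s
        if self.credit >= frame_bytes:
            self.credit -= frame_bytes
            return True
        return False

bucket = LeakyBucketQueue(rate_bps=100e6, burst_bytes=1500)
ok_now = bucket.try_send(0.0, 1500)        # False: no credit accrued yet
ok_later = bucket.try_send(130e-6, 1500)   # True: enough credit has accrued
```

At 100 Mbps a 1500-byte frame needs about 120 µs of accrued credit, which matches the per-queue pacing implied by the cycle figures above.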
As an example, if each scheduling queue is configured with 1 MTU and the minimum ingress rate of the first network device is 100 Mbps, Table 9 shows, for each egress rate, the queue-sequence cycle, the number of subsequences per queue sequence, the minimum delay jitter, the number of queue sequences, the number of leaky buckets per queue sequence, the number of leaky buckets per subsequence, the total number of leaky buckets across the two queue sequences, and the total buffer requirement.
Table 9
Figure PCTCN2022112753-appb-000010
Figure PCTCN2022112753-appb-000011
As an example, if each scheduling queue is configured with 2 MTUs and the minimum ingress rate of the first network device is 100 Mbps, Table 10 shows, for each egress rate, the queue-sequence cycle, the number of subsequences per queue sequence, the minimum delay jitter, the number of queue sequences, the number of leaky buckets per queue sequence, the number of leaky buckets per subsequence, the total number of leaky buckets across the two queue sequences, and the total buffer requirement.
Table 10
Figure PCTCN2022112753-appb-000012
The first network device in the embodiments of the present application may be a PE device and the second network device a P device. After the P device receives packets forwarded by the PE device under the deterministic-flow forwarding mechanism of these embodiments, it does not need the complex queues described in the above embodiments to forward them: a packet received on the P device's ingress interface may be sent out of the egress interface after an offset of a few time slots, as long as the maximum-delay requirement of the deterministic flow is met. In other words, the P device can forward packets at a higher rate with a smaller buffer.
The P device may buffer received packets over one cycle T; when the egress rate of the P device is high, it may even buffer them over a cycle of T/2.
For example, if the egress rate is 1 Tbps and the maximum-delay requirement is 150 µs, 15 MB of buffer is needed. Given the delay requirement (50 µs) of current 400G network processing unit (NPU) systems, the buffer requirement is manageable.
With the embodiments of the present application, on the basis of keeping the system and chip implementable, the deterministic-flow forwarding mechanism of a deterministic network can satisfy the various access rates required by deterministic services (e.g., from 100 Mbps to 100 Gbps), the various packet transmission frequencies (e.g., from microsecond level to second level), and the various packet lengths (e.g., from 64 B to 1.5 KB), building a more complete forwarding mechanism for deterministic flows.
Based on the same inventive concept, an embodiment of the present application further provides a packet forwarding apparatus applied to a first network device. As shown in Fig. 6, the apparatus includes:
a receiving module 601, configured to receive a first packet from a user-side device;
a buffering module 602, configured to buffer the first packet in a first scheduling queue, in a first queue sequence, corresponding to the deterministic flow to which the first packet belongs, where the first queue sequence comprises a first number of cycle-consecutive scheduling queues, the first number is the ratio of the egress rate of the first network device to the minimum ingress rate of the first network device, and the egress rate is the rate of the egress interface used to forward the first packet; and
a forwarding module 603, configured to forward the packets in the first scheduling queue to a second network device according to the scheduling cycle of the first scheduling queue.
Optionally, each scheduling queue in the first queue sequence is configured with at least one maximum transmission unit (MTU), each MTU being 1.5 KB in size.
Optionally, when the sending rate of the deterministic flow is less than or equal to the minimum ingress rate, the buffering module 602 is specifically configured to:
buffer the first packet in a designated scheduling queue, in the first queue sequence, corresponding to the deterministic flow;
or,
when the sending rate of the deterministic flow is greater than the minimum ingress rate, the buffering module 602 is specifically configured to:
buffer the first packet in one of a second number of cycle-consecutive scheduling queues, in the first queue sequence, corresponding to the deterministic flow; the second number is the ratio of the sending rate to the minimum ingress rate rounded up.
Optionally, the first queue sequence comprises multiple subsequences;
when the sending rate of the deterministic flow is less than or equal to the minimum ingress rate, the buffering module 602 is specifically configured to:
buffer the first packet in a designated scheduling queue, corresponding to the deterministic flow, within one subsequence of the first queue sequence; each subsequence of the first queue sequence comprises one designated scheduling queue corresponding to the deterministic flow;
or,
when the sending rate of the deterministic flow is greater than the minimum ingress rate, the buffering module 602 is specifically configured to:
buffer the first packet in one of a second number of cycle-consecutive scheduling queues, corresponding to the deterministic flow, within one subsequence of the first queue sequence; each subsequence of the first queue sequence comprises the second number of cycle-consecutive scheduling queues corresponding to the deterministic flow, and the second number is the ratio of the sending rate to the minimum ingress rate rounded up.
Optionally, the minimum value of the minimum ingress rate is 100 Mbps, and the minimum egress rate is 10GE.
Optionally, the scheduling cycle duration of the first queue sequence is the ratio of the length of the first queue sequence to the egress rate, and every scheduling queue in the first queue sequence has the same scheduling cycle duration.
Optionally, each subsequence of the first queue sequence is a physical queue in the first network device;
each scheduling queue in a subsequence is a virtual queue.
Optionally, the virtual queue is a traffic-shaping leaky-bucket queue.
Optionally, each scheduling queue in the first queue sequence is a physical queue in the first network device.
Optionally, the buffering module 602 is further configured to: if the remaining buffer space of the first scheduling queue is greater than or equal to the length of the first packet, perform the step of buffering the first packet in the first scheduling queue, in the first queue sequence, corresponding to the deterministic flow to which the first packet belongs;
if the remaining buffer space of the first scheduling queue is smaller than the length of the first packet, buffer the first packet in a second scheduling queue, in a second queue sequence, corresponding to the deterministic flow to which the first packet belongs; the second queue sequence comprises the first number of cycle-consecutive scheduling queues, the cycles of the first queue sequence and the second queue sequence are consecutive, and the scheduling-queue information in the first queue sequence and the second queue sequence is the same; and
the forwarding module 603 is further configured to forward the packets in the second scheduling queue to the second network device according to the scheduling cycle of the second scheduling queue.
An embodiment of the present application further provides a network device. As shown in Fig. 7, the network device includes:
a processor 701;
a transceiver 704; and
a machine-readable storage medium 702 storing machine-executable instructions executable by the processor 701, the machine-executable instructions causing the processor 701 to perform the following steps:
receiving, through the transceiver 704, a first packet from a user-side device;
buffering the first packet in a first scheduling queue, in a first queue sequence, corresponding to the deterministic flow to which the first packet belongs, where the first queue sequence comprises a first number of cycle-consecutive scheduling queues, the first number is the ratio of the egress rate of the first network device to the minimum ingress rate of the first network device, and the egress rate is the rate of the egress interface used to forward the first packet; and
forwarding, through the transceiver 704, the packets in the first scheduling queue to a second network device according to the scheduling cycle of the first scheduling queue.
Optionally, each scheduling queue in the first queue sequence is configured with at least one maximum transmission unit (MTU), each MTU being 1.5 KB in size.
Optionally, when the sending rate of the deterministic flow is less than or equal to the minimum ingress rate, the machine-executable instructions further cause the processor 701 to perform the following step:
buffering the first packet in a designated scheduling queue, in the first queue sequence, corresponding to the deterministic flow;
or,
when the sending rate of the deterministic flow is greater than the minimum ingress rate, the machine-executable instructions further cause the processor 701 to perform the following step:
buffering the first packet in one of a second number of cycle-consecutive scheduling queues, in the first queue sequence, corresponding to the deterministic flow; the second number is the ratio of the sending rate to the minimum ingress rate rounded up.
Optionally, the first queue sequence comprises multiple subsequences;
when the sending rate of the deterministic flow is less than or equal to the minimum ingress rate, the machine-executable instructions further cause the processor 701 to perform the following step:
buffering the first packet in a designated scheduling queue, corresponding to the deterministic flow, within one subsequence of the first queue sequence; each subsequence of the first queue sequence comprises one designated scheduling queue corresponding to the deterministic flow;
or,
when the sending rate of the deterministic flow is greater than the minimum ingress rate, the machine-executable instructions further cause the processor 701 to perform the following step:
buffering the first packet in one of a second number of cycle-consecutive scheduling queues, corresponding to the deterministic flow, within one subsequence of the first queue sequence; each subsequence of the first queue sequence comprises the second number of cycle-consecutive scheduling queues corresponding to the deterministic flow, and the second number is the ratio of the sending rate to the minimum ingress rate rounded up.
Optionally, the minimum value of the minimum ingress rate is 100 Mbps, and the minimum egress rate is 10GE.
Optionally, the scheduling cycle duration of the first queue sequence is the ratio of the length of the first queue sequence to the egress rate, and every scheduling queue in the first queue sequence has the same scheduling cycle duration.
Optionally, each subsequence of the first queue sequence is a physical queue in the first network device;
each scheduling queue in a subsequence is a virtual queue.
Optionally, the virtual queue is a traffic-shaping leaky-bucket queue.
Optionally, each scheduling queue in the first queue sequence is a physical queue in the first network device.
Optionally, the machine-executable instructions further cause the processor 701 to perform the following steps:
if the remaining buffer space of the first scheduling queue is greater than or equal to the length of the first packet, performing the step of buffering the first packet in the first scheduling queue, in the first queue sequence, corresponding to the deterministic flow to which the first packet belongs;
if the remaining buffer space of the first scheduling queue is smaller than the length of the first packet, buffering the first packet in a second scheduling queue, in a second queue sequence, corresponding to the deterministic flow to which the first packet belongs; the second queue sequence comprises the first number of cycle-consecutive scheduling queues, the cycles of the first queue sequence and the second queue sequence are consecutive, and the scheduling-queue information in the first queue sequence and the second queue sequence is the same; and
forwarding, through the transceiver 704, the packets in the second scheduling queue to the second network device according to the scheduling cycle of the second scheduling queue.
As shown in Fig. 7, the network device may further include a communication bus 703. The processor 701, the machine-readable storage medium 702, and the transceiver 704 communicate with one another via the communication bus 703, which may be a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like. The communication bus 703 may be divided into an address bus, a data bus, a control bus, and so on.
The transceiver 704 may be a wireless communication module; under the control of the processor 701, it exchanges data with other devices.
The machine-readable storage medium 702 may include random access memory (RAM) or non-volatile memory (NVM), for example at least one disk storage. Alternatively, the machine-readable storage medium 702 may be at least one storage device located away from the aforementioned processor.
The processor 701 may be a general-purpose processor, including a central processing unit (CPU), a network processor (NP), etc.; it may also be a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component.
Based on the same inventive concept, and according to the packet forwarding method provided by the above embodiments, an embodiment of the present application further provides a machine-readable storage medium storing machine-executable instructions executable by a processor. The machine-executable instructions cause the processor to implement the steps of any of the packet forwarding methods described above.
In yet another embodiment of the present application, a computer program product containing instructions is also provided which, when run on a computer, causes the computer to perform the steps of any of the packet forwarding methods in the above embodiments.
The above embodiments may be implemented in whole or in part by software, hardware, firmware, or any combination thereof. When software is used, they may be implemented in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer program instructions are loaded and executed on a computer, the processes or functions according to the embodiments of the present application are produced in whole or in part. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable apparatus. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another; for example, they may be transmitted from one website, computer, server, or data center to another by wire (e.g., coaxial cable, optical fiber, digital subscriber line (DSL)) or wirelessly (e.g., infrared, radio, microwave). The computer-readable storage medium may be any available medium accessible to a computer, or a data storage device such as a server or data center integrating one or more available media. The available medium may be a magnetic medium (e.g., floppy disk, hard disk, magnetic tape), an optical medium (e.g., DVD), or a semiconductor medium (e.g., a solid state disk (SSD)).
It should be noted that, herein, relational terms such as first and second are used only to distinguish one entity or operation from another and do not necessarily require or imply any such actual relationship or order between these entities or operations. Moreover, the terms "include", "comprise", or any other variant thereof are intended to cover non-exclusive inclusion, so that a process, method, article, or device that includes a series of elements includes not only those elements but also other elements not expressly listed, or elements inherent to such a process, method, article, or device. Without further limitation, an element defined by the phrase "including a ..." does not exclude the presence of additional identical elements in the process, method, article, or device that includes the element.
The embodiments in this specification are described in a related manner; for identical or similar parts, the embodiments may refer to one another, and each embodiment focuses on its differences from the others. In particular, since the apparatus embodiments are substantially similar to the method embodiments, they are described briefly; for relevant points, refer to the description of the method embodiments.
The above are merely preferred embodiments of the present application and are not intended to limit it. Any modification, equivalent replacement, improvement, etc. made within the spirit and principles of the present application shall fall within its scope of protection.

Claims (32)

  1. A packet forwarding method, characterized in that the method is applied to a first network device and comprises:
    receiving a first packet from a user-side device;
    buffering the first packet in a first scheduling queue, in a first queue sequence, corresponding to a deterministic flow to which the first packet belongs, wherein the first queue sequence comprises a first number of cycle-consecutive scheduling queues, the first number is the ratio of an egress rate of the first network device to a minimum ingress rate of the first network device, and the egress rate is the rate of the egress interface used to forward the first packet; and
    forwarding the packets in the first scheduling queue to a second network device according to a scheduling cycle of the first scheduling queue.
  2. The method according to claim 1, characterized in that each scheduling queue in the first queue sequence is configured with at least one maximum transmission unit (MTU), each MTU being 1.5 KB in size.
  3. The method according to claim 1, characterized in that,
    when the sending rate of the deterministic flow is less than or equal to the minimum ingress rate, the buffering of the first packet in the first scheduling queue, in the first queue sequence, corresponding to the deterministic flow to which the first packet belongs comprises:
    buffering the first packet in a designated scheduling queue, in the first queue sequence, corresponding to the deterministic flow;
    or,
    when the sending rate of the deterministic flow is greater than the minimum ingress rate, the buffering of the first packet in the first scheduling queue, in the first queue sequence, corresponding to the deterministic flow to which the first packet belongs comprises:
    buffering the first packet in one of a second number of cycle-consecutive scheduling queues, in the first queue sequence, corresponding to the deterministic flow; the second number being the ratio of the sending rate to the minimum ingress rate rounded up.
  4. The method according to claim 1, characterized in that the first queue sequence comprises multiple subsequences;
    when the sending rate of the deterministic flow is less than or equal to the minimum ingress rate, the buffering of the first packet in the first scheduling queue, in the first queue sequence, corresponding to the deterministic flow to which the first packet belongs comprises:
    buffering the first packet in a designated scheduling queue, corresponding to the deterministic flow, within one subsequence of the first queue sequence; each subsequence of the first queue sequence comprising one designated scheduling queue corresponding to the deterministic flow;
    or,
    when the sending rate of the deterministic flow is greater than the minimum ingress rate, the buffering of the first packet in the first scheduling queue, in the first queue sequence, corresponding to the deterministic flow to which the first packet belongs comprises:
    buffering the first packet in one of a second number of cycle-consecutive scheduling queues, corresponding to the deterministic flow, within one subsequence of the first queue sequence; each subsequence of the first queue sequence comprising the second number of cycle-consecutive scheduling queues corresponding to the deterministic flow, the second number being the ratio of the sending rate to the minimum ingress rate rounded up.
  5. The method according to claim 4, characterized in that the minimum ingress rate is 100 Mbps, and the minimum value of the egress rate is 10GE.
  6. The method according to any one of claims 1-5, characterized in that the scheduling cycle duration of the first queue sequence is the ratio of the length of the first queue sequence to the egress rate, and each scheduling queue in the first queue sequence has the same scheduling cycle duration.
  7. The method according to claim 4, characterized in that each subsequence of the first queue sequence is a physical queue in the first network device;
    each scheduling queue in a subsequence is a virtual queue.
  8. The method according to claim 7, characterized in that the virtual queue is a traffic-shaping leaky-bucket queue.
  9. The method according to any one of claims 1-3, characterized in that each scheduling queue in the first queue sequence is a physical queue in the first network device.
  10. The method according to claim 1, characterized in that, after receiving the first packet from the user-side device, the method further comprises:
    if the remaining buffer space of the first scheduling queue is greater than or equal to the length of the first packet, performing the step of buffering the first packet in the first scheduling queue, in the first queue sequence, corresponding to the deterministic flow to which the first packet belongs;
    if the remaining buffer space of the first scheduling queue is smaller than the length of the first packet, buffering the first packet in a second scheduling queue, in a second queue sequence, corresponding to the deterministic flow to which the first packet belongs; the second queue sequence comprising the first number of cycle-consecutive scheduling queues, the cycles of the first queue sequence and the second queue sequence being consecutive, and the scheduling-queue information in the first queue sequence and the second queue sequence being the same; and
    forwarding the packets in the second scheduling queue to the second network device according to the scheduling cycle of the second scheduling queue.
  11. A packet forwarding apparatus, characterized in that the apparatus is applied to a first network device and comprises:
    a receiving module, configured to receive a first packet from a user-side device;
    a buffering module, configured to buffer the first packet in a first scheduling queue, in a first queue sequence, corresponding to a deterministic flow to which the first packet belongs, wherein the first queue sequence comprises a first number of cycle-consecutive scheduling queues, the first number is the ratio of an egress rate of the first network device to a minimum ingress rate of the first network device, and the egress rate is the rate of the egress interface used to forward the first packet; and
    a forwarding module, configured to forward the packets in the first scheduling queue to a second network device according to a scheduling cycle of the first scheduling queue.
  12. The apparatus according to claim 11, characterized in that each scheduling queue in the first queue sequence is configured with at least one maximum transmission unit (MTU), each MTU being 1.5 KB in size.
  13. The apparatus according to claim 11, characterized in that,
    when the sending rate of the deterministic flow is less than or equal to the minimum ingress rate, the buffering module is specifically configured to:
    buffer the first packet in a designated scheduling queue, in the first queue sequence, corresponding to the deterministic flow;
    or,
    when the sending rate of the deterministic flow is greater than the minimum ingress rate, the buffering module is specifically configured to:
    buffer the first packet in one of a second number of cycle-consecutive scheduling queues, in the first queue sequence, corresponding to the deterministic flow; the second number being the ratio of the sending rate to the minimum ingress rate rounded up.
  14. The apparatus according to claim 11, characterized in that the first queue sequence comprises multiple subsequences;
    when the sending rate of the deterministic flow is less than or equal to the minimum ingress rate, the buffering module is specifically configured to:
    buffer the first packet in a designated scheduling queue, corresponding to the deterministic flow, within one subsequence of the first queue sequence; each subsequence of the first queue sequence comprising one designated scheduling queue corresponding to the deterministic flow;
    or,
    when the sending rate of the deterministic flow is greater than the minimum ingress rate, the buffering module is specifically configured to:
    buffer the first packet in one of a second number of cycle-consecutive scheduling queues, corresponding to the deterministic flow, within one subsequence of the first queue sequence; each subsequence of the first queue sequence comprising the second number of cycle-consecutive scheduling queues corresponding to the deterministic flow, the second number being the ratio of the sending rate to the minimum ingress rate rounded up.
  15. The apparatus according to claim 14, characterized in that the minimum value of the minimum ingress rate is 100 Mbps, and the minimum value of the egress rate is 10GE.
  16. The apparatus according to any one of claims 11-15, characterized in that the scheduling cycle duration of the first queue sequence is the ratio of the length of the first queue sequence to the egress rate, and each scheduling queue in the first queue sequence has the same scheduling cycle duration.
  17. The apparatus according to claim 14, characterized in that each subsequence of the first queue sequence is a physical queue in the first network device;
    each scheduling queue in a subsequence is a virtual queue.
  18. The apparatus according to claim 17, characterized in that the virtual queue is a traffic-shaping leaky-bucket queue.
  19. The apparatus according to any one of claims 11-13, characterized in that each scheduling queue in the first queue sequence is a physical queue in the first network device.
  20. The apparatus according to claim 11, characterized in that,
    the buffering module is further configured to: if the remaining buffer space of the first scheduling queue is greater than or equal to the length of the first packet, perform the step of buffering the first packet in the first scheduling queue, in the first queue sequence, corresponding to the deterministic flow to which the first packet belongs;
    if the remaining buffer space of the first scheduling queue is smaller than the length of the first packet, buffer the first packet in a second scheduling queue, in a second queue sequence, corresponding to the deterministic flow to which the first packet belongs; the second queue sequence comprising the first number of cycle-consecutive scheduling queues, the cycles of the first queue sequence and the second queue sequence being consecutive, and the scheduling-queue information in the first queue sequence and the second queue sequence being the same; and
    the forwarding module is further configured to forward the packets in the second scheduling queue to the second network device according to the scheduling cycle of the second scheduling queue.
  21. A network device, characterized in that the network device comprises:
    a processor;
    a transceiver; and
    a machine-readable storage medium storing machine-executable instructions executable by the processor, the machine-executable instructions causing the processor to perform the following steps:
    receiving, through the transceiver, a first packet from a user-side device;
    buffering the first packet in a first scheduling queue, in a first queue sequence, corresponding to a deterministic flow to which the first packet belongs, wherein the first queue sequence comprises a first number of cycle-consecutive scheduling queues, the first number is the ratio of an egress rate of the first network device to a minimum ingress rate of the first network device, and the egress rate is the rate of the egress interface used to forward the first packet; and
    forwarding, through the transceiver, the packets in the first scheduling queue to a second network device according to a scheduling cycle of the first scheduling queue.
  22. The network device according to claim 21, characterized in that each scheduling queue in the first queue sequence is configured with at least one maximum transmission unit (MTU), each MTU being 1.5 KB in size.
  23. The network device according to claim 21, characterized in that,
    when the sending rate of the deterministic flow is less than or equal to the minimum ingress rate, the machine-executable instructions further cause the processor to perform the following step:
    buffering the first packet in a designated scheduling queue, in the first queue sequence, corresponding to the deterministic flow;
    or,
    when the sending rate of the deterministic flow is greater than the minimum ingress rate, the machine-executable instructions further cause the processor to perform the following step:
    buffering the first packet in one of a second number of cycle-consecutive scheduling queues, in the first queue sequence, corresponding to the deterministic flow; the second number being the ratio of the sending rate to the minimum ingress rate rounded up.
  24. The network device according to claim 21, characterized in that the first queue sequence comprises multiple subsequences;
    when the sending rate of the deterministic flow is less than or equal to the minimum ingress rate, the machine-executable instructions further cause the processor to perform the following step:
    buffering the first packet in a designated scheduling queue, corresponding to the deterministic flow, within one subsequence of the first queue sequence; each subsequence of the first queue sequence comprising one designated scheduling queue corresponding to the deterministic flow;
    or,
    when the sending rate of the deterministic flow is greater than the minimum ingress rate, the machine-executable instructions further cause the processor to perform the following step:
    buffering the first packet in one of a second number of cycle-consecutive scheduling queues, corresponding to the deterministic flow, within one subsequence of the first queue sequence; each subsequence of the first queue sequence comprising the second number of cycle-consecutive scheduling queues corresponding to the deterministic flow, the second number being the ratio of the sending rate to the minimum ingress rate rounded up.
  25. The network device according to claim 24, characterized in that the minimum value of the minimum ingress rate is 100 Mbps, and the minimum value of the egress rate is 10GE.
  26. The network device according to any one of claims 21-25, characterized in that the scheduling cycle duration of the first queue sequence is the ratio of the length of the first queue sequence to the egress rate, and each scheduling queue in the first queue sequence has the same scheduling cycle duration.
  27. The network device according to claim 24, characterized in that each subsequence of the first queue sequence is a physical queue in the first network device;
    each scheduling queue in a subsequence is a virtual queue.
  28. The network device according to claim 27, characterized in that the virtual queue is a traffic-shaping leaky-bucket queue.
  29. The network device according to any one of claims 21-23, characterized in that each scheduling queue in the first queue sequence is a physical queue in the first network device.
  30. The network device according to claim 21, characterized in that the machine-executable instructions further cause the processor to perform the following steps:
    if the remaining buffer space of the first scheduling queue is greater than or equal to the length of the first packet, performing the step of buffering the first packet in the first scheduling queue, in the first queue sequence, corresponding to the deterministic flow to which the first packet belongs;
    if the remaining buffer space of the first scheduling queue is smaller than the length of the first packet, buffering the first packet in a second scheduling queue, in a second queue sequence, corresponding to the deterministic flow to which the first packet belongs; the second queue sequence comprising the first number of cycle-consecutive scheduling queues, the cycles of the first queue sequence and the second queue sequence being consecutive, and the scheduling-queue information in the first queue sequence and the second queue sequence being the same; and
    forwarding, through the transceiver, the packets in the second scheduling queue to the second network device according to the scheduling cycle of the second scheduling queue.
  31. A machine-readable storage medium, characterized in that it stores machine-executable instructions which, when invoked and executed by a processor, cause the processor to implement the method steps of any one of claims 1-10.
  32. A computer program product, characterized in that the computer program product causes a processor to implement the method steps of any one of claims 1-10.
PCT/CN2022/112753 2022-08-16 2022-08-16 Packet forwarding method and apparatus WO2024036476A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
PCT/CN2022/112753 WO2024036476A1 (zh) 2022-08-16 2022-08-16 Packet forwarding method and apparatus
CN202280002800.XA CN117897936A (zh) 2022-08-16 2022-08-16 Packet forwarding method and apparatus

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2022/112753 WO2024036476A1 (zh) 2022-08-16 2022-08-16 Packet forwarding method and apparatus

Publications (1)

Publication Number Publication Date
WO2024036476A1 true WO2024036476A1 (zh) 2024-02-22

Family

ID=89940379

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/112753 WO2024036476A1 (zh) 2022-08-16 2022-08-16 一种报文转发方法及装置

Country Status (2)

Country Link
CN (1) CN117897936A (zh)
WO (1) WO2024036476A1 (zh)

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108282415A (zh) * 2017-12-29 2018-07-13 Beijing Huawei Digital Technologies Co., Ltd. Scheduling method and device
CN112585914A (zh) * 2020-11-27 2021-03-30 New H3C Technologies Co., Ltd. Packet forwarding method and apparatus, and electronic device
CN113382442A (zh) * 2020-03-09 2021-09-10 China Mobile Communications Research Institute Packet transmission method and apparatus, network node, and storage medium
CN113950104A (zh) * 2021-08-26 2022-01-18 Xi'an Institute of Space Radio Technology Deterministic scheduling method for satellite network services based on dynamic cycle mapping
CN114143378A (zh) * 2021-11-24 2022-03-04 New H3C Technologies Co., Ltd. Network optimization method and apparatus, gateway device, and storage medium
WO2022068617A1 (zh) * 2020-09-30 2022-04-07 Huawei Technologies Co., Ltd. Traffic shaping method and apparatus
CN114374653A (zh) * 2021-12-28 2022-04-19 Tongji University Variable-bit-rate service scheduling method based on traffic prediction
WO2022095669A1 (zh) * 2020-11-06 2022-05-12 China Mobile Communications Research Institute Communication scheduling method and apparatus, and storage medium


Also Published As

Publication number Publication date
CN117897936A (zh) 2024-04-16


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22955258

Country of ref document: EP

Kind code of ref document: A1