WO2019127597A1 - Method, device and system for sending packets - Google Patents

Method, device and system for sending packets

Info

Publication number
WO2019127597A1
WO2019127597A1 (application PCT/CN2017/120430)
Authority
WO
WIPO (PCT)
Prior art keywords
queue
time window
enqueue
packets
flow
Prior art date
Application number
PCT/CN2017/120430
Other languages
English (en)
French (fr)
Inventor
张镇星
李楠
Original Assignee
华为技术有限公司
Priority date
Filing date
Publication date
Application filed by 华为技术有限公司 (Huawei Technologies Co., Ltd.)
Priority to PCT/CN2017/120430 priority Critical patent/WO2019127597A1/zh
Priority to CN202211581634.1A priority patent/CN116016371A/zh
Priority to EP17936799.0A priority patent/EP3720069A4/en
Priority to CN201780097888.7A priority patent/CN111512602B/zh
Publication of WO2019127597A1 publication Critical patent/WO2019127597A1/zh
Priority to US16/916,580 priority patent/US20200336435A1/en

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 47/00 Traffic control in data switching networks
    • H04L 47/50 Queue scheduling
    • H04L 47/56 Queue scheduling implementing delay-aware scheduling
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 67/00 Network arrangements or protocols for supporting network services or applications
    • H04L 67/50 Network services
    • H04L 67/60 Scheduling or organising the servicing of application requests, e.g. requests for application data transmissions using the analysis and optimisation of the required network resources
    • H04L 67/61 Scheduling or organising the servicing of application requests, e.g. requests for application data transmissions using the analysis and optimisation of the required network resources taking into account QoS or priority requirements
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 47/00 Traffic control in data switching networks
    • H04L 47/10 Flow control; Congestion control
    • H04L 47/24 Traffic characterised by specific attributes, e.g. priority or QoS
    • H04L 47/2441 Traffic characterised by specific attributes, e.g. priority or QoS relying on flow classification, e.g. using integrated services [IntServ]
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 47/00 Traffic control in data switching networks
    • H04L 47/10 Flow control; Congestion control
    • H04L 47/28 Flow control; Congestion control in relation to timing considerations
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 51/00 User-to-user messaging in packet-switching networks, transmitted according to store-and-forward or real-time protocols, e.g. e-mail
    • H04L 51/21 Monitoring or handling of messages
    • H04L 51/226 Delivery according to priorities

Definitions

  • the embodiments of the present invention relate to the field of network transmission, and in particular, to a method, device, and system for sending a message.
  • a delay-sensitive network generally refers to a communication network applied in a special field such as industrial control.
  • such a network has an upper limit on the end-to-end delay of specific traffic from the sender to the receiver; if a message arrives at the destination later than the promised time, the message may become invalid.
  • to achieve this, each node and port on the end-to-end path of the network needs to reserve certain resources for the traffic, thereby preventing unpredictable congestion, and the additional queuing delays it would cause, during transmission.
  • the prior art adopts a global clock synchronization method to transmit delay sensitive traffic.
  • the method first requires strict synchronization of the clocks of all nodes in the entire network, and secondly requires the entire network to maintain a uniform time window cadence.
  • the enqueue queue of each port in each time window is statically configured on all nodes of the entire network.
  • after receiving a packet, the network device adds the packet to the queue of the corresponding egress port according to the time window in which the current global clock falls. The queue that the packet joins will be opened in the next time window, and the packet is then scheduled and sent.
  • Constraint 1: a message sent by the upstream network device in time window N must be received by the next network device within its time window N.
  • Constraint 2: a message received by the current network device in time window N must enter the queue within time window N.
  • under these constraints, a packet received by a network device in the Nth time window is sent out in the (N+1)th time window on each network device it passes through, and is received in the (N+1)th time window of the next node.
  • the maximum end-to-end delay of the message is therefore (K+1)*T, where K is the number of hops the message experiences in the network and T is the globally uniform time window width. In this way, the transmission delay can be guaranteed.
  • the enqueue and dequeue timing of the message is shown in Figure 1. It can be seen from FIG. 1 that, because of the transmission delay between network devices and the packet processing delay, a packet sent by the source near the end of a time window may only reach the queue of the next network device after that window has ended. This requires the source to send packets no later than a certain time; otherwise, the packet cannot be guaranteed to enter the intended time window.
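For concreteness, the bound above can be evaluated numerically. The following Python sketch uses assumed example values (the hop count and the 125 µs window are illustrative, not taken from the figures):

```python
# Prior-art bound: max end-to-end delay = (K + 1) * T
hops = 5             # K: number of hops the packet traverses (assumed example value)
window_us = 125      # T: globally uniform time window width in microseconds (assumed)
max_delay_us = (hops + 1) * window_us
print(max_delay_us)  # 750 us for these example values
```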
  • the embodiment of the present invention provides a method, a device, a system, and a storage medium for sending a message, which can increase the usable bandwidth of delay-sensitive traffic and improve bandwidth utilization.
  • the embodiment of the present application provides a method for transmitting a packet, where the method is applied to a network device in a transmission system, where the network device receives a packet and identifies a flow to which the packet belongs.
  • the reserved resource includes the number of packets that the flow can send in a time window.
  • the network device schedules the packet to be sent in a specific time window according to the number of packets that the flow can send in one time window and the number of packets of the flow already sent (or accumulated in the flow's queue) in that time window.
  • the output time window of the packet in the embodiment of the present application is dynamically determined according to real-time information (for example, how many packets have accumulated in a time window), instead of being completely determined by static configuration.
  • in this way, the flexibility of the packet sending process is increased: the sending device only needs to ensure that the number of packets sent in each time window conforms to the number of packets that can be sent in one time window, without constraining the sending time of each individual packet. That is, once the number of packets sent by the network device in one time window reaches the allowed number, the next packet is scheduled to the next time window.
  • as a result, the network device can move packets sent by the upstream device at any point within a time window to a suitable time window, which avoids constraining the time at which the upstream device sends packets, increases the available bandwidth of delay-sensitive traffic, and reduces the waste of bandwidth resources.
  • the network device is pre-configured with queue resource reservation information and traffic resource reservation information for sending the flow.
  • the queue resource reservation information includes the queues used for sending the flow together with the enqueue timing and output timing of those queues, where the enqueue timing is used to define the enqueue queue of each time window, and the output timing is used to define the open/closed state of each queue within each time window.
  • the traffic resource reservation information records the current enqueue queue of the flow and a packet count indicating the number of packets in the current enqueue queue; the number of packets already sent in the one time window is given by the packet count of the current enqueue queue.
  • the network device sending the message in a specific time window according to the number of packets that the flow can send in one time window and the number of packets already sent in that time window includes:
  • the network device determines the enqueue queue of the flow according to the arrival time window of the packet, the enqueue timing of the queues, the number of packets that the flow can send in one time window, and the number of packets in the current enqueue queue;
  • the packet is added to the determined enqueue queue of the flow; the queue in which the packet is located is opened in the time window defined for it in the output timing, and the packet is sent.
  • since each time window can send the required number of packets, the delay-sensitive traffic still has, in terms of results, a guaranteed end-to-end delay. That is to say, this embodiment controls the total delay of the packet in the transmission process by defining the enqueue timing and output timing based on time windows, and does not need to strictly control the delay within each network device by constraining the sending time. It can thus guarantee the end-to-end delay of delay-sensitive traffic while increasing the available bandwidth of delay-sensitive traffic and reducing the waste of bandwidth resources.
  • the network device determining the enqueue queue of the flow according to the arrival time window, the enqueue timing of the queues, the number of packets that the flow can send in one time window, and the number of packets in the current enqueue queue includes: when the number of packets in the current enqueue queue has reached the number of packets that can be sent in one time window, the network device determines the enqueue queue of the next time window after the arrival time window as the enqueue queue of the flow; or, when the number of packets in the current enqueue queue has not reached the number of packets that can be sent in one time window, the network device determines the enqueue queue of the arrival time window as the enqueue queue of the flow.
  • in this way, once the number of packets sent in one time window reaches the allowed number, the next packet is moved to the next time window for transmission, which increases flexibility;
  • it also ensures that the packets sent in each time window respect the per-window packet limit, meeting the traffic characteristic requirements of the delay-sensitive traffic.
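A minimal Python sketch of this enqueue-queue decision is given below; it assumes the enqueue timing is available as a mapping from time-window index to queue, and all names and data structures are illustrative rather than taken from the patent:

```python
def select_enqueue_queue(arrival_window, enqueue_timing, per_window_limit, current_count):
    """Decide the enqueue queue of the flow for a newly arrived packet.

    enqueue_timing: dict mapping a time-window index to that window's enqueue queue.
    per_window_limit: number of packets the flow may send in one time window.
    current_count: number of packets already in the current enqueue queue.
    """
    if current_count >= per_window_limit:
        # The per-window quota is used up: use the enqueue queue of the next window.
        return enqueue_timing[arrival_window + 1]
    # Otherwise the packet joins the enqueue queue of its arrival window.
    return enqueue_timing[arrival_window]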
  • optionally, each time window also has an alternate enqueue queue, where the alternate enqueue queue of one time window is the enqueue queue of the next time window.
  • in this case, the network device determining the enqueue queue of the flow according to the arrival time window, the enqueue timing of the queues, the number of packets that the flow can send in one time window, and the number of packets in the current enqueue queue includes:
  • when the number of packets in the current enqueue queue has reached the number of packets that can be sent in one time window, the network device determines the alternate enqueue queue of the arrival time window as the enqueue queue of the flow; when the number of packets in the current enqueue queue has not reached the number of packets that can be sent in one time window, the enqueue queue of the arrival time window is determined as the enqueue queue of the flow.
  • in the output timing, the enqueue queue of the Mth time window is in an open state in the (M+1)th time window and in a closed state in other time windows, where M is an integer greater than or equal to 1.
  • the embodiment of the present application can improve the available bandwidth of the delay sensitive traffic, and can better ensure the end-to-end delay of the delay sensitive traffic.
  • the method further includes: after the network device determines the enqueue queue of the flow, it updates the enqueue queue recorded in the traffic resource reservation information according to the determined enqueue queue; each time the recorded enqueue queue is updated, the network device restores the packet count recorded in the traffic resource reservation information to an initial value, and each time a packet is added to the updated enqueue queue, the packet count is accumulated.
  • in this way, the amount of calculation in the packet sending process is reduced and the processing efficiency is improved.
  • the network device reserves resources for the flow in advance, and configures the traffic resource reservation information in the process of reserving resources.
  • the delay of the delay sensitive traffic in the transmission process can be reduced.
  • the network device is pre-configured with queue resource reservation information and traffic resource reservation information for sending the flow.
  • the queue resource reservation information includes a queue corresponding to the flow and a dequeue gating configured for the queue, and the dequeue gating is used to control the number of packets sent in each time window;
  • the traffic resource reservation information includes the number of packets that the stream can send in a time window.
  • the network device sending the message in a specific time window according to the number of packets that the flow can send in one time window and the number of packets already sent in that time window includes:
  • the network device adds the packet to the queue corresponding to the flow to which the packet belongs, and dequeues the packet from that queue for sending according to the dequeue gating;
  • the dequeue gating is updated per time window: its initial value in each time window is the number of packets that the flow corresponding to the queue can send in one time window, and it is decremented according to the number of packets sent within each time window.
  • since no time-window restriction is imposed at enqueue time and the number of packets sent per time window is instead controlled during the dequeue process, there is no requirement on the time at which packets are received and no constraint is placed on the sending time of the upstream device; the upstream device can send delay-sensitive traffic in almost the entire time window, which increases the available bandwidth of delay-sensitive traffic and reduces the waste of bandwidth resources.
  • the network device monitors the update of the time window; when the time window is updated, it obtains from the traffic resource reservation information the number of packets that the flow can send in one time window and resets the dequeue gating according to that number.
  • by updating the dequeue gating per time window according to the number of packets that can be sent in one time window, the packets sent in each time window meet the per-window packet limit and satisfy the traffic characteristic requirements of delay-sensitive traffic.
  • dequeuing the packet from the queue corresponding to the flow for sending according to the dequeue gating includes: the network device checks the queue corresponding to the flow in real time; if a packet exists in the queue corresponding to the flow and a token exists in the token bucket, the packet is sent out, until the token bucket is empty or the queue corresponding to the flow is empty.
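The dequeue gating described above can be sketched as a per-flow token bucket that is refilled to the per-window packet limit at every time-window boundary. The class below is a simplified illustration under that assumption; it is not the patented implementation, and all names are invented for the example:

```python
import collections

class FlowDequeueGate:
    """Per-flow dequeue gating: at most per_window_limit packets leave per time window."""

    def __init__(self, per_window_limit):
        self.per_window_limit = per_window_limit   # reserved traffic characteristic
        self.tokens = per_window_limit             # dequeue gating value for the current window
        self.queue = collections.deque()           # queue corresponding to the flow

    def on_window_update(self):
        # Reset the gate from the traffic resource reservation information at each window boundary.
        self.tokens = self.per_window_limit

    def enqueue(self, packet):
        self.queue.append(packet)

    def dequeue_ready_packets(self):
        # Send while both a packet and a token are available, then stop.
        sent = []
        while self.queue and self.tokens > 0:
            sent.append(self.queue.popleft())
            self.tokens -= 1
        return sent
```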
  • the embodiment of the present application can thus increase the available bandwidth of delay-sensitive traffic while still ensuring that the delay-sensitive traffic has a guaranteed end-to-end delay.
  • the network device reserves resources for the flow in advance, and configures the traffic resource reservation information and the queue resource reservation information in the process of reserving resources.
  • since queue resources are allocated per flow rather than per time window, network-wide time window alignment is not required; the solution can be deployed on devices that do not support time alignment, which expands its applicability.
  • the embodiment of the present application provides a method for managing packet transmission, where the method is applied to a network device that can transmit delay sensitive traffic.
  • the network device receives the packet, and identifies that the flow to which the packet belongs is delay-sensitive traffic of the reserved resource.
  • the network device acquires an arrival time window of the packet and traffic resource reservation information of the flow.
  • the traffic resource reservation information records the number of packets that the stream can send in a time window and the number of packets in the current enqueue queue.
  • the network device determines the enqueue queue of the flow according to the arrival time window, the number of packets that the flow can send in one time window, and the number of packets in the current enqueue queue, adds the packet to the determined enqueue queue, and sends the packet through queue scheduling.
  • in other words, the network device in the embodiment of the present application dynamically determines the enqueue queue of the flow based on the arrival time window of the packet, the number of packets that can be sent in one time window, and the number of packets in the current enqueue queue, adding packets to different queues and scheduling their output. In this way, the flexibility of the packet sending process is increased: the sending device only needs to ensure that the number of packets sent in each time window conforms to the number of packets that can be sent in one time window, without constraining the sending time of each individual packet. That is, once the number of packets sent by the network device in one time window reaches the allowed number, the next packet is scheduled to the next time window.
  • as a result, the network device can move packets sent by the upstream device at any point within a time window to a suitable time window, which avoids constraining the time at which the upstream device sends packets, increases the available bandwidth of delay-sensitive traffic, and reduces the waste of bandwidth resources.
  • the network device is pre-configured with queue resource reservation information and traffic resource reservation information for sending the flow, where the queue resource reservation information includes the queues and the enqueue timing of the queues, and the enqueue timing is used to define the enqueue queue of each time window.
  • the network device determining the enqueue queue of the flow according to the arrival time window, the number of packets that the flow can send in one time window, and the number of packets in the current enqueue queue includes: when the number of packets in the current enqueue queue has reached the number of packets that can be sent in one time window, the network device determines the enqueue queue of the next time window after the arrival time window in the enqueue timing as the enqueue queue of the flow;
  • when the number of packets in the current enqueue queue has not reached the number of packets that can be sent in one time window, the enqueue queue of the arrival time window in the enqueue timing is determined as the enqueue queue of the flow.
  • the queue resource reservation information further includes an output timing, where the output timing is used to define a switch state of each queue in each time window.
  • sending the packet through queue scheduling includes: the network device opens the queue in which the packet is located in the time window defined in the output timing for opening that queue, and sends the packet.
  • the embodiment of the present application provides a network device, where the network device includes a receiving module configured to receive a packet.
  • the processing module is configured to identify that the flow to which the packet belongs is delay-sensitive traffic with reserved resources, where the reserved resources include the number of packets that the flow can send in one time window. The processing module is then configured to schedule the packet to be sent in a specific time window according to the number of packets that the flow can send in one time window and the number of packets already sent in that time window from the queue of the flow.
  • the output time window of the packet in the embodiment of the present application is dynamically determined according to real-time information (for example, how many packets have accumulated in a time window), instead of being completely determined by static configuration.
  • in this way, the flexibility of the packet sending process is increased: the sending device only needs to ensure that the number of packets sent in each time window conforms to the number of packets that can be sent in one time window, without constraining the sending time of each individual packet. That is, once the number of packets sent by the network device in one time window reaches the allowed number, the next packet is scheduled to the next time window.
  • as a result, the network device can move packets sent by the upstream device at any point within a time window to a suitable time window, which avoids constraining the time at which the upstream device sends packets, increases the available bandwidth of delay-sensitive traffic, and reduces the waste of bandwidth resources.
  • the network device further includes a first storage module.
  • the first storage module is configured to store queue resource reservation information and traffic resource reservation information that are set in advance for sending the flow.
  • the queue resource reservation information includes a queue for sending the stream and an enqueue timing and an output timing of the queue, where the enqueue sequence is used to define an enqueue queue for each time window, The output timing is used to define the switching state of each queue within each time window.
  • the traffic resource reservation information records the current enqueue queue of the flow and a packet count indicating the number of packets in the current enqueue queue; the number of packets already sent in the one time window is given by the packet count of the current enqueue queue.
  • the processing module sending the message in a specific time window according to the number of packets that the flow can send in one time window and the number of packets already sent in that time window is specifically:
  • the processing module determines the arrival time window of the packet, where the arrival time window is the time window of the egress port of the network device when the packet arrives, and queries the number of packets in the current enqueue queue; the processing module then determines the enqueue queue of the flow according to the arrival time window, the enqueue timing of the queues, the number of packets that the flow can send in one time window, and the number of packets in the current enqueue queue, adds the packet to the determined enqueue queue, opens the queue in which the packet is located in the time window defined for opening that queue, and sends the packet.
  • since each time window can send the required number of packets, the delay-sensitive traffic still has, in terms of results, a guaranteed end-to-end delay. That is to say, this embodiment controls the total delay of the packet in the transmission process by defining the enqueue timing and output timing based on time windows, and does not need to strictly control the delay within each network device by constraining the sending time. It can thus guarantee the end-to-end delay of delay-sensitive traffic while increasing the available bandwidth of delay-sensitive traffic and reducing the waste of bandwidth resources.
  • the processing module determining the enqueue queue of the flow according to the arrival time window, the enqueue timing of the queues, the number of packets that the flow can send in one time window, and the number of packets in the current enqueue queue includes: when the number of packets in the current enqueue queue has reached the number of packets that can be sent in one time window, the processing module determines the enqueue queue of the next time window after the arrival time window as the enqueue queue of the flow; or, when the number of packets in the current enqueue queue has not reached the number of packets that can be sent in one time window, the processing module determines the enqueue queue of the arrival time window as the enqueue queue of the flow.
  • in this way, once the number of packets sent in one time window reaches the allowed number, the next packet is moved to the next time window for transmission, which increases flexibility;
  • it also ensures that the packets sent in each time window respect the per-window packet limit, meeting the traffic characteristic requirements of the delay-sensitive traffic.
  • the network device further includes a resource reservation module, where the resource reservation module is configured to reserve resources for the flow in advance and to configure the traffic resource reservation information in the process of reserving resources.
  • the network device further includes a second storage module, where the second storage module is configured to store pre-configured queue resource reservation information and traffic resource reservation information for sending the flow.
  • the queue resource reservation information includes a queue corresponding to the flow and a dequeue gating configured for the queue, where the dequeue gating is used to control the number of packets sent in each time window;
  • the traffic resource reservation information includes the number of packets that the flow can send in one time window.
  • the processing module sending the message in a specific time window according to the number of packets that the flow can send in one time window and the number of packets already sent in that time window includes:
  • the processing module adds the packet to the queue corresponding to the flow to which the packet belongs, and dequeues the packet from that queue for sending according to the dequeue gating;
  • the dequeue gating is updated per time window: its initial value in each time window is the number of packets that the flow corresponding to the queue can send in one time window, and it is decremented according to the number of packets sent within each time window.
  • since no time-window restriction is imposed at enqueue time and the number of packets sent per time window is instead controlled during the dequeue process, there is no requirement on the time at which packets are received and no constraint is placed on the sending time of the upstream device; the upstream device can send delay-sensitive traffic in almost the entire time window, which increases the available bandwidth of delay-sensitive traffic and reduces the waste of bandwidth resources.
  • the network device further includes a second resource reservation module, where the second resource reservation module is configured to reserve resources for the flow in advance and to configure the traffic resource reservation information and the queue resource reservation information in the process of reserving resources.
  • since queue resources are allocated per flow rather than per time window, network-wide time window alignment is not required; the solution can be deployed on devices that do not support time alignment, which expands its applicability.
  • an embodiment of the present application provides a message sending system.
  • the system includes a network control plane and at least one of the network devices; after receiving the traffic request sent by the source device, the network control plane sends a notification to the at least one network device on the path where the flow is located.
  • the notification includes the information of the flow for which resources are requested; the network device is configured to perform resource reservation configuration for the flow according to the information of the flow in the notification, receive the packet, and identify that the flow to which the packet belongs is delay-sensitive traffic with reserved resources, where the reserved resources include the number of packets that the flow can send in one time window.
  • the network device is configured to schedule the packet to be sent in a specific time window according to the number of packets that the flow can send in one time window and the number of packets already sent in that time window from the queue of the flow.
  • the output time window of the packet in the embodiment of the present application is dynamically determined according to real-time information (for example, how many packets have accumulated in a time window), instead of being completely determined by static configuration.
  • in this way, the flexibility of the packet sending process is increased: the sending device only needs to ensure that the number of packets sent in each time window conforms to the number of packets that can be sent in one time window, without constraining the sending time of each individual packet. That is, once the number of packets sent by the network device in one time window reaches the allowed number, the next packet is scheduled to the next time window.
  • as a result, the network device can move packets sent by the upstream device at any point within a time window to a suitable time window, which avoids constraining the time at which the upstream device sends packets, increases the available bandwidth of delay-sensitive traffic, and reduces the waste of bandwidth resources.
  • the network device is further configured to perform any of the various possible implementations of the first aspect described above.
  • an embodiment of the present application provides a network device, where the network device includes a processor coupled to a memory, and when the processor executes the program on the memory, the first aspect or any of its various possible implementations is implemented.
  • the embodiment of the present application provides a computer readable storage medium, where program code is stored in the computer storage medium, and the program code is used to instruct execution of the first aspect or any of the methods of its various possible implementations.
  • FIG. 1 is a schematic diagram of a packet enqueue and dequeue sequence according to an embodiment of the present application
  • FIG. 2 is a schematic structural diagram of a message transmission system according to an embodiment of the present application.
  • FIG. 3 is a flowchart of a method for reserving a traffic resource according to an embodiment of the present application
  • FIG. 4 is a schematic diagram of scheduling priorities according to an embodiment of the present disclosure.
  • FIG. 5 is a flowchart of a method for a network device to send a packet according to an embodiment of the present disclosure
  • FIG. 6 is a flowchart of a method for sending a packet by another network device according to an embodiment of the present disclosure
  • FIG. 7 is a flowchart of a method for transmitting a packet by another network device according to an embodiment of the present disclosure
  • FIG. 8 is a flowchart of a packet dequeuing according to an embodiment of the present application.
  • FIG. 9 is a flowchart of a method for updating a token bucket according to an embodiment of the present application.
  • FIG. 10 is a schematic structural diagram of a network device according to an embodiment of the present disclosure.
  • FIG. 11 is a schematic structural diagram of another network device according to an embodiment of the present disclosure.
  • FIG. 12 is a schematic structural diagram of still another network device according to an embodiment of the present application.
  • FIG. 2 is a schematic structural diagram of a transmission system according to an embodiment of the present application.
  • the transmission system may include a source device 201, at least one network device 202, a destination device 203, and a network control plane 204.
  • the source device 201 may be a control host in an industrial control network scenario, or an industrial sensor, or a sensor in an Internet of Things scenario.
  • Network device 202 can be a switching device or router or the like.
  • the destination device 203 may be an actuator (eg, a servo motor in an industrial control network scenario or an information processing center in an Internet of Things scenario, etc.).
  • Network control plane 204 can be a controller or a management server.
  • a uniform time window size is set by the network control plane 204 (e.g., 125us is a time window), and the set time window size is sent to each network device 202 in the transmission system.
  • the network control plane 204 may also only send the time window size to the network device 202 participating in the delay sensitive traffic transmission.
  • the network control plane 204 can determine the network device 202 participating in the delay sensitive traffic transmission according to the network topology and the existing packet forwarding rules.
  • the network device 202 configures the boundary phase of the time window of each local port according to the time window size set by the network control plane 204.
  • the time windows of the network devices 202 in the transmission system have the same size, but their boundaries are not required to be aligned.
  • the current time window can be obtained according to the real time calculation or table lookup.
  • time windows are an important concept.
  • a time window is a continuous period of time.
  • the network control plane 204 divides the time of the egress port of the network device into a plurality of time windows that cycle periodically, such as "time window 1, time window 2, time window 3, time window 1, ...".
  • each time window has a certain data transmission capability determined by the rate of the link. For example, for a 10 Gbit/s link, a 125 µs time window can send 1250 Kbits of data (approximately 100 1.5 KB messages). Therefore, in the process of subsequently transmitting packets, the enqueue or dequeue of packets can be controlled according to the time window.
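The per-window capacity quoted above follows from simple arithmetic; the short Python check below reproduces it (the 1.5 KB packet size is the example value from the text, and 1 KB is taken as 1024 bytes):

```python
link_bps = 10e9                        # 10 Gbit/s link
window_s = 125e-6                      # 125 us time window
bits_per_window = link_bps * window_s  # 1,250,000 bits = 1250 Kbit per window
packet_bits = 1.5 * 1024 * 8           # one 1.5 KB packet
print(bits_per_window // packet_bits)  # ~101 packets, i.e. roughly 100 per window
```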
  • network resources on the end-to-end path of delay-sensitive traffic need to be reserved for the traffic in advance, to prevent the delay-sensitive traffic from experiencing unpredictable congestion that would result in additional queuing delays.
  • the traffic resource reservation process is shown in Figure 3.
  • the source device sends a traffic request request to the network control plane.
  • delay-sensitive traffic is generally required to apply for traffic before it is sent.
  • Traffic is generally requested in terms of "a number of packets per time window" (generally referred to as the traffic characteristic) or a corresponding traffic rate.
  • the traffic request request carries the information of the flow to be applied and the traffic characteristics.
  • the information of the flow may include information that can identify the flow (eg, a source address, a destination address, a destination port number, a differentiated service code point (DSCP), or a protocol).
  • the network control plane determines whether to accept the request according to the remaining sending capability of the corresponding outbound port in each network device on the path where the flow is located.
  • the corresponding egress port in the network device is a port in the network device used to send the flow.
  • the network control plane can determine the transmission path of the flow according to the network topology of the transmission system and the existing packet forwarding rules.
  • the transmission path includes a network device that transmits the flow, and an egress port in the network device.
  • if the traffic characteristic requested in the traffic request exceeds the current remaining sending capability of any egress port on the path, the request fails: the network control plane rejects the request and feeds back failure information to the source device. If the traffic characteristic requested in the traffic request is less than or equal to the current remaining sending capability of every egress port on the path, the request succeeds: the network control plane accepts the request, feeds back success information to the source device, and updates the remaining sending capability of the egress ports of each network device on the path.
  • for example, for a 125 µs time window on a 10 Gbps link, flow 1 requests 90 1.5 KB packets per time window and the request succeeds; after flow 1 is admitted, only about 10 packets per time window remain, so when flow 2 then requests 50 1.5 KB packets per time window, the request fails.
  • the sending capability of the port is the maximum number of packets sent by the port in a time window.
  • the transmission capability can be calculated by link bandwidth, window size, and maximum packet size.
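A hedged sketch of this admission check is shown below. It assumes the control plane tracks, for each egress port on the path, the remaining number of packets that port can still send per time window; the function and data names are illustrative only:

```python
def admit_flow(path_ports, requested_pkts_per_window, remaining_capacity):
    """Accept the flow only if every egress port on the path has enough remaining capacity."""
    if all(remaining_capacity[p] >= requested_pkts_per_window for p in path_ports):
        for p in path_ports:  # success: update the path's remaining capacity
            remaining_capacity[p] -= requested_pkts_per_window
        return True
    return False              # failure: reject and notify the source

# Example matching the text (ports can send ~100 packets per 125 us window at 10 Gbps):
capacity = {("dev1", "port1"): 100, ("dev2", "port2"): 100}
ports = list(capacity)
print(admit_flow(ports, 90, capacity))   # True: flow 1 is admitted, ~10 packets/window remain
print(admit_flow(ports, 50, capacity))   # False: flow 2 is rejected
```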
  • after accepting the traffic request, the network control plane sends a notification that the traffic request succeeded to each network device on the path where the flow is located.
  • the network control plane can use the control message to send the notification.
  • the notification includes information about the flow of the application resource reservation and the traffic characteristics.
  • the information of the flow and the traffic characteristic may be obtained from the traffic request request of step S301.
  • the network control plane sends a notification that the traffic application is successful to each network device on the path.
  • Each network device that receives the notification performs traffic resource reservation configuration on the flow.
  • the traffic resource reservation configuration mainly includes:
  • the information of the flow may be updated into a flow table of the network device.
  • the network device can identify the flow according to the flow table. Identifying the flow may employ methods such as parsing the source, destination IP, port number, DSCP, or protocol of the flow.
  • the traffic resource reservation information includes the traffic feature (that is, the number of packets that the flow can send in a time window).
  • the network device can transmit the flow according to the information configured in the resource reservation process.
  • in the process of transmitting packets in the embodiment of the present application, for a packet of delay-sensitive traffic, the network device schedules the packet to be sent in a specific time window according to the number of packets that can be sent in one time window and the number of packets already sent in that time window.
  • in other words, the output time window of a packet is dynamically determined according to real-time information (for example, how many packets have accumulated in a time window), instead of being completely determined by static configuration.
  • in this way, the flexibility of the packet sending process is increased: the sending device only needs to ensure that the number of packets sent in each time window conforms to the number of packets that can be sent in one time window, without constraining the sending time of each individual packet. That is, once the number of packets sent by the network device in one time window reaches the allowed number, the next packet is scheduled to the next time window.
  • by scheduling the transmission time window based on the number of packets that can be sent in one time window and the cumulative number of packets sent in the time window, the network device can move packets sent by the upstream device at any point within a time window to a suitable time window. This avoids constraining the time at which the upstream device sends packets and allows the upstream device to send delay-sensitive traffic in almost the entire time window. Moreover, determining the transmission time window from the number of packets that can be sent in one time window and the number of packets already sent in that time window also ensures that the packets sent in each time window respect the per-window packet limit
  • and thus meet the traffic characteristic requirements of the delay-sensitive traffic.
  • since the output traffic of each network device still meets the traffic characteristics, the delay-sensitive traffic still has a guaranteed end-to-end delay. Therefore, the dynamic scheduling mechanism used in the embodiments of the present application keeps the end-to-end delay of delay-sensitive traffic guaranteed while increasing
  • the available bandwidth of delay-sensitive traffic and reducing the waste of bandwidth resources.
  • each port is provided with a queue for buffering messages. After the packet enters the network device, it first enters the cache queue of the egress port, and then dequeues according to the scheduling mechanism of the queue and sends it.
  • network devices usually use different levels of queuing mechanisms. That is to say, one port in the network device can set multiple different levels of cache queues.
  • during dequeue scheduling, high-priority queues are scheduled first. As shown in FIG. 4, FIG. 4 is a schematic diagram of scheduling priorities. In Figure 4, queue 1, queue 2, and queue 3 are the highest-priority queues, and queues 4 to 8 are lower-priority queues. When queue 2 and queues 4 to 8 are all open, the network device preferentially schedules and sends
  • the packets in queue 2, and then schedules the packets in queues 4 to 8.
  • the network device reserves a queue for sending delay-sensitive traffic on the outbound port of the delay-sensitive traffic. These queues are usually the ones with the highest priority, such as Queue 1, Queue 2, and Queue 3 in Figure 4.
  • the process by which a network device sends a packet usually includes two parts: an enqueue process, in which the received packet is added to a queue, and a dequeue process, in which the packet is taken out of the queue and sent.
  • Option 1 mainly involves improving the enqueue process of the packet.
  • the enqueue of the message can be controlled based on the number of messages that can be sent in one time window and the cumulative transmission of messages in a time window.
  • the network device can dynamically determine the enqueue queue of the flow to which the packet belongs based on the number of packets that can be sent in a time window and the cumulative transmission of packets in a time window, and update the enqueue queue of the pre-configured stream.
  • Option 2 mainly involves improving the dequeue process of the packet.
  • the dequeue of the message can be controlled based on the number of messages that can be sent in one time window and the situation in which the message is sent in a time window.
  • the network device can set the outbound gate according to the number of messages that can be sent in a time window, and control the number of packets that are dequeued within a time window by the outbound gate.
  • the network device reserves at least three queues for delay-sensitive traffic in advance, and configures the enqueue timing and output timing of the queues based on the time window.
  • the enqueue timing refers to the timing at which each queue becomes the enqueue queue.
  • the enqueue queue is the queue that packets are allowed to enter.
  • the time window based enqueue timing is used to define the enqueue queue corresponding to each time window.
  • the enqueue timing is used during the message enqueue phase to determine the enqueue queue for messages within each time window.
  • the output timing refers to the opening timing of each queue during the dequeue scheduling phase.
  • the time window based output timing is used to define the switching state of each queue within each time window.
  • the output timing is used during the message dequeue phase to control the switching of each queue in each time window.
  • the enqueue timing and output timing can be stored in a table structure or in other storage structures (eg, arrays, etc.). As shown in Table 1 and Table 2:
  • Table 1 (enqueue timing):
    Time window value  | Enqueue queue | Alternate enqueue queue
    Time window KN+1   | Queue 2       | Queue 3
    Time window KN+2   | Queue 3       | Queue 4
    Time window KN+3   | Queue 4       | Queue 5
    ...                | ...           | ...
    Time window KN+K-1 | Queue K       | Queue 1
    Time window KN+K   | Queue 1       | Queue 2
  • Table 2 (output timing):
    Time window KN+1 | Queue 1 open   | Queue 2 closed | Queue 3 closed | ... | Queue K closed | other queues open
    Time window KN+2 | Queue 1 closed | Queue 2 open   | Queue 3 closed | ... | Queue K closed | other queues open
    Time window KN+3 | Queue 1 closed | Queue 2 closed | Queue 3 open   | ... | Queue K closed | other queues open
    ...              | ...            | ...            | ...            | ... | ...            | ...
    Time window KN+K | Queue 1 closed | Queue 2 closed | Queue 3 closed | ... | Queue K open   | other queues open
  • Table 1 stores the enqueue timing.
  • Table 2 stores the output timing.
  • N in Table 1 and Table 2 is an integer greater than or equal to 0, and K is the number of queues for transmitting delay-sensitive traffic, and is an integer greater than or equal to 3.
  • each time window has an enqueue queue and an alternate enqueue queue.
  • packets of the flow can alternatively be placed in the alternate enqueue queue of the time window.
  • the candidate enqueue queue of the previous time window can be set as the enqueue queue of the next time window, so that the excess message of the previous time window can be put into the enqueue queue of the next time window.
  • the enqueue queue of the Mth time window is in an open state in the M+1th time window, and is in an off state in other time windows, and M is an integer greater than or equal to 1.
  • queue 1 to queue K are high priority queues for transmitting delay sensitive traffic, and other queues are lower priority queues for transmitting other traffic.
  • Table 1 and Table 2 are only an example.
  • optionally, the enqueue timing may not be configured with alternate enqueue queues; in that case, when the number of packets in the enqueue queue of the current time window has reached the number of packets that can be sent in one time window, subsequent packets are placed directly into the enqueue queue of the next time window.
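The cyclic structure of Table 1 and Table 2 can be generated programmatically. The sketch below reproduces that structure for K reserved queues; the 1-based queue and window numbering is illustrative and the code is not part of the patent:

```python
def build_queue_timings(K):
    """Return (enqueue_timing, output_timing) for K (>= 3) reserved high-priority queues."""
    enqueue_timing = {}   # window index within a cycle -> (enqueue queue, alternate enqueue queue)
    output_timing = {}    # window index within a cycle -> high-priority queue open in that window
    for w in range(1, K + 1):
        enq = w % K + 1                      # window KN+1 -> queue 2, ..., window KN+K -> queue 1
        alt = enq % K + 1                    # alternate queue is the next queue in the cycle
        enqueue_timing[w] = (enq, alt)
        output_timing[w] = (w - 1) % K + 1   # window KN+1 -> queue 1 open, ..., KN+K -> queue K open
    return enqueue_timing, output_timing

enq, out = build_queue_timings(3)
# enq == {1: (2, 3), 2: (3, 1), 3: (1, 2)}, out == {1: 1, 2: 2, 3: 3}:
# the enqueue queue of window M is the queue that opens in window M+1, as stated above.
```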
  • queue resource reservation may be completed at any time before the traffic resource reservation shown in FIG. 3 (for example, in the transmission system initialization phase), or may be carried out during the first traffic resource reservation that uses the queue resources.
  • the network device further configures, in the traffic resource reservation information configured in step S304, an enqueue queue entry for recording the enqueue queue of the flow and a packet count entry for counting the packets in the enqueue queue.
  • the packet count recorded in the packet count entry indicates the number of packets in the enqueue queue of the flow.
  • the initial enqueue queue of the flow configured in the enqueue queue entry is the enqueue queue of the first time window in the enqueue sequence.
  • the enqueue queue of the flow and the number of packets in the enqueue queue are flow status information, which can be updated according to the real-time status during the message transmission process.
  • the queue entry item can be updated according to the enqueue queue determined during the message sending process.
  • the traffic resource reservation information may be stored in a table structure or may be stored in other storage structures (eg, arrays, etc.). Its structure is shown in Table 3:
  • FIG. 5 is a flowchart of a method for a network device to send a packet in the first embodiment.
  • steps 5a-5e are the enqueue process of the packet,
  • and steps 5f-5h are the dequeue process of the packet.
  • the network device receives the message from the upstream device.
  • after receiving the packet, the network device determines the egress port of the packet.
  • the method for determining the port can be implemented by using the prior art, and details are not described herein again.
  • the network device identifies whether the flow to which the packet belongs is delay-sensitive traffic.
  • the network device can parse the source and destination IP addresses, port numbers, DSCP, protocol number, and other information inside the packet header, and use the parsed information to search the information recorded in the resource reservation process to identify whether the flow to which the packet belongs is delay-sensitive traffic. In this embodiment, the flow to which the packet belongs is delay-sensitive traffic. For packets of non-delay-sensitive traffic, the network device puts the packets into lower-priority queues; their enqueue and scheduling processes are not described in this embodiment.
  • the network device determines an arrival time window of the packet.
  • the arrival time window is a time window of the outgoing port of the packet in the network device when the packet arrives at the network device.
  • the network device may first obtain the current time when the packet is received.
  • the network device can obtain the current time according to the clock crystal in the transmission system, and can also obtain the current time by using the time information contained in the content of the message.
  • the network device performs an operation or a table lookup based on the port number of the egress port determined in 5a and the current time to obtain the current time window of the port.
  • the calculation method or lookup table can be configured when the transmission system is initialized.
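As an illustration of step 5c, the per-port window lookup can be reduced to one modular calculation over the port's configured boundary phase. The sketch below assumes a fixed window width and a cycle of three windows; all values are example assumptions:

```python
def current_time_window(now_us, port_phase_us, window_us=125, cycle_len=3):
    """Return the 1-based index, within the periodic cycle, of the port's current time window."""
    elapsed = now_us - port_phase_us  # time since the port's configured window boundary phase
    return int(elapsed // window_us) % cycle_len + 1

print(current_time_window(300, 0))    # with a 125 us window and phase 0, 300 us -> window 3
```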
  • the network device determines the enqueue queue of the flow according to the arrival time window of the packet, the enqueue timing of the queues, the number of packets that the flow can send in one time window, and the number of packets in the current enqueue queue.
  • the network device can obtain the number of packets that can be sent in one time window from the traffic characteristic recorded in the traffic resource reservation information, and obtain the number of packets in the current enqueue queue from the packet count in the traffic resource reservation information.
  • when the number of packets in the current enqueue queue has reached the number of packets that can be sent in one time window, the network device determines the enqueue queue of the next time window after the arrival time window as the enqueue queue of the flow. If the number of packets in the current enqueue queue has not reached the number of packets that can be sent in one time window, the enqueue queue of the arrival time window is determined as the enqueue queue of the flow.
  • the above determination method is a processing method in which no candidate enqueue queue is set for the time window in the enqueue timing.
  • if an alternate enqueue queue is configured, the alternate enqueue queue of the previous time window serves as the enqueue queue of the next time window, as shown in Table 1.
  • in that case, the process of determining the enqueue queue of the flow specifically includes: when the number of packets in the current enqueue queue has reached the number of packets that can be sent in one time window, the network device determines the alternate enqueue queue of the arrival time window as the enqueue queue of the flow. If the number of packets in the current enqueue queue has not reached the number of packets that can be sent in one time window, the enqueue queue of the arrival time window is determined as the enqueue queue of the flow.
  • the network device can update the enqueue queue of the stream recorded in the network device according to the determined enqueue queue.
  • specifically, the update is performed only when the determined enqueue queue differs from the recorded enqueue queue.
  • after updating the enqueue queue in the traffic resource reservation information, the network device restores the packet count recorded in the traffic resource reservation information to an initial value.
  • the initial value can be set to zero, and the subsequent accumulation is performed.
  • the value of the accumulated record is the number of packets in the current enqueue queue.
  • the initial value can also be set to any value, and then decremented. The difference between the initial value and the value of the decremented record is the number of packets in the current enqueue queue.
  • the network device adds the message to the determined enqueue queue of the flow.
  • after adding the packet to the determined enqueue queue of the flow, the network device further updates the packet count recorded in the traffic resource reservation information.
  • the count can be updated by incrementing or by decrementing: when the initial value of the packet count is set to zero, the count is incremented; when the initial value is set to some other value, the count is decremented.
  • the network device obtains the time window of the port.
  • for the method by which the network device obtains the current time window of the port, refer to step 5c; details are not repeated here.
  • the network device determines the queue in which the current time window is open according to the output timing.
  • the current time window here is the time window obtained in step 5g. Assuming that the current time window is time window 2, in the output timing shown in Table 2, the open queue is queue 2 and other lower priority queues.
  • the packets in the high priority queue in the open queue are preferentially scheduled to be sent.
  • the priority scheduling can be referred to FIG. 4, and details are not described herein again.
  • the queue in which the message received in step 5a is located will be opened in the time window for opening the queue defined in the output sequence, and then sent out.
  • the enqueue timing of the queues is based on time windows, and the dequeue of packets is also scheduled based on time windows. That is to say, once the enqueue queue of the flow is determined, the output time window of the packets added to that enqueue queue is also determined. Therefore, through the above scheme of dynamically determining the enqueue queue of the flow, each received packet can be scheduled to be sent in a specific time window.
  • in this way, packets of the previous time window can be flexibly moved to the enqueue queue of the next time window, which removes the constraint that a statically configured enqueue queue places on the sending time of the upstream device; the upstream device can send delay-sensitive traffic in almost the entire time window, which increases the available bandwidth of delay-sensitive traffic and reduces the waste of bandwidth resources.
  • In addition, because the enqueue queue of the Mth time window is opened for sending in the (M+1)th time window, a packet received by the network device in its local Nth time window is sent in the local (N+1)th (or (N+2)th) time window. This ensures that the maximum end-to-end delay is ((number of end-to-end hops + 1) × (time window size)) plus the sum of the time-window boundary differences along the path. Therefore, this embodiment increases the available bandwidth of delay-sensitive traffic while also ensuring that the delay-sensitive traffic has a committable end-to-end delay. Moreover, the solution provided in this embodiment of the present application does not require time-window alignment across the entire network and can be deployed on devices that do not support time alignment, which extends its applicability.
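  • For example (illustrative numbers, not taken from the specification): with 5 end-to-end hops and a 125 µs time window, the bound evaluates to (5 + 1) × 125 µs = 750 µs, plus the sum of the time-window boundary offsets between adjacent devices along the path.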
  • step 5d shown in FIG. 5 will be described in detail below with reference to FIG. 6.
  • FIG. 6 is a flowchart of a method for determining an enqueue queue in Embodiment 1. The method includes:
  • step S601. The network device determines whether the arrival time window of the message is the same time window as the arrival time window of the previous message. If it is the same time window, step S602 is performed, and if it is not the same time window, step S606 is performed.
  • S602. The network device determines whether the packet count in the traffic resource reservation information is equal to the number of packets that can be sent in one time window. If the packet count is equal to that number, the enqueue queue of the flow is determined to be the candidate enqueue queue of the arrival time window, and S603-S605 are performed; if the packet count is not equal to that number, the enqueue queue of the flow is determined to be the enqueue queue of the arrival time window, and S604-S605 are performed.
  • In this embodiment, the initial value of the packet count in the traffic resource reservation information is zero, and the count is incremented by one each time a packet is added; this counting method is used as an example. In this way, the packet count in the traffic resource reservation information can be compared directly with the traffic characteristic of the flow (that is, the number of packets that the flow can send in one time window).
  • This embodiment is also described using an enqueue timing configured with candidate enqueue queues as an example.
  • S603. The network device updates the enqueue queue recorded in the traffic resource reservation information to the candidate enqueue queue, in the enqueue timing, of the packet's arrival time window, and clears the packet count in the traffic resource reservation information.
  • S604. The packet is added to the enqueue queue recorded in the traffic resource reservation information.
  • It should be noted that there are two ways to add the packet to the enqueue queue determined in S602. Method 1: first update the enqueue queue recorded in the traffic resource reservation information, and then enqueue the packet according to the recorded enqueue queue; when no update is required (that is, the determined enqueue queue is unchanged relative to the recorded one), the packet is enqueued directly according to the recorded enqueue queue. Method 2: enqueue the packet directly according to the determined enqueue queue. With Method 2, the operation of updating the enqueue queue recorded in the traffic resource reservation information may be performed either before or after the enqueuing; there is no fixed order between the two operations.
  • This embodiment is described using Method 1 as an example.
  • S605 The network device adds 1 to the packet count in the traffic resource reservation information.
  • S606. The network device determines whether the arrival time window of the packet is the next time window after the arrival time window of the previous packet. If it is, S607 is executed; if it is not, S608 is executed.
  • If the arrival time window of the packet is the next time window after that of the previous packet, the packet is the first packet in its arrival time window. Because step S606 is the negative branch of step S601, if the arrival time window of the packet is not the next time window after that of the previous packet, it must be a later time window, and in that case the packet is also the first packet in its arrival time window. In other words, as long as the arrival time window of the packet is not the same time window as that of the previous packet, the packet is the first packet in its arrival time window.
  • S607. The network device determines whether the enqueue queue recorded in the traffic resource reservation information is the enqueue queue, in the enqueue timing, of the packet's arrival time window. If it is, S602 is executed; if it is not, S608 is executed.
  • If the enqueue queue in the traffic resource reservation information is the enqueue queue, in the enqueue timing, of the packet's arrival time window, this indicates that the candidate enqueue queue was used in the time window preceding the packet's arrival time window. If it is not, this indicates that the candidate enqueue queue was not used in the preceding time window.
  • S608. The network device updates the enqueue queue recorded in the traffic resource reservation information to the enqueue queue corresponding to the arrival time window, clears the packet count in the traffic resource reservation information, and then continues to execute S604-S605.
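  • The S601-S608 flow can be summarized in the following sketch, under one possible reading of the steps above (all names, including FlowState and schedule, are illustrative assumptions; the per-flow record stands in for the traffic resource reservation information, and the schedule follows the rotation of Table 1):

```python
from dataclasses import dataclass

@dataclass
class FlowState:                      # stands in for the flow's traffic resource reservation record
    last_arrival_window: int = -1     # arrival time window of the previous packet
    enqueue_queue: int = 0            # enqueue queue currently recorded for the flow
    packet_count: int = 0             # packets already placed in that queue (counting up from zero)

def schedule(window, K=3):
    """Enqueue schedule in the spirit of Table 1: returns (enqueue_queue, candidate_queue),
    where the candidate queue of window M is the enqueue queue of window M+1."""
    enqueue_queue = (window % K) + 1
    candidate_queue = ((window + 1) % K) + 1
    return enqueue_queue, candidate_queue

def determine_enqueue_queue(state: FlowState, arrival_window: int, capacity_per_window: int):
    enq_q, cand_q = schedule(arrival_window)
    same_window = arrival_window == state.last_arrival_window                 # S601
    carried_over = (arrival_window == state.last_arrival_window + 1
                    and state.enqueue_queue == enq_q)                         # S606 + S607

    if same_window or carried_over:
        if state.packet_count == capacity_per_window:                         # S602
            state.enqueue_queue = cand_q                                      # S603: spill to candidate
            state.packet_count = 0
    else:                                                                     # S608: fresh window
        state.enqueue_queue = enq_q
        state.packet_count = 0

    state.last_arrival_window = arrival_window
    state.packet_count += 1                                                   # S605
    return state.enqueue_queue                                                # S604: caller enqueues here
```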
  • Embodiment 2
  • In this embodiment, the network device likewise needs to perform queue resource reservation and traffic resource reservation.
  • The queue resource reservation in this embodiment differs from the queue resource reservation manner in Embodiment 1. In this embodiment, the network device configures a dedicated queue for each delay-sensitive flow, in a one-to-one correspondence. That is, the queues in this embodiment are allocated per flow instead of per time window, and therefore there is no time-window-based enqueue timing or output timing in this embodiment.
  • In this embodiment, the network device also configures a dequeue gate for the queue of each flow. The dequeue gate is used to control the number of packets sent in each time window. Therefore, in addition to the correspondence between each flow and its queue, the queue resource reservation information of this embodiment includes the dequeue gate of each flow's queue.
  • For the traffic resource reservation process in this embodiment, refer to the embodiment shown in FIG. 3. The foregoing queue resource reservation in this embodiment may be performed at the time of traffic resource reservation, or may be performed before the traffic resource reservation.
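  • As an illustrative sketch of the per-flow reservation state in this embodiment (all field names and example values are assumptions, not taken from the specification):

```python
# Queue resource reservation: one dedicated queue per delay-sensitive flow, plus a dequeue
# gate recording how many more packets may leave that queue in the current time window.
queue_resource_reservation = {
    "flow-1": {"queue": "queue_flow_1", "dequeue_gate": 0},
    "flow-2": {"queue": "queue_flow_2", "dequeue_gate": 0},
}

# Traffic resource reservation: the traffic characteristic of each flow.
traffic_resource_reservation = {
    "flow-1": {"packets_per_window": 90},
    "flow-2": {"packets_per_window": 10},
}
```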
  • FIG. 7 is a flowchart of a method for a network device to send a packet in Embodiment 2.
  • Steps 7a-7c are the enqueue process of the packet, and step 7d is the dequeue process of the packet.
  • 7a. The network device receives a packet from an upstream device.
  • 7b. The network device identifies whether the flow to which the packet belongs is delay-sensitive traffic.
  • For the specific implementation of this step, refer to step 5b in the embodiment shown in FIG. 5; details are not described here again.
  • This embodiment likewise takes the case in which the flow to which the packet belongs is delay-sensitive traffic as an example. For packets of non-delay-sensitive traffic, the network device puts them into other lower-priority queues; their enqueuing and scheduling use the prior art and are not described again in this embodiment.
  • 7c. After recognizing that the flow is delay-sensitive traffic, the network device adds the packet to the queue corresponding to the flow to which the packet belongs.
  • 7d. The network device takes packets out of the queue corresponding to the flow and sends them according to the dequeue gate.
  • Specifically, the network device checks the queue of the flow and the dequeue gate in real time; when the dequeue gate is not zero and the queue is not empty, it takes a packet out of the queue and sends it. The dequeue gate is updated per time window: its initial value in each time window is the number of packets that the corresponding flow can send in one time window, and it is decremented according to the number of packets sent in that time window.
  • In a specific implementation, the dequeue gate can be implemented using a token bucket, in which case updating the dequeue gate means updating the number of tokens in the token bucket.
  • Specifically, when a token bucket is used, the implementation of step 7d is as shown in FIG. 8. FIG. 8 is a flowchart of scheduling packets to be dequeued. The process includes:
  • S801. The network device checks, in real time, the packets in the queues of the delay-sensitive flows and the tokens in their token buckets, to determine whether any queue is non-empty while tokens remain in that queue's token bucket. If such a queue exists, S802 is executed to take out a packet and send it; if no such queue exists, S803 is executed.
  • S802. The network device takes a packet out of the queue, sends it, decrements the number of tokens in the corresponding token bucket by one, and returns to S801.
  • S803. The network device schedules packets in lower-priority queues to be sent.
  • Step S803 can be implemented using the prior art, for example, by scheduling the queues corresponding to non-delay-sensitive traffic to dequeue and send packets according to priority scheduling or round-robin scheduling; details are not described here one by one.
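  • A minimal sketch of this dequeue loop, assuming per-flow token counters as the dequeue gates and caller-supplied send functions (all names are illustrative):

```python
from collections import deque

def dequeue_round(flow_queues, tokens, send, send_lower_priority):
    """One pass of the S801-S803 loop.
    flow_queues: dict flow_id -> deque of packets of that delay-sensitive flow
    tokens:      dict flow_id -> tokens left in that queue's token bucket this window
    """
    for flow_id, queue in flow_queues.items():
        if queue and tokens[flow_id] > 0:      # S801: non-empty queue with tokens remaining
            send(queue.popleft())              # S802: take out one packet and send it...
            tokens[flow_id] -= 1               # ...and consume one token
            return
    send_lower_priority()                      # S803: schedule lower-priority queues instead
```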
  • The update process of the dequeue gate is described in detail below, using a token bucket as the dequeue gate as an example. FIG. 9 is a flowchart of a method for updating the token bucket. The method includes:
  • S901. The network device obtains the time window of the egress port that transmits the delay-sensitive traffic. For the method of obtaining the time window of the egress port, refer to step 5a shown in FIG. 5; details are not described here again.
  • S902. The network device determines whether the time window has been updated. If the time window has been updated, S903 is executed; if it has not, the process returns to S901. Whether the time window has been updated is judged by comparing the current time window with the time window at which the token bucket was last updated.
  • S903. The network device obtains the traffic characteristic in the traffic resource reservation information of each flow (that is, the number of packets that each flow can send in one time window).
  • S904. The network device updates the number of tokens in the token bucket of each flow's queue to the number of packets that the respective flow can send in one time window.
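  • A sketch of the per-window refresh of FIG. 9 (names are illustrative assumptions; `packets_per_window` carries the traffic characteristic of each flow):

```python
def refresh_token_buckets(current_window, last_refresh_window, tokens, packets_per_window):
    """S901-S904: when the time window changes, reset each flow's token count to the number
    of packets that the flow can send in one time window. Returns the window used for refresh."""
    if current_window == last_refresh_window:             # S902: time window not updated
        return last_refresh_window
    for flow_id, budget in packets_per_window.items():    # S903: read each flow's traffic profile
        tokens[flow_id] = budget                           # S904: reset the token bucket
    return current_window
```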
  • In this solution, because the dequeue gate is updated according to the number of packets that the flow can send in one time window, and the value of the dequeue gate implies the number of packets already sent within the current time window, this embodiment likewise schedules each packet to be sent in a specific time window based on the number of packets that can be sent in one time window and the cumulative number of packets sent within that window.
  • Because this solution does not place a time-window restriction on enqueuing, but instead controls the packets sent in each time window during the dequeue process, it imposes no requirement on the time at which a packet is received and places no constraint on the sending time of the upstream device. The upstream device can therefore send delay-sensitive traffic during almost the entire time window, which increases the available bandwidth of delay-sensitive traffic and reduces the waste of bandwidth resources.
  • In addition, because the number of packets sent in each time window is guaranteed, the end-to-end delay has a committable upper bound. Therefore, this embodiment of the present application increases the available bandwidth of delay-sensitive traffic while also ensuring that the delay-sensitive traffic has a committable end-to-end delay. Moreover, the solution provided in this embodiment of the present application does not require time-window alignment across the entire network and can be deployed on devices that do not support time alignment, which extends its applicability.
  • the network device 1000 and the network device 1100 involved in the foregoing embodiments shown in FIG. 1 to FIG. 9 will be described below with reference to the accompanying drawings.
  • the network device 1000 is applied in the embodiment described above with respect to Figures 2-6.
  • the network device 1100 is applied in the embodiments described above with reference to Figures 2-3 and Figures 7-9, which are described below.
  • FIG. 10 is a schematic structural diagram of a network device 1000 according to an embodiment of the present application.
  • the network device 1000 includes a receiving module 1002 and a processing module 1004.
  • the receiving module 1002 is configured to receive a message; the specific detailed processing function of the receiving module 1002 or the steps that can be performed may refer to the detailed description in 5a in the foregoing embodiment shown in FIG. 5 .
  • The processing module 1004 is configured to identify that the flow to which the packet belongs is delay-sensitive traffic for which resources have been reserved, where the reserved resources include the number of packets that the flow can send in one time window, and to schedule the packet to be sent in a specific time window according to the number of packets that the flow can send in one time window and the number of packets already present in the queue used to send the flow (that is, the cumulative sending situation within the one time window).
  • For the detailed processing functions or steps that can be performed by the processing module 1004, refer to the detailed descriptions in 5b-5h in the embodiment shown in FIG. 5 above and in S601-S608 in FIG. 6.
  • In a specific embodiment, the network device further includes a first storage module 1006.
  • The first storage module 1006 is configured to store preconfigured queue resource reservation information and traffic resource reservation information for sending the flow. The queue resource reservation information includes the queues used to send the flow and the enqueue timing and output timing of those queues, where the enqueue timing is used to define the enqueue queue of each time window, and the output timing is used to define the on/off state of each queue in each time window. The traffic resource reservation information records the current enqueue queue of the flow and a packet count representing the number of packets in the current enqueue queue; the cumulative sending situation within the one time window is the number of packets in the current enqueue queue.
  • That the processing module schedules the packet to be sent in a specific time window according to the number of packets that the flow can send in one time window and the number of packets already sent in the one time window specifically includes:
  • The processing module determines the arrival time window of the packet, where the arrival time window is the time window of the egress port of the network device when the packet arrives, and queries the number of packets in the current enqueue queue. It then determines the enqueue queue of the flow according to the arrival time window, the enqueue timing of the queues, the number of packets that the flow can send in one time window, and the number of packets in the current enqueue queue, and adds the packet to the determined enqueue queue of the flow. In the time window, defined in the output timing, for opening the queue in which the packet is located, that queue is opened and the packet is sent.
  • For details of the first storage module 1006, refer to the detailed description of step S304 in the embodiment shown in FIG. 3 above, and the detailed descriptions of Tables 1-3 and their corresponding text.
  • For the detailed processing functions or steps that can be performed by the processing module 1004, refer to the detailed descriptions in 5b-5h in the embodiment shown in FIG. 5 above and in S601-S608 in FIG. 6.
  • In a specific embodiment, the network device further includes a first resource reservation module 1008. The first resource reservation module 1008 is configured to reserve resources for the flow in advance and to configure the traffic resource reservation information during the resource reservation process.
  • For the detailed processing functions or steps that can be performed by the first resource reservation module 1008, refer to the detailed description of step S304 in the embodiment shown in FIG. 3 above, and the detailed descriptions of Tables 1-3 and their corresponding text.
  • FIG. 11 is a schematic structural diagram of a network device 1100 according to an embodiment of the present application.
  • the network device 1100 includes a receiving module 1102 and a processing module 1104.
  • the receiving module 1102 is configured to receive a message; the specific detailed processing function of the receiving module 1102 or the steps that can be performed may refer to the detailed description in 7a in the foregoing embodiment shown in FIG. 7.
  • The processing module 1104 is configured to identify that the flow to which the packet belongs is delay-sensitive traffic for which resources have been reserved, where the reserved resources include the number of packets that the flow can send in one time window, and to schedule the packet to be sent in a specific time window according to the number of packets that the flow can send in one time window and the cumulative sending situation within the one time window.
  • In a specific embodiment, the network device further includes a second storage module 1106. The second storage module 1106 is configured to store preconfigured queue resource reservation information and traffic resource reservation information for sending the flow. The queue resource reservation information includes a queue corresponding to the flow on a one-to-one basis and a dequeue gate configured for that queue, where the dequeue gate is used to control the number of packets sent in each time window. The traffic resource reservation information includes the number of packets that the flow can send in one time window.
  • That the processing module 1104 schedules the packet to be sent in a specific time window according to the number of packets that the flow can send in one time window and the cumulative sending situation within the one time window specifically includes:
  • The processing module 1104 adds the packet to the queue corresponding to the flow to which the packet belongs, and takes packets out of the queue corresponding to the flow and sends them according to the dequeue gate. The dequeue gate is updated per time window; its initial value in each time window is the number of packets that the flow corresponding to the queue can send in one time window, and it is decremented according to the number of packets sent in each time window.
  • For details of the second storage module 1106, refer to the detailed description of step S304 in the embodiment shown in FIG. 3 above.
  • In a specific embodiment, the network device further includes a second resource reservation module 1108. The second resource reservation module 1108 is configured to reserve resources for the flow in advance and to configure the traffic resource reservation information and the queue resource reservation information during the resource reservation process.
  • It should be understood that the device embodiments described above are merely illustrative. For example, the division into modules is only a division by logical function; in actual implementation there may be other divisions, for example, multiple modules, units, or components may be combined, or a module may be further divided into different functional modules. For instance, the function of the first resource reservation module in the network device in the foregoing embodiment may also be combined with the processing module into one module.
  • In addition, the couplings or communication connections between the modules or devices shown or described in the figures may be indirect couplings or communication connections through some interfaces, devices, or units, or may be couplings or connections in electrical, mechanical, or other forms.
  • Components described as separate parts may be physically separate, or may be physically located in the same physical component. A component named as a module may be a hardware unit, a software module, a logic unit, or a combination of hardware and software. A module may be located in one network element or may be distributed across multiple network elements. Some or all of the units may be selected according to actual needs to achieve the objectives of the solutions of the embodiments.
  • FIG. 12 is a schematic structural diagram of a network device 1200 involved in the embodiment of the present application.
  • the network device 1200 can be applied in the embodiments shown in Figures 2-9 above.
  • the functions or operational steps of the network device are implemented by one or more processors in a general purpose computer or server by executing program code in the memory.
  • the network device 1200 includes a transceiver 1210, a processor 1220, a random access memory 1240, a read only memory 1250, and a bus 1260.
  • the processor 1220 is coupled to the transceiver 1210, the random access memory 1240, and the read only memory 1250 via a bus 1260.
  • the processor 1220 can be a general purpose central processing unit (CPU), a microprocessor, an application-specific integrated circuit (ASIC), or one or more integrated circuits for controlling the execution of the program of the present invention.
  • Bus 1260 can include a path for communicating information between the components described above.
  • The transceiver 1210 is configured to communicate with other devices or communication networks, such as an Ethernet, a radio access network (RAN), or a wireless local area network (WLAN). In this embodiment of the present invention, the transceiver 1110 can be used to communicate with the network control plane, a source device, or other network devices.
  • the random access memory 1240 can load application code that implements the network devices in the embodiment shown in FIGS. 2-6 and is controlled by the processor 1220 for execution.
  • In another specific implementation, the random access memory 1240 can load application program code that implements the network device in the embodiments described in FIGS. 2-3 and FIGS. 7-9, and execution is controlled by the processor 1220.
  • When the network device 1200 needs to run, it is started through the basic input/output system solidified in the read-only memory 1250 or through the bootloader of an embedded system, which boots the network device 1200 into a normal running state.
  • After the network device 1200 enters the normal running state, the processor 1220 runs the application program and the operating system in the random access memory 1240, so that the network device 1200 can perform the functions and operations in the embodiments shown in FIGS. 2-6, or the functions and operations in the embodiments shown in FIGS. 2-3 and FIGS. 7-9.
  • The interaction with the network control plane, other network devices, or the source device is completed by the transceiver 1210 under the control of the processor 1220, and the internal processing of the network device 1200 is completed by the processor 1120.
  • It should be noted that, in addition to the conventional manner described above in which a processor executes program code instructions in a memory, this embodiment may also be implemented as a virtual network device on a physical server combined with network functions virtualization (NFV) technology; the virtual network device may be a virtual switch, a virtual router, or another forwarding device. After reading this application, those skilled in the art can use the NFV technology to virtualize, on a physical server, multiple network devices having the foregoing functions. Details are not repeated here.
  • An embodiment of the present invention further provides a computer storage medium configured to store the computer software instructions used by the foregoing network device, which include a program for performing the functions of the network device in the embodiments shown in FIGS. 2-6.
  • An embodiment of the present invention further provides another computer storage medium configured to store the computer software instructions used by the foregoing network device, which include a program for performing the functions of the network device in the embodiments shown in FIGS. 2-3 and FIGS. 7-9.
  • embodiments of the present application can be provided as a method, apparatus (device), or computer program product. Therefore, the embodiments of the present application may take the form of a hardware embodiment, a software embodiment, or a combination of software and hardware. Moreover, the application can take the form of a computer program product embodied on one or more computer-usable storage media (including but not limited to disk storage, CD-ROM, optical storage, etc.) including computer usable program code.
  • the computer program is stored/distributed in a suitable medium, provided with other hardware or as part of the hardware, or in other distributed forms, such as over the Internet or other wired or wireless telecommunication systems.
  • These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing device to operate in a particular manner, so that the instructions stored in the computer-readable memory produce an article of manufacture that includes an instruction apparatus, where the instruction apparatus implements the functions specified in one or more procedures of the flowcharts and/or one or more blocks of the block diagrams.
  • These computer program instructions may also be loaded onto a computer or other programmable data processing device, so that a series of operation steps are performed on the computer or other programmable device to produce computer-implemented processing; the instructions executed on the computer or other programmable device thereby provide steps for implementing the functions specified in one or more procedures of the flowcharts and/or one or more blocks of the block diagrams.

Abstract

本申请实施例涉及一种报文发送的方法,在该方法中,网络设备接收报文,该网络设备识别出所述报文所属的流为已预留资源的时延敏感流量,所述已预留资源中包括所述流在一个时间窗内可发送的报文数;然后,所述网络设备根据所述流在一个时间窗内可发送的报文数以及所述一个时间窗内累计发送报文的情况用于发送所述流的队列中已存在的报文数将所述报文安排在特定的时间窗发出。该方法可以增加延时敏感流量的使用带宽,提高带宽利用率。

Description

一种发送报文的方法、设备和系统 技术领域
本申请实施例涉及网络传输领域,特别涉及一种发送报文的方法、设备和系统。
背景技术
时延敏感网络一般指应用在工业控制等特殊领域的通信网络,一般来说该种网络对特定流量的发送端到接收端的端到端的时延有上限要求。如果报文到达目的地的时间晚于承诺时间,报文可能失去了时效性而变为无效。
一般来说,对于时延敏感流量,需要在其所处的网络的端到端路径上的各个节点,端口等层面为该流量预留一定资源,从而防止其在传输过程中发生不可预测的拥塞,产生额外的排队时延。
目前现有技术采用全局时钟同步的方法来发送时延敏感流量。该方法首先对全网所有节点的时钟有严格的同步要求,其次整个网络维护统一的时间窗节拍。全网所有节点上静态配置了各端口在每个时间窗的入队队列。网络设备在接收到报文后,按照当前全局时钟处在的时间窗将报文加入到相应出端口的队列中,加入的队列将在下一个时间窗被打开,报文被调度发出。
为了满足延时敏感流量的时延上限要求,上述方法在使用过程中有两个约束。约束1:上游网络设备在时间窗N发送出来的报文务必要要在下一网络设备的时间窗N内收到。约束2:当前网络设备在时间窗N内收到的报文务必要在时间窗N内进入队列。如此,在报文从源端发送到目的端的过程中,报文在所经过的每个网络设备上都能保证网络设备在第N个时间窗收到的报文,一定能在第N+1个时间窗内发送出去,并且该报文会在下一节点的第N+1个时间窗内收到。那么报文的端到端时延的最大值便是(K+1)*T,其中,K是报文在网络中经历的跳数,T是全局统一的时间窗宽度。如此,便具有可承诺的传输时延。
按照上述方法,报文的入队、出队时序如图1所示。由图1可看出,由于网络设备之间可能存在传输时延以及网络设备内部可能存在报文处理时延,因此,源端所发送报文需要经历传输时延和报文处理时延才能恰好在图1中所示的一个时间窗最末尾到达下一网络设备的队列,这就要求源端不能晚于某个时刻发送报文,否则,报文将无法保证在同一个时间窗内入队。
也就是说,为了保证延时敏感流量的具有可承诺的时延上限,源端只能使用每个时间窗内的很小的部分时间用于发送时延敏感流量。例如,假设全网统一时间窗时间为30us,网络节点相距1千米,网络节点的报文处理时延是20us,链路的传输速率是10Gbps。按上述方法,可供时延敏感流量使用的带宽将少于1.6Gbps(1Km光纤传输时延5us,在30us时间窗内有5us传输时延和20us报文处理时延无法使用,能够使用的带宽(30us-20us-5us)/30usx10Gbps=1.6Gbps)。由此可见,现有技术可供时延敏感流量使用的带宽较少,资源利用率低。
发明内容
本申请实施例提供了一种发送报文的方法、设备和系统和存储介质,可以增加延时敏感流量的使用带宽,提高带宽利用率。
第一方面,本申请实施例提供了一种发送报文的方法,该方法应用于传输系统中的网络设备,在该方法中,网络设备接收报文,并识别出所述报文所属的流为已预留资源的时延敏感流量,所述已预留资源中包括所述流在一个时间窗内可发送的报文数。然后,该网络设备根据所述流在一个时间窗内可发送的报文数以及所述一个时间窗内累计发送报文的情况用于发送所述流的队列中已存在的报文数将所述报文安排在特定的时间窗发出。
本申请实施例中的报文的输出时间窗是根据实时的信息(如,一个时间窗内累计发送报文的情况)动态确定的,而不是完全按照静态配置来确定。采用这种方式,可以增加发送报文过程中的灵活性,传输设备只需保证每个时间窗内发送的报文数量符合一个时间窗内可发送的报文数就可以了,而不用对每个报文的发送时间进行约束。也就是说,网络设备可以在一个时间窗内发送的报文数量达到可发送的报文数量后,将下一报文安排到下一个时间窗来发送。因此,采用这种方式,网络设备就可将上游设备在一个时间窗内任意时间点发送的报文,安排到一个合适的时间窗发出,从而避免了对上游设备发送报文的时间产生约束,提升时延敏感流量的可利用带宽,减少带宽的资源浪费。
在一种可能的实现方式中,所述网络设备中预先配置有用于发送所述流的队列资源预留信息以及流量资源预留信息。该队列资源预留信息中包括用于发送所述流的队列以及所述队列的入队时序和输出时序,所述入队时序用于定义每个时间窗的入队队列,所述输出时序用于定义每个时间窗内各队列的开关状态。该流量资源预留信息中记录有所述流当前的入队队列以及用于表示所述当前的入队队列中的报文数的报文计数。其中,所述一个时间窗内累计发送报文的情况为所述所述当前的入队队列中的报文数。
相应的,所述网络设备根据所述流在一个时间窗内可发送的报文数以及所述一个时间窗内已累计发送的报文数将所述报文安排在特定的时间窗发出,包括:
所述网络设备确定所述报文的到达时间窗,所述到达时间窗为所述报文到达时所述网络设备时出端口的时间窗;查询当前的入队队列中的报文数;根据所述到达时间窗,所述队列的入队时序,所述流在一个时间窗内可发送的报文数以及当前的入队队列的报文数确定所述流的入队队列;将所述报文加入到确定出的所述流的入队队列中;在所述输出时序中定义的打开所述报文所在队列的时间窗打开所述报文所在队列,将所述报文发出。
本实施例中,由于每个时间窗都能按照要求的数量来发送报文,从结果上来说,延时敏感流量也仍然具有可承诺的端到端延时。也就是说,本实施例通过定义基于时间窗的入队时序和输出时序,可以控制报文在传输过程中的总时延,不需要通过约束发送时间严格控制每个网络设备中的时延。既能保证延时敏感流量具有可承诺的端到端延时,又能提升时延敏感流量的可利用带宽,减少带宽的资源浪费。
在一种可能的实现方式中,前述网络设备根据所述到达时间窗,所述队列的入队时序,所述流在一个时间窗内可发送的报文数以及当前的入队队列的报文数确定所述流的入队队列,具体包括:所述网络设备在所述当前的入队队列的报文数已达到一个时间窗内可发送的报文数的情况下,将所述到达时间窗的下一时间窗的入队队列确定为所述流的入队队列;或者,在所述当前的入队队列的报文数未达到一个时间窗内可发送的报文数的情况下,将所述到达时间窗的入队队列确定为所述流的入队队列。
本实施例中,通过动态的切换入队队列,可以在一个时间窗内发送的报文数量达到可发送的报文数量后,将下一报文安排到下一个时间窗来发送,增加了灵活性,在提升时延敏感流量的可利用带宽的同时,也能保证时间窗内发送的报文满足时间窗内发送报文的数量要求,符合延时敏感流量的流量特征要求。
在一种可能的实现方式中,所述入队时序中,一个时间窗还有一个备选入队队列,其中,上一个时间窗的备选入队队列为下一个时间窗的入队队列。
相应的,前述所述网络设备根据所述到达时间窗,所述队列的入队时序,所述流在一个时间窗内可发送的报文数以及当前的入队队列的报文数确定所述流的入队队列,具体包括:
所述网络设备在所述当前的入队队列的报文数已达到一个时间窗内可发送的报文数的情况下,将所述到达时间窗的备选入队队列确定为所述流的入队队列;或者,在所述当前的入队队列的报文数未达到一个时间窗内可发送的报文数的情况下,将所述到达时间窗的入队队列确定为所述流的入队队列。
本实施例中,通过设置备选入队队列,无需在发送报文的过程中查询到达时间窗的下一时间窗的入队队列,可以提升处理效率,
在一种可能的实现方式中,所述输出时序中,第M个时间窗的入队队列在第M+1个时间窗为打开状态,在其它时间窗为关闭状态,M为大于等于1的整数。
本实施例中,通过该输出时序的控制,能保证网络设备在本地的第N个时间窗接收到的报文会在本地的第N+1(或N+2)个时间窗内发出,这样就能保证端到端的时延最大值为((端到端的跳数+1)*(时间窗大小))+路径上的时间窗边界差值之和。因此,本申请实施例可以提升时延敏感流量的可利用带宽的同时,也能更好的保证延时敏感流量的端到端时延。
在一种可能的实现方式中,所述方法还包括:所述网络设备在确定出所述流的入队队列后,根据确定的入队队列更新所述流量资源预留信息中记录的入队队列;所述网络设备在每次更新所述记录的入队队列时,将所述流量资源预留信息中记录的报文数恢复初始值,并在每次向所述更新后的入队队列加入报文的时候,对所述报文数进行累计。
本实施例中,通过记录报文数等动态信息,并直接应用到报文发送过程中,能够减少在报文发送过程中的计算,提高处理效率。
在一种可能的实现方式中,所述网络设备预先为所述流预留资源,在所述预留资源的过程中配置所述流量资源预留信息。
本实施例中通过预留资源,可以减少延时敏感流量在传输过程中的时延。
在一种可能的实现方式中,所述网络设备中预先配置有用于发送所述流的队列资源预留信息以及流量资源预留信息。该队列资源预留信息中包括与所述流一一对应的队列以及为所述队列配置的出队门控,所述出队门控用于控制在每个时间窗内发送的报文数;所述流量资源预留信息中包括所述流在一个时间窗内可发送的报文数。
相应的,所述网络设备根据所述流在一个时间窗内可发送的报文数以及所述一个时间窗内累计发送报文的情况将所述报文安排在特定的时间窗发出,具体包括:
所述网络设备将所述报文加入到与所述报文所属的流对应的队列中;按照所述出队门控从所述流对应的队列中取出报文进行发送,所述出队门控按时间窗进行更新;其中,所述出队门控在每个时间窗内的初始值为所述队列对应的流在一个时间窗内可发送的报文数,并根据每个时间窗内发送的报文数量递减。
本实施例中,由于不对入队进行时间窗上的限制,而是在出队过程中来控制每个时间窗内的发送报文,因此对接收到报文的时间没有要求,也不会对上游设备的发送时间的产生约束,能够让上游设备在几乎整个时间窗内都可以发送延时敏感流量提升时延敏感流量的可利用带宽,减少带宽的资源浪费。
在一种可能的实现方式中,所述网络设备监控时间窗的更新,在每当时间窗更新时,从所述流量资源预留信息中获取所述流在一个时间窗内可发送的报文数,按照所述流在一个时 间窗内可发送的报文数更新所述出队门控。
本实施例中,通过按时间窗以及流在一个时间窗内可发送的报文数更新出队门控,从而保证每个时间窗内发送的报文满足时间窗内发送报文的数量要求,符合延时敏感流量的流量特征要求。
在一种可能的实现方式中,按照所述出队门控从所述流对应的队列中取出报文进行发送,具体包括:所述网络设备实时检查所述流对应的队列中的报文和所述令牌桶中的令牌,在所述流对应的队列中存在报文且令牌桶中存在令牌的情况下,取出报文发送,直至令牌桶中令牌为空或所述流对应的队列中的报文为空。
本实施例中,由于每个时间窗发送的报文数都能得到保证,这样就能保证端到端的时延具有可承诺的上限。因此,本申请实施例可以提升时延敏感流量的可利用带宽的同时,也能保证延时敏感流量具有可承诺的端到端延。
在一种可能的实现方式中,所述网络设备预先为所述流预留资源,在所述预留资源的过程中配置所述流量资源预留信息和队列资源预留信息。
本实施例中,由于队列资源是按流分配的,而不是基于时间窗分配的,因此,无需要求全网的时间窗对齐,可部署在不支持时间对齐的设备上,扩展了应用性。
第二方面,本申请实施例提供了一种管理报文发送的方法,该方法应用于可传输延时敏感流量的网络设备中。在该方法中,网络设备接收报文,并识别出所述报文所属的流为已预留资源的时延敏感流量。该网络设备获取所述报文的到达时间窗以及所述流的流量资源预留信息。所述流量资源预留信息中记录有所述流在一个时间窗内可发送的报文数以及当前的入队队列中报文数。所述网络设备基于所述到达时间窗,所述流在一个时间窗内可发送的报文数以及当前的入队队列中的报文数确定所述流的入队队列,将所述报文加入到所述入队队列,并通过队列调度将所述报文发出。
本申请实施例中的网络设备基于报文的到达时间窗,所述流在一个时间窗内可发送的报文数以及当前的入队队列中的报文数确定所述流的入队队列动态的将报文加入不同队列并调度输出。采用这种方式,可以增加发送报文过程中的灵活性,传输设备只需保证每个时间窗内发送的报文数量符合一个时间窗内可发送的报文数就可以了,而不用对每个报文的发送时间进行约束。也就是说,网络设备可以在一个时间窗内发送的报文数量达到可发送的报文数量后,将下一报文安排到下一个时间窗来发送。因此,采用这种方式,网络设备就可将上游设备在一个时间窗内任意时间点发送的报文,安排到一个合适的时间窗发出,从而避免了对上游设备发送报文的时间产生约束,提升时延敏感流量的可利用带宽,减少带宽的资源浪费。
在一种可能的实现方式中,所述网络设备中预先配置有用于发送所述流的队列资源预留信息以及流量资源预留信息;所述队列资源预留信息中包括用于发送所述流的队列以及所述队列的入队时序,所述入队时序用于定义每个时间窗的入队队列。所述网络设备基于所述到达时间窗,所述流在一个时间窗内可发送的报文数以及当前的入队队列中的报文数确定所述流的入队队列,包括:所述网络设备在所述当前的入队队列的报文数已达到一个时间窗内可发送的报文数的情况下,将所述到达时间窗的下一时间窗在所述入队时序中的入队队列确定为所述流的入队队列。在所述当前的入队队列的报文数未达到一个时间窗内可发送的报文数的情况下,将所述到达时间窗在所述入队时序中的入队队列确定为所述流的入队队列。
在一种可能的实现方式中,所述队列资源预留信息中还包括输出时序,所述输出时序用于定义每个时间窗内各队列的开关状态。所述通过队列调度将所述报文发出包括:所述网络设备在所述输出时序中定义的打开所述报文所在队列的时间窗打开所述报文所在队列,将所 述报文发出。
第三方面,本申请实施例提供了一种网络设备,该网络设备包括所述接收模块,用于接收报文。该处理模块,用于识别识别出所述报文所属的流为已预留资源的时延敏感流量,所述已预留资源中包括所述流在一个时间窗内可发送的报文数。然后,所述处理模块根据所述流在一个时间窗内可发送的报文数以及所述一个时间窗内累计发送报文的情况用于发送所述流的队列中已存在的报文数将所述报文安排在特定的时间窗发出。
本申请实施例中的报文的输出时间窗是根据实时的信息(如,一个时间窗内累计发送报文的情况)动态确定的,而不是完全按照静态配置来确定。采用这种方式,可以增加发送报文过程中的灵活性,传输设备只需保证每个时间窗内发送的报文数量符合一个时间窗内可发送的报文数就可以了,而不用对每个报文的发送时间进行约束。也就是说,网络设备可以在一个时间窗内发送的报文数量达到可发送的报文数量后,将下一报文安排到下一个时间窗来发送。因此,采用这种方式,网络设备就可将上游设备在一个时间窗内任意时间点发送的报文,安排到一个合适的时间窗发出,从而避免了对上游设备发送报文的时间产生约束,提升时延敏感流量的可利用带宽,减少带宽的资源浪费。
在一种可能的实现方式中,所述网络设备还包括第一存储模块。该第一存储模块用于存储有预先设置的用于发送所述流的队列资源预留信息以及流量资源预留信息。其中,所述队列资源预留信息中包括用于发送所述流的队列以及所述队列的入队时序和输出时序,所述入队时序用于定义每个时间窗的入队队列,所述输出时序用于定义每个时间窗内各队列的开关状态。所述流量资源预留信息中记录有所述流当前的入队队列以及用于表示所述当前的入队队列中的报文数的报文计数;所述一个时间窗内累计发送报文的情况为所述所述当前的入队队列中的报文数。
相应的,所述处理模块根据所述流在一个时间窗内可发送的报文数以及所述一个时间窗内已累计发送的报文数将所述报文安排在特定的时间窗发出,具体包括:所述处理模块确定所述报文的到达时间窗,该到达时间窗为所述报文到达时所述网络设备时出端口的时间窗;查询当前的入队队列中的报文数;之后,该处理模块根据所述到达时间窗,所述队列的入队时序,所述流在一个时间窗内可发送的报文数以及当前的入队队列的报文数确定所述流的入队队列并将所述报文加入到确定出的所述流的入队队列中;然后,在所述输出时序中定义的打开所述报文所在队列的时间窗打开所述报文所在队列,将所述报文发出。
本实施例中,由于每个时间窗都能按照要求的数量来发送报文,从结果上来说,延时敏感流量也仍然具有可承诺的端到端延时。也就是说,本实施例通过定义基于时间窗的入队时序和输出时序,可以控制报文在传输过程中的总时延,不需要通过约束发送时间严格控制每个网络设备中的时延。既能保证延时敏感流量具有可承诺的端到端延时,又能提升时延敏感流量的可利用带宽,减少带宽的资源浪费。
在一种可能的实现方式中,所述处理模块根据所述到达时间窗,所述队列的入队时序,所述流在一个时间窗内可发送的报文数以及当前的入队队列的报文数确定所述流的入队队列,具体包括:所述处理在所述当前的入队队列的报文数已达到一个时间窗内可发送的报文数的情况下,将所述到达时间窗的下一时间窗的入队队列确定为所述流的入队队列;或者,在所述当前的入队队列的报文数未达到一个时间窗内可发送的报文数的情况下,将所述到达时间窗的入队队列确定为所述流的入队队列。
本实施例中,通过动态的切换入队队列,可以在一个时间窗内发送的报文数量达到可发送的报文数量后,将下一报文安排到下一个时间窗来发送,增加了灵活性,在提升时延敏感 流量的可利用带宽的同时,也能保证时间窗内发送的报文满足时间窗内发送报文的数量要求,符合延时敏感流量的流量特征要求。
在一种可能的实现方式中,所述网络设备进一步包括资源预留模块,所述资源预留模块,用于预先为所述流预留资源,在所述预留资源的过程中配置所述流量资源预留信息。
在一种可能的实现方式中,所述网络设备进一步包括第二存储模块,所述第二存储模块,用于存储预先配置的用于发送所述流的队列资源预留信息以及流量资源预留信息,所述队列资源预留信息中包括与所述流一一对应的队列以及为所述队列配置的出队门控,所述出队门控用于控制在每个时间窗内发送的报文数;所述流量资源预留信息中包括所述流在一个时间窗内可发送的报文数;
相应的,所述处理模块根据所述流在一个时间窗内可发送的报文数以及所述一个时间窗内累计发送报文的情况将所述报文安排在特定的时间窗发出,具体包括:所述处理模块将所述报文加入到与所述报文所属的流对应的队列中;按照所述出队门控从所述流对应的队列中取出报文进行发送,所述出队门控按时间窗进行更新;所述出队门控在每个时间窗内的初始值为所述队列对应的流在一个时间窗内可发送的报文数,并根据每个时间窗内发送的报文数量递减。
本实施例中,由于不对入队进行时间窗上的限制,而是在出队过程中来控制每个时间窗内的发送报文,因此对接收到报文的时间没有要求,也不会对上游设备的发送时间的产生约束,能够让上游设备在几乎整个时间窗内都可以发送延时敏感流量提升时延敏感流量的可利用带宽,减少带宽的资源浪费。
在一种可能的实现方式中,所述网络设备还包括第二资源预留模块;所述第二资源预留模块用于,预先为所述流预留资源,在所述预留资源的过程中配置所述流量资源预留信息和队列资源预留信息。
本实施例中,由于队列资源是按流分配的,而不是基于时间窗分配的,因此,无需要求全网的时间窗对齐,可部署在不支持时间对齐的设备上,扩展了应用性。
第四方面,本申请实施例提供了一种报文发送系统。所述系统包括网络控制面以及至少一个所述的网络设备,所述网络控制面在接受源端设备发送的流量申请请求后,向该流所在路径上的所述至少一个网络设备发送流量申请成功的通知,所述通知中包括申请资源预留的流的信息;所述网络设备用于根据所述通知中的流的信息对所述流进行资源预留配置,接收报文,识别识别出所述报文所属的流为已预留资源的时延敏感流量,所述已预留资源中包括所述流在一个时间窗内可发送的报文数。然后,所述网络设备根据所述流在一个时间窗内可发送的报文数以及所述一个时间窗内累计发送报文的情况用于发送所述流的队列中已存在的报文数将所述报文安排在特定的时间窗发出。
本申请实施例中的报文的输出时间窗是根据实时的信息(如,一个时间窗内累计发送报文的情况)动态确定的,而不是完全按照静态配置来确定。采用这种方式,可以增加发送报文过程中的灵活性,传输设备只需保证每个时间窗内发送的报文数量符合一个时间窗内可发送的报文数就可以了,而不用对每个报文的发送时间进行约束。也就是说,网络设备可以在一个时间窗内发送的报文数量达到可发送的报文数量后,将下一报文安排到下一个时间窗来发送。因此,采用这种方式,网络设备就可将上游设备在一个时间窗内任意时间点发送的报文,安排到一个合适的时间窗发出,从而避免了对上游设备发送报文的时间产生约束,提升时延敏感流量的可利用带宽,减少带宽的资源浪费。
在一个可能的实现方式中,所述网络设备还用于执行上述第一方面的各种可能的实现方 式中的任一方法。
第五方面,本申请实施例提供了一种网络设备,该网络设备包括处理器,所述处理器耦合至存储器,所述处理器执行所述存储器上的程序时实现如上述第一方面或基于该第一方面的各种可能的实现方式中的任一方法。
本实施例中的网络设备的各实施方式的效果可以参考上面相应地方的描述以及说明书中相关部分的描述,此处不再一一赘述。
第八方面,本申请实施例提供了一种计算机可读存储介质,该计算机存储介质中存储有程序代码,该程序代码用于指示执行第一方面或基于该第一方面的各种可能的实现方式中的任一方法。
以上计算机可读存储介质的各实施方式的效果可以参考上面相应地方的描述以及说明书中相关部分的描述,此处不再一一赘述。
附图说明
图1为本申请实施例提供的一种报文入队以及出队时序示意图;
图2为本申请实施例提供的一种报文传输系统架构示意图;
图3为本申请实施例提供的流量资源预留方法流程图;
图4为本申请实施例提供的一种调度优先级示意图;
图5为本申请实施例提供的一种网络设备发送报文的方法流程图;
图6为本申请实施例提供的另一种网络设备发送报文的方法程流程图;
图7为本申请实施例提供的再一种网络设备发送报文的方法程流程图;
图8为本申请实施例提供的一种报文出队的流程图;
图9为本申请实施例提供的一种令牌桶更新的方法流程图;
图10为本申请实施例提供的一种网络设备的结构示意图;
图11为本申请实施例提供的另一种网络设备的结构示意图。
图12为本申请实施例提供的再一种网络设备的结构示意图。
具体实施方式
下面将结合附图对本发明作进一步地详细描述。
本申请实施例可应用于可传输时延敏感流量的传输系统中。该传输系统可位于二层交换网络或三层交换网络。如图2所示,图2为本申请实施例提供的传输系统的结构示意图,该传输系统中可包括源端设备201、至少一个网络设备202、目的设备203以及网络控制面204.
其中,源端设备201可以是工业控制网络场景中的控制主机,或者,工业传感器,或者物联网场景中的传感器等。网络设备202可以是交换设备或路由器等。目的设备203可以是执行器(如,工业控制网络场景中的伺服电机或者物联网场景中的信息处理中心等)。网络控制面204可以是控制器或管理服务器。
在传输系统初始化阶段,由网络控制面204设置统一的时间窗大小(例如,125us为一个时间窗),并将设置的时间窗大小发送给传输系统中的各网络设备202。需要说明的是,网络控制面204也可以仅将时间窗大小发送给参与时延敏感流量传输的网络设备202。网络控制面204可以根据网络拓扑以及现有的报文转发规则来确定参与时延敏感流量传输的网络设备202。网络设备202按照网络控制面204设置的时间窗大小配置本地各端口的时间窗的边 界相位。通过上述初始化,传输系统中的各网络设备202中的时间窗被设置为大小相同,但边界可不对齐的时间窗。在后续传输报文的过程中,可根据真实时间计算或查表得到当前的时间窗。
在时延敏感网络领域,时间窗是一个重要的概念。时间窗是一段连续时间。通常情况下,网络控制面304将网络设备的出端口的时间划分为多个时间窗周期性循环的过程,例如“时间窗1,时间窗2,时间窗3,时间窗1……”。每个时间窗根据链路的速率,拥有一定的数据发送能力。例如,对于10Gbit的链路,125us大小的时间窗,即可在一个时间窗内发送1250Kbit数据(约100个1.5KB的报文)。因此,在后续传输报文的过程中可根据时间窗对报文的入队或出队进行控制。
上述传输系统应用在传输时延敏感流量的场景中时,在传输报文前,还需要在该时延敏感流量传输的端到端路径上的各网络设备中为其预留流量资源,从而防止该延时敏感流量发生不可预测的拥塞,产生额外的排队时延。该流量资源预留过程如图3所示。
S301,源端设备向网络控制面发送流量申请请求。
由于要保证时延敏感流量的端到端时延,时延敏感流量在发送前一般需要进行流量申请。流量申请一般按照“若干报文每时间窗”(一般称为流量特征)或者相应的流量速率进行申请。
该流量申请请求中携带要申请的流的信息以及流量特征。其中,流的信息可以包括能标识该流的信息(如,源地址、目的地址、目的端口号、差分服务代码点(Differentiated Services Code Point,DSCP)、或协议)等。
S302,网络控制面根据该流所在路径上的各网络设备中相应的出端口的剩余发送能力确定是否接受请求。
其中,网络设备中相应的出端口为网络设备中用于发送该流的端口。网络控制面可以根据传输系统的网络拓扑以及现有的报文转发规则来确定该流的传输路径。该传输路径包括传输该流的网络设备,以及在网络设备中的出端口。
如果该流量申请请求中请求的流量特征大于该路径上任一出端口目前剩余的发送能力,则请求失败,网络控制面拒绝该请求并反馈失败信息给源端设备。如果该流量申请请求中请求的流量特征小于或等于该路径上任一出端口目前剩余的发送能力,则请求成功,网络控制面接受该请求并反馈成功信息给源端设备,并更新该路径上的各网络设备中出端口的发送能力。例如,对于125us,10Gbps的网络,流1申请每周期发送90个1.5KB报文,申请成功;流1申请完成之后,每时间窗还剩约10个报文的发送能力,然后流2再申请时间窗发送50个1.5KB报文,请求失败。
其中,端口的发送能力即该端口在一个时间窗内的发送报文的最大数量。该发送能力可通过链路带宽,窗口大小,最大报文大小计算得到。
S303,网络控制面接受该流量申请请求后,向该流所在路径上的各网络设备发送流量申请成功的通知。
网络控制面可采用控制报文来发送该通知。该通知中包括申请资源预留的流的信息以及流量特征。其中,该流的信息以及流量特征可以是从步骤S301的流量申请请求中获得的。
需要说明的是,图中仅示出了一个网络设备,实际中网络控制面会向该路径上的各网络设备发送流量申请成功的通知。
S304,接收到通知的各网络设备对该流进行流量资源预留配置。
该流量资源预留配置主要包括:
1.记录该流的信息。具体的,可将该流的信息更新到网络设备的流表中。在后续传输报文的过程中,网络设备可依据流表来识别该流。识别该流可采用解析该流的源、目的IP、端口号、DSCP、或协议等方法。
2.配置该流的流量资源预留信息,该流量资源预留信息中包括流量特征(即,该流在一个时间窗内可发送的报文数)。
在执行完上述资源预留流程后,网络设备就可以根据资源预留过程中配置的信息对流进行传输了。
在本申请实施例传输报文的过程中,对于时延敏感流量的报文,网络设备根据该流在一个时间窗内可发送的报文数以及一个时间窗内累计发送报文的情况来将该报文安排在特定的时间窗发出。
与现有技术不同的是,报文的输出时间窗是根据实时的信息(如,一个时间窗内累计发送报文的情况)动态确定的,而不是完全按照静态配置来确定。采用这种方式,可以增加发送报文过程中的灵活性,传输设备只需保证每个时间窗内发送的报文数量符合一个时间窗内可发送的报文数就可以了,而不用对每个报文的发送时间进行约束。也就是说,网络设备可以在一个时间窗内发送的报文数量达到可发送的报文数量后,将下一报文安排到下一个时间窗来发送。因此,采用这种基于一个时间窗内可发送的报文数以及一个时间窗内累计发送报文的情况来安排报文的发送时间窗的方案,网络设备就可将上游设备在一个时间窗内任意时间点发送的报文,安排到一个合适的时间窗发出,从而避免了对上游设备发送报文的时间产生约束,能够让上游设备在几乎整个时间窗内都可以发送延时敏感流量。而且,报文的发送时间窗是按照一个时间窗内可发送的报文数以及一个时间窗内累计发送报文的情况来确定的,也能保证每个时间窗内发送的报文满足时间窗内发送报文的数量要求,符合延时敏感流量的流量特征要求。另外,由于每个时间窗都能按照要求的数量来发送报文,从结果上来说,延时敏感流量也仍然具有可承诺的端到端延时。因此,本申请实施例中采用的这种动态安排的机制可以在保证延时敏感流量具有可承诺的端到端延时,以及网络中各网络设备的输出流量仍然满足流量特征的前提下,提升时延敏感流量的可利用带宽,减少带宽的资源浪费。
下面对本申请实施例提供的对延时敏感流量的报文的传输方法进行详细说明。
为了便于理解,首先简单介绍一下网络设备的队列机制。
网络设备中,每个端口都设置有用于缓存报文的队列。报文进入到网络设备后,先进入出端口的缓存队列中,再按照队列的调度机制出队,并发送。为了更好的区分流量等级,网络设备通常采用不同级别的队列机制。也就是说,网络设备中的一个端口可以设置多个不同级别的缓存队列。在出队调度中,高优先级的队列会被优先调度。如图4所示,图4为调度优先级示意图。在图4中,队列1、队列2和队列3为最高优先级队列,队列4至队列8为低优先级队列,当队列2以及队列4至队列8都打开的情况下,网络设备优先调度队列2中的报文发送,然后再调度队列4至队列8中的报文。对于可传输延时敏感流量的网络设备,网络设备会预先在时延敏感流量的出端口上预留用于发送时延敏感流量的队列。这些队列通常是拥有最高优先级的队列,如图4中的队列1、队列2和队列3。
根据上述队列机制,网络设备在发送报文的过程中通常包括两个过程,一个是将接收到的报文加入到队列的入队过程,另一个是将报文从队列调度发出的出队过程。
针对这两个过程本申请实施例提出了两种优化报文发送的方案。方案一:主要涉及对报文的入队过程的改进。在该方案中,可基于一个时间窗内可发送的报文数以及一个时间窗内累计发送报文的情况来控制报文的入队。网络设备可基于一个时间窗内可发送的报文数以及 一个时间窗内累计发送报文的情况动态确定报文所属流的入队队列,并对预先配置的流的入队队列进行更新。
方案二:主要涉及对报文的出队过程的改进。在该方案中,可以基于一个时间窗内可发送的报文数以及一个时间窗内累计发送报文的情况来控制报文的出队。网络设备可根据一个时间窗内可发送的报文数来设置出队门控,并通过出队门控对报文在一个时间窗内的出队数量进行控制。
下面分别通过实施例一和实施例二对这两种方案进行详细介绍。
实施例一
本实施例中,网络设备预先为时延敏感流量预留至少3个队列,并基于时间窗配置这些队列的入队时序以及输出时序。入队时序是指各队列成为入队队列的时序。其中,入队队列为报文可进入的队列。基于时间窗的入队时序用于定义每个时间窗对应的入队队列。入队时序在报文入队阶段使用,用于确定每个时间窗内报文的入队队列。输出时序是指在出队调度阶段,各队列的打开时序。基于时间窗的输出时序用于定义每个时间窗内各队列的开关状态。输出时序在报文出队阶段使用,用于控制每个时间窗内各队列的开关。入队时序和输出时序可采用表结构存储,也可采用其它存储结构(如,数组等)存储。如表1和表2所示:
时间窗数值 入队队列 备选入队队列
时间窗KN+1 队列2 队列3
时间窗KN+2 队列3 队列4
时间窗KN+3 队列4 队列5
…… …… ……
时间窗KN+K-1 队列K 队列1
时间窗KN+K 队列1 队列2
表1
时间窗KN+1 队列1打开 队列2关闭 队列3关闭 …… 队列K关闭 其它队列打开
时间窗KN+2 队列1关闭 队列2打开 队列3关闭 …… 队列K关闭 其它队列打开
时间窗KN+3 队列1关闭 队列2关闭 队列3打开 …… 队列K关闭 其它队列打开
……            
时间窗KN+K 队列1关闭 队列2关闭 队列3关闭 …… 队列K打开 其它队列打开
表2
表1存储的为入队时序。表2存储的为输出时序。表1和表2中的N为大于等于0的整数,K为发送时延敏感流量的队列数,取值为大于等于3的整数。
在表1所示的入队时序中,每个时间窗除了有一个入队队列,还有一个备选入队队列。这样,当一个流在一个时间窗的入队队列中的报文数已达到该流在一个时间窗内可发送的报文数,可将该流候选的报文放入这个时间窗的备选入队队列中。其中,可以将上一时间窗的备选入队队列设置为下一时间窗的入队队列,这样,就可将上一时间窗多余的报文放入到下一时间窗的入队队列中。
在表2所示的输出时序中,第M个时间窗的入队队列在第M+1个时间窗为打开状态,在 其它时间窗为关闭状态,M为大于等于1的整数。另外,队列1至队列K为用于发送时延敏感流量的高优先级队列,其它队列为发送其它流量的更低优先级的队列。
表1和表2仅是一种示例,在实际应用中,入队时序中也可不配置备选入队队列,而在上一时间窗的入队队列中的报文数达到一个时间窗内可发送的报文数时,直接放入下一时间窗的入队队列中。
需要说明的是,不同的时延敏感流量可以共享相同的队列,当队列被共享时,队列的入队时序以及输出时序也共享。
以上预留队列以及配置入队时序和输出时序的过程可称为队列资源预留。队列资源预留可以在图3所示的流量资源预留之前的任意时间完成(例如,可以传输系统初始化阶段完成),也可以在需使用到该队列资源的第一次的流量资源预留阶段完成。
本实施例中,网络设备在步骤S304配置的流量资源预留信息中,还配置用于记录流的入队队列的入队队列项,以及用于对该入队队列中的报文进行计数的报文计数项。报文计数项中记录的报文计数可表示该流的入队队列中的报文数。入队队列项中配置的该流初始的入队队列为入队时序中第一个时间窗的入队队列。流的入队队列以及该入队队列的报文数是流状态信息,可以在报文发送过程中根据实时的状态进行更新。入队队列项可根据报文发送过程中确定出的入队队列进行更新。流量资源预留信息可采用表结构存储,也可采用其它存储结构(如,数组等)存储。其结构如表3所示:
流号 报文计数 预留资源信息 入队队列
流1 1 每时间窗一个报文 队列2
流2 1 每时间窗一个报文 队列2
表3
网络设备根据上述队列资源预留和流量资源预留过程中配置的信息进行报文传输。如图5所示,图5为实施例一中网络设备发送报文的方法流程图。
其中,5a-5e为报文的入队过程,5f-5h为报文的出队过程。
5a.网络设备接收来自上游设备的报文。
网络设备接收到报文后,确定该报文的出端口。确定出端口的方法可采用现有技术来实现,这里不再赘述。
5b.网络设备识别该报文所属的流是否为时延敏感流量。
网络设备可解析出报文头内部的源、目的IP、端口、DSCP、协议号等信息,并利用解析出的信息在资源预留过程中记录的流的信息中进行查找,来识别该报文所属的流是否为时延敏感流量。本实施例以该报文所属的流为时延敏感流量为例进行说明。对于非时延敏感流量的报文,网络设备将其放入其它更低优先级的队列中,其入队以及调度过程采用现有技术,本申请实施例中不再赘述。
5c.网络设备在识别出该流为时延敏感流量后,确定该报文的到达时间窗。
其中,到达时间窗为报文到达网络设备时该报文在网络设备中的出端口的时间窗。
在具体实施过程中,网络设备可以先获取接收到报文时的当前时间。网络设备可根据传输系统中的时钟晶振获得当前时间,也可通过报文的内容中包含的时间信息获得当前时间。
网络设备根据5a中确定的出端口的端口号和当前时间进行运算或查表得到出端口当前的时间窗。其中,该计算或查表配置可以在传输系统初始化时配置好。
5d.网络设备根据报文的到达时间窗,队列的入队时序,报文所属流在一个时间窗内可发送的报文数,当前的入队队列中的报文数确定该流的入队队列。
网络设备可从流量资源预留信息中记录的流量特征,获得该流在一个时间窗内可发送的报文数,并可从流量资源预留信息中的报文计数获得当前的入队队列中的报文数。
具体的,网络设备在当前的入队队列的报文数已达到一个时间窗内可发送的报文数的情况下,将到达时间窗的下一时间窗的入队队列确定为该流的入队队列。在当前的入队队列的报文数未达到一个时间窗内可发送的报文数的情况下,将到达时间窗的入队队列确定为该流的入队队列。
上述确定方法是入队时序中未为时间窗设置备选入队队列的处理方式。
入队时序中为时间窗设置有备选入队队列的实施方式如下:
在设置入队时序时,网络设备将上一个时间窗的备选入队队列为下一个时间窗的入队队列,如表1所示。确定流的入队队列的过程具体包括:网络设备在当前的入队队列的报文数已达到一个时间窗内可发送的报文数的情况下,将到达时间窗的备选入队队列确定为该流的入队队列。在当前的入队队列的报文数未达到一个时间窗内可发送的报文数的情况下,将到达时间窗的入队队列确定为该流的入队队列。
确定出流的入队队列后,网络设备可根据确定的入队队列更新网络设备中记录的该流的入队队列。该更新具体为,在确定出的入队队列相对记录的入队队列发生变化的情况下进行更新。
所述网络设备在每次更新流量资源预留信息中的入队队列后,将流量资源预留信息中记录的报文数恢复初始值。该初始值可以设置为零,后续进行累加,累加记录的数值即为当前的入队队列中的报文数。该初始值也可设置为任意一个值,后续进行递减,初始值与递减记录的数值的差值即为当前的入队队列中的报文数。
5e.网络设备将该报文加入到确定出的该流的入队队列中。
网络设备将该报文加入到确定出的该流的入队队列中后,还进一步对将流量资源预留信息中记录的报文数进行累计。这里的累计可以才累加的方式也可采用递减的方式。当报文数的初始值设置为零时,采用累加的方式进行累计。当报文数的初始值设置为任意值时,采用递减的方式进行累计。
5f.网络设备获取出端口的时间窗。
网络设备获取出端口的时间窗的方法可参考步骤5c,这里不再赘述。
5g.网络设备根据输出时序确定当前时间窗处于打开状态的队列。
这里的当前时间窗是步骤5g中获取的时间窗。假设当前时间窗是时间窗2,在表2所示的输出时序中,打开的队列为队列2以及其他更低优先级的队列。
5h.网络设备调度打开状态的队列中的报文进行发送。
在调度过程中,优先调度发送打开队列中的高优先级队列中的报文。按优先级的调度可参考图4,这里不再赘述。
按照步骤5f至步骤5h的调度过程,步骤5a接收到的报文所在的队列将在输出时序中定义的打开该队列的时间窗被打开,然后发送出去。
在上述方案中,队列的入队时序是基于时间窗的,报文的出队也是基于时间窗进行调度。也就是说,确定了流的入队队列,也就确定了加入到该入队队列的报文的输出时间窗。因此,通过上述动态确定流的入队队列的方案,可以将接收到的每个报文都安排到一个特定的时间 窗发出。
上述实施例,通过定义入队时序、输出时序以及记录流状态信息,可以将上一时间窗的报文灵活的调整到下一时间窗的入队队列中,打破了静态配置入队队列对上游设备的发送时间的约束,能够让上游设备在几乎整个时间窗内都可以发送延时敏感流量提升时延敏感流量的可利用带宽,减少带宽的资源浪费。
另外,在上述实施例中,由于第M个时间窗的入队队列在第M+1个时间窗打开发送,按照上述方案中,网络设备在本地的第N个时间窗接收到的报文会在本地的第N+1(或N+2)个时间窗内发出,这样就能保证端到端的时延最大值是((端到端的跳数+1)*(时间窗大小))+路径上的时间窗边界差值之和。因此,本申请实施例可以提升时延敏感流量的可利用带宽的同时,也能保证延时敏感流量具有可承诺的端到端时延。而且,本申请实施例提供的方案也无需要求全网的时间窗对齐,可部署在不支持时间对齐的设备上,扩展了应用性。
下面结合图6对图5所示的步骤5d的具体实现进行详细介绍。
如图6所示,图6为实施例1中的确定入队队列的方法的流程图。该方法包括:
S601.网络设备判断报文的到达时间窗与上一报文的到达时间窗是否是同一个时间窗,如果是同一时间窗,则执行步骤S602,如果不是同一时间窗,则执行步骤S606。
S602,网络设备判断流量资源预留信息中的报文计数是否等于一个时间窗内可发送的报文数量。如果报文计数等于可发送的报文数量,则确定该流的入队队列为到达时间窗的备选入队队列,执行S603-S605;如果报文计数不等于可发送的报文数量,则确定该流的入队队列为到达时间窗的入队队列,执行S604-S605。
本实施例以流量资源预留信息中的报文计数的初始值为零,而且采用每加入一报文,该计数加1的方式进行累计为例。在这种方式下,可以直接将流量资源预留信息中的报文计数同流的流量特征(即该流在一个时间窗内可发送的报文数量)进行比较。
本实施例以入队时序表中配置有备选入队队列为例进行说明。
S603,网络设备将流量资源预留信息中记录的入队队列更新为该报文的到达时间窗在入队时序表中的备选入队队列,并将流量资源预留信息中的报文计数清零。
S604,根据流量资源预留信息中记录的入队队列,将报文加入到该入队队列中。
需要说明的是,本申请实施例中将报文加入到S602中确定出的入队队列有两种实现方式。方式一:先更新流量资源预留信息中记录的入队队列,再根据记录的入队队列进行入队。需要说明的是,对于无需更新的情况(即确定出的入队队列相比记录的入队队列无变化),则直接根据记录的入队队列进行入队。方式二:直接根据确定出入队队列入队。采用直接根据确定出入队队列入队方式,可以在入队前,也可以在入队后执行更新流量资源预留信息中记录的入队队列的操作,两者之间必然的先后关系。
本实施例以方式一为例进行说明。
S605,网络设备将流量资源预留信息中的报文计数加1。
S606,网络设备判断报文的到达时间窗是否为上一报文的到达时间窗的下一个时间窗,如果是上一报文的到达时间窗的下一个时间窗,则执行S607,如果不是上一报文的到达时间窗的下一个时间窗,则执行S608。
如果报文的到达时间窗是上一报文的到达时间窗的下一个时间窗,则说明该报文是其到达时间窗内的第一个报文。由于步骤S606是步骤S601的否定分支,因此如果报文的到达时间窗不是上一报文的到达时间窗的下一个时间窗,那则是上一报文的到达时间窗的下一个时间窗之后的时间窗,这种情况下,该报文也是其到达时间窗内的第一个报文。也就是说,只 要报文的到达时间窗与上一报文的到达时间窗不是同一个时间窗,该报文就是其到达时间窗内的第一个报文。
S607,判断流量资源预留信息中的入队队列是否为该报文的到达时间窗在入队时序中的入队队列,如果是入队时序中的入队队列,则执行S602,如果不是入队时序中的入队队列,则执行S608。
如果流量资源预留信息中的入队队列是报文的到达时间窗在入队时序中的入队队列,则说明该报文的到达时间窗的上一个时间窗使用了备用入队队列。如果流量资源预留信息中的入队队列不是报文的到达时间窗在入队时序中的入队队列,则说明该报文的到达时间窗的上一个时间窗未使用备用入队队列。
S608,网络设备将流量资源预留信息中记录的入队队列更新为到达时间窗对应的入队队列,并将流量资源预留信息中的报文计数清零,然后继续执行S604-S605。
实施例二
本实施例中,网络设备同样需要进行队列资源预留和流量资源预留。
本实施例中的队列资源预留于实施例一中的队列资源预留方式不同。在本实施例中,网络设备为每个时延敏感流量配置一个一一对应的队列。也就是说,本实施例中的队列是基于流来分配的,而不是基于时间窗。因此,在本实施例中也没有基于时间窗的入队时序和输出时序。本实施例中,网络设备分别还为每个流的队列配置由出队门控。该出队门控用于控制在每个时间窗内发送的报文数。因此,本实施例的队列资源预留信息中除了包括流与该流的队列的对应关系,还包括每个流的队列的出队门控。
本实施例中的流量资源预留过程可参考图3所示的实施例,这里不再赘述。
本实施例中的上述队列资源预留可以在流量资源预留时执行,也可以在流量资源预留前执行。
网络设备根据上述队列资源预留和流量资源预留过程中配置的信息进行报文传输。如图7所示,图7为实施例二中网络设备发送报文的方法流程图。
其中,7a-7c为报文的入队过程,7d为报文的出队过程。
7a.网络设备接收来自上游设备的报文。
7b.网络设备识别该报文所属的流是否为时延敏感流量。
该步骤的具体实现可参考图5所示实施例中的步骤5b,这里不再赘述。
本实施例同样以该报文所属的流为时延敏感流量为例进行说明。对于非时延敏感流量的报文,网络设备将其放入其它更低优先级的队列中,其入队以及调度过程采用现有技术,本申请实施例中不再赘述。
7c.网络设备在识别出该流为时延敏感流量后,将该报文加入到与该报文所属的流对应的队列中。
7d.网络设备按照出队门控从该流对应的队列中取出报文进行发送。
具体的,网络设备实时检查该流的队列以及出队门控,在出队门控不为零且队列不为空时,从队列中取出报文进行发送。本申请实施例中,出队门控按时间窗进行更新。出队门控在每个时间窗内的初始值为对应的流在一个时间窗内可发送的报文数,并根据每个时间窗内发送的报文数量递减。
在具体实现时,该出队门控可以采用令牌桶来实现。对出队门控的更新则是更新令牌桶中令牌的数量。
具体的,采用令牌桶来实现时,步骤7d的实现过程如图8所示。
图8为调度报文出队的流程图,该过程包括:
S801,网络设备实时检查各时延敏感流量的队列中的报文和令牌桶中的令牌,确定是否存在任一队列满足队列非空且该队列的令牌桶中存在令牌,若存在,执行步骤S802,取出报文发送。若不存在,执行S803。
S802,网络设备从队列中取出报文进行发送,并根令牌桶中的令牌数减1,返回S801。S803,网络设备调度更低优先级的队列中的报文进行发送。
步骤S803可采用现有技术来实现,例如根据优先级调度或轮询调度,调度非时延敏感流量对应的队列出队发送报文,这里不予以一一赘述。
下面以出队门控为令牌桶为例,对出队门控的更新过程进行详细说明。
如图9所示,图9为令牌桶的更新方法流程图。该方法包括:
S901,网络设备获取传输时延敏感流量的出端口的时间窗。
获取出端口的时间窗的方法可参考图5所示的步骤5a,这里不再赘述。
S902,网络设备判断时间窗是否更新,若时间窗更新,则执行S903,若时间窗未更新,则返回S901。
其中,判断时间窗是否更新是与上一次更新令牌桶时的时间窗比较。
S903,网络设备获取各流的流量资源预留信息中的流量特征(即各流在一个时间窗内可发送的报文数)。
S904,网络设备将各流的队列的令牌桶中的令牌数分别更新为各流各自可在一个时间窗内可发送的报文数。本方案中,由于出队门控是根据流在一个时间窗内可发送的报文数更新的,而且出队门控的值隐含了一个时间窗内已累计发送的报文数,因此,本实施例中也是基于流在一个时间窗内可发送的报文数以及一个时间窗内累计发送报文的情况将报文安排在一个特定时间窗发出的。
由于本方案不对入队进行时间窗上的限制,而是在出队过程中来控制每个时间窗内的发送报文,因此对接收到报文的时间没有要求,也不会对上游设备的发送时间的产生约束,能够让上游设备在几乎整个时间窗内都可以发送延时敏感流量提升时延敏感流量的可利用带宽,减少带宽的资源浪费。
另外,在上述实施例中,由于每个时间窗发送的报文数都能得到保证,这样就能保证端到端的时延具有可承诺的上限。因此,本申请实施例可以提升时延敏感流量的可利用带宽的同时,也能保证延时敏感流量具有可承诺的端到端延。而且,本申请实施例提供的方案也无需要求全网的时间窗对齐,可部署在不支持时间对齐的设备上,扩展了应用性。
下面结合附图对上述图1-图9所示实施例中所涉及的网络设备1000以及网络设备1100。其中网络设备1000应用在上述图2-6所述的实施例中。网络设备1100应用在上述图2-3以及图7-9所述的实施例中,下面一一予以描述。
如图10所示,图10为本申请实施例提供的一种网络设备1000的结构示意图。所述网络设备1000包括:接收模块1002和处理模块1004。
其中,所述接收模块1002,用于接收报文;该接收模块1002具体详细的处理功能或者可以执行的步骤可以参考上述图5所示实施例中的5a中的详细描述。
所述处理模块1004,用于识别识别出所述报文所属的流为已预留资源的时延敏感流量,所述已预留资源中包括所述流在一个时间窗内可发送的报文数,根据所述流在一个时间窗内可发送的报文数以及所述一个时间窗内累计发送报文的情况用于发送所述流的队列中已存在 的报文数将所述报文安排在特定的时间窗发出。该处理模块1004具体详细的处理功能或者可以执行的步骤可以参考上述图5所示实施例中的5b-5h中的详细描述,以及图6中S601-S608中的详细描述。
在一种具体的实施例中,所述网络设备还包括第一存储模块1006;所述第一存储模块1006用于存储有预先设置的用于发送所述流的队列资源预留信息以及流量资源预留信息;所述队列资源预留信息中包括用于发送所述流的队列以及所述队列的入队时序和输出时序,所述入队时序用于定义每个时间窗的入队队列,所述输出时序用于定义每个时间窗内各队列的开关状态;所述流量资源预留信息中记录有所述流当前的入队队列以及用于表示所述当前的入队队列中的报文数的报文计数;所述一个时间窗内累计发送报文的情况为所述所述当前的入队队列中的报文数;
相应的,所述处理模块根据所述流在一个时间窗内可发送的报文数以及所述一个时间窗内已累计发送的报文数将所述报文安排在特定的时间窗发出,具体包括:
所述处理模块确定所述报文的到达时间窗,所述到达时间窗为所述报文到达时所述网络设备时出端口的时间窗;查询当前的入队队列中的报文数;根据所述到达时间窗,所述队列的入队时序,所述流在一个时间窗内可发送的报文数以及当前的入队队列的报文数确定所述流的入队队列;将所述报文加入到确定出的所述流的入队队列中;在所述输出时序中定义的打开所述报文所在队列的时间窗打开所述报文所在队列,将所述报文发出。
该第一存储模块1006具体详细的内容可以参考上述图3所示实施例中的步骤S304的详细描述,以及上述表1-3及其对应文字部分的详细描述。
该处理模块1004具体详细的处理功能或者可以执行的步骤可以参考上述图5所示实施例中的5b-5h中的详细描述,以及图6中S601-S608中的详细描述。
在一种具体的实施例中,所述网络设备进一步包括资第一源预留模块1008,所述第一资源预留模块1008,用于预先为所述流预留资源,在所述预留资源的过程中配置所述流量资源预留信息。
该第一资源预留模块1008具体详细的处理功能或者可以执行的步骤可以参考上述图3所示实施例中的步骤S304的详细描述,以及上述表1-3及其对应文字部分的详细描述。
如图11所示,图11为本申请实施例提供的一种网络设备1100的结构示意图。所述网络设备1100包括:接收模块1102和处理模块1104。
所述接收模块1102,用于接收报文;该接收模块1102具体详细的处理功能或者可以执行的步骤可以参考上述图7所示实施例中的7a中的详细描述。
所述处理模块1104,用于识别识别出所述报文所属的流为已预留资源的时延敏感流量,所述已预留资源中包括所述流在一个时间窗内可发送的报文数,根据所述流在一个时间窗内可发送的报文数以及所述一个时间窗内累计发送报文的情况用于发送所述流的队列中已存在的报文数将所述报文安排在特定的时间窗发出。
该处理模块1104具体详细的处理功能或者可以执行的步骤可以参考上述图7所示实施例中的7b-7d中的详细描述,以及图8和9中的详细描述。
在一种具体的实施例中,所述网络设备进一步包括第二存储模块1106,所述第二存储模块1106,用于存储预先配置的用于发送所述流的队列资源预留信息以及流量资源预留信息,所述队列资源预留信息中包括与所述流一一对应的队列以及为所述队列配置的出队门控,所述出队门控用于控制在每个时间窗内发送的报文数;所述流量资源预留信息中包括所述流在一个时间窗内可发送的报文数;
相应的,所述处理模块1104根据所述流在一个时间窗内可发送的报文数以及所述一个时间窗内累计发送报文的情况将所述报文安排在特定的时间窗发出,具体包括:
所述处理模块1104将所述报文加入到与所述报文所属的流对应的队列中;按照所述出队门控从所述流对应的队列中取出报文进行发送,所述出队门控按时间窗进行更新;所述出队门控在每个时间窗内的初始值为所述队列对应的流在一个时间窗内可发送的报文数,并根据每个时间窗内发送的报文数量递减。
该第二存储模块1106具体详细的内容可以参考上述图3所示实施例中的步骤S304的详细描述。
该处理模块1104具体详细的处理功能或者可以执行的步骤可以参考上述图7所示实施例中的7b-7d中的详细描述,以及图8和9中的详细描述。
在一种具体的实施例中,所述网络设备还包括第二资源预留模块1108;所述第二资源预留模块1108用于,预先为所述流预留资源,在所述预留资源的过程中配置所述流量资源预留信息和队列资源预留信息。
该第二存储模块1108具体详细的处理功能或者可以执行的步骤可以参考上述图3所示的实施例中步骤S304以及图9所示实施例中的详细描述。
应该理解到,以上所描述的装置实施例仅仅是示意性的,例如,所述模块的划分,仅仅为一种逻辑功能划分,实际实现时可以有另外的划分方式,例如多个模块、单元或组件可以结合或者其中的某个模块可以进一步划分成不同的功能模块。例如上述实施例中的网络设备中的第一资源预留模块的功能也可能与所述处理模块合并在一个模块中。此外,需要说明的是,图中显示或描述的模块或设备之间的耦合或通信连接可以是通过一些接口、装置或单元而构成的间接耦合或通信连接,也可以是电性,机械或其它的形式的耦合或连接。
对于其中作为分离部件说明的模块可以是物理上分开的,也可以是物理上在同一个物理部件中。以模块命名的部件可以是硬件单元、也可以是软件模块或者逻辑单元、或者是硬件和软件的结合,该模块既可以位于一个网元内,或者也可以分布到多个网元上。这些都可以根据实际的需要选择其中的部分或者全部单元来实现本实施例方案的目的。
图12为本申请实施例中所涉及的网络设备1200的一种可能的结构示意图。该网络设备1200可以应用在上述图2-9所示的实施例中。在本实施例中,网络设备的功能或操作步骤由一个通用的计算机或服务器中的一个或多个处理器通过执行存储器中的程序代码来实施。在这种实施方式下,该网络设备1200包括:收发器1210、处理器1220、随机存取存储器1240、只读存储器1250以及总线1260。
其中,处理器1220通过总线1260分别耦接收发器1210、随机存取存储器1240以及只读存储器1250。
处理器1220可以是一个通用中央处理器(CPU),微处理器,特定应用集成电路(application-specific integrated circuit,ASIC),或一个或多个用于控制本发明方案程序执行的集成电路。
总线1260可包括一通路,在上述组件之间传送信息。
收发器1210,用于与其他设备或通信网络通信,如以太网,无线接入网(RAN),无线局域网(Wireless Local Area Networks,WLAN)等,在本发明实施例中,收发器1110可用于与网络控制面、源端设备或其他网络设备进行通信。
在一种具体的实施方式中,所述随机存取存储器1240可加载实现图2-6所示实施例中的网络设备的应用程序代码,并由处理器1220来控制执行。
在另一种具体的实施方式中,所述随机存取存储器1240可加载实现图2-3以及图7-9所述的实施例中的网络设备的应用程序代码,并由处理器1220来控制执行。
当需要运行网络设备1200时,通过固化在只读存储器1250中的基本输入输出系统或者嵌入式系统中的bootloader引导系统进行启动,引导网络设备1200进入正常运行状态。在网络设备1200进入正常运行状态后,处理器1220在随机存取存储器1240中运行应用程序和操作系统,使得所述网络设备1200可分别执行图2-6所示实施例中的功能和操作,或者图2-3以及图7-9所示的实施例中的功能和操作。
其中,与网络控制面或者其他网络设或者源端设备的交互由收发器1210在处理器1220的控制下完成,网络设备1200的内部处理由处理器1120完成。
需要说明的是,上述实施方式中,除了上述几种通过处理器执行存储器上的程序代码指令方式等常规方式之外,本实施方式也可以基于物理服务器结合网络功能虚拟化NFV技术实现的虚拟网络设备,所述虚拟网络设备可以为虚拟交换机或者路由器或其他转发设备。本领域技术人员通过阅读本申请即可结合NFV技术在物理服务器上虚拟出具有上述功能的多个网络设备。此处不再赘述。
本发明实施例还提供了一种计算机存储介质,用于储存为上述网络设备所用的计算机软件指令,其包含用于执行上述图2-6所示的实施例中的网络设备的功能所涉及的程序。
本发明实施例还提供了另一种计算机存储介质,用于储存为上述网络设备所用的计算机软件指令,其包含用于执行上述图图2-3以及图7-9所示的实施例中的网络设备的功能所涉及的程序。
尽管在此结合各实施例对本发明进行了描述,然而,在实施所要求保护的本发明过程中,本领域技术人员通过查看所述附图、公开内容、以及所附权利要求书,可理解并实现所述公开实施例的其他变化。在权利要求中,“包括”(comprising)一词不排除其他组成部分或步骤,“一”或“一个”不排除多个的情况。单个处理器或其他单元可以实现权利要求中列举的若干项功能。相互不同的从属权利要求中记载了某些措施,但这并不表示这些措施不能组合起来产生良好的效果。
本领域技术人员应明白,本申请的实施例可提供为方法、装置(设备)、或计算机程序产品。因此,本申请实施例可采用硬件实施例、软件实施例、或软件和硬件相结合的实施例的形式。而且,本申请可采用在一个或多个其中包含有计算机可用程序代码的计算机可用存储介质(包括但不限于磁盘存储器、CD-ROM、光学存储器等)上实施的计算机程序产品的形式。计算机程序存储/分布在合适的介质中,与其它硬件一起提供或作为硬件的一部分,也可以采用其他分布形式,如通过Internet或其它有线或无线电信系统。
本发明是参照本发明实施例的方法、装置(设备)和计算机程序产品的流程图和/或方框图来描述的。应理解可由计算机程序指令实现流程图和/或方框图中的每一流程和/或方框、以及流程图和/或方框图中的流程和/或方框的结合。可提供这些计算机程序指令到通用计算机、专用计算机、嵌入式处理机或其他可编程数据处理设备的处理器以产生一个机器,使得通过计算机或其他可编程数据处理设备的处理器执行的指令产生用于实现在流程图一个流程或多个流程和/或方框图一个方框或多个方框中指定的功能的装置。
这些计算机程序指令也可存储在能引导计算机或其他可编程数据处理设备以特定方式工作的计算机可读存储器中,使得存储在该计算机可读存储器中的指令产生包括指令装置的制造品,该指令装置实现在流程图一个流程或多个流程和/或方框图一个方框或多个方框中指定的功能。这些计算机程序指令也可装载到计算机或其他可编程数据处理设备上,使得在计算 机或其他可编程设备上执行一系列操作步骤以产生计算机实现的处理,从而在计算机或其他可编程设备上执行的指令提供用于实现在流程图一个流程或多个流程和/或方框图一个方框或多个方框中指定的功能的步骤。
尽管结合具体特征及其实施例对本发明进行了描述,显而易见的,在不脱离本发明的精神和范围的情况下,可对其进行各种修改和组合。相应地,本说明书和附图仅仅是所附权利要求所界定的本发明的示例性说明,且视为已覆盖本发明范围内的任意和所有修改、变化、组合或等同物。显然,本领域的技术人员可以对本发明进行各种改动和变型而不脱离本发明的精神和范围。这样,倘若本发明的这些修改和变型属于本发明权利要求及其等同技术的范围之内,则本发明也意图包含这些改动和变型在内。

Claims (24)

  1. 一种报文发送的方法,其特征在于,所述方法包括:
    网络设备接收报文;
    所述网络设备识别出所述报文所属的流为已预留资源的时延敏感流量,所述已预留资源中包括所述流在一个时间窗内可发送的报文数;
    所述网络设备根据所述流在一个时间窗内可发送的报文数以及所述一个时间窗内累计发送报文的情况用于发送所述流的队列中已存在的报文数将所述报文安排在特定的时间窗发出。
  2. 如权利要求1所述的方法,其特征在于,所述网络设备中预先配置有用于发送所述流的队列资源预留信息以及流量资源预留信息;所述队列资源预留信息中包括用于发送所述流的队列以及所述队列的入队时序和输出时序,所述入队时序用于定义每个时间窗的入队队列,所述输出时序用于定义每个时间窗内各队列的开关状态;所述流量资源预留信息中记录有所述流当前的入队队列以及用于表示所述当前的入队队列中的报文数的报文计数;所述一个时间窗内累计发送报文的情况为所述所述当前的入队队列中的报文数;
    所述网络设备根据所述流在一个时间窗内可发送的报文数以及所述一个时间窗内已累计发送的报文数将所述报文安排在特定的时间窗发出,包括:
    所述网络设备确定所述报文的到达时间窗,所述到达时间窗为所述报文到达时所述网络设备时出端口的时间窗;
    所述网络设备查询当前的入队队列中的报文数;
    所述网络设备根据所述到达时间窗,所述队列的入队时序,所述流在一个时间窗内可发送的报文数以及当前的入队队列的报文数确定所述流的入队队列;
    所述网络设备将所述报文加入到确定出的所述流的入队队列中;
    所述网络设备在所述输出时序中定义的打开所述报文所在队列的时间窗打开所述报文所在队列,将所述报文发出。
  3. 如权利要求2所述的方法,其特征在于,所述网络设备根据所述到达时间窗,所述队列的入队时序,所述流在一个时间窗内可发送的报文数以及当前的入队队列的报文数确定所述流的入队队列,包括:
    所述网络设备在所述当前的入队队列的报文数已达到一个时间窗内可发送的报文数的情况下,将所述到达时间窗的下一时间窗的入队队列确定为所述流的入队队列;在所述当前的入队队列的报文数未达到一个时间窗内可发送的报文数的情况下,将所述到达时间窗的入队队列确定为所述流的入队队列。
  4. 如权利要求2任一项所述的方法,其特征在于,所述入队时序中,一个时间窗还有一个备选入队队列,其中,上一个时间窗的备选入队队列为下一个时间窗的入队队列;
    所述网络设备根据所述到达时间窗,所述队列的入队时序,所述流在一个时间窗内可发送的报文数以及当前的入队队列的报文数确定所述流的入队队列,包括:
    所述网络设备在所述当前的入队队列的报文数已达到一个时间窗内可发送的报文数的情况下,将所述到达时间窗的备选入队队列确定为所述流的入队队列;在所述当前的入队队列的报文数未达到一个时间窗内可发送的报文数的情况下,将所述到达时间窗的入队队列确定为所述流的入队队列。
  5. 如权利要求2-4任一项所述的方法,其特征在于,所述输出时序中,第M个时间窗 的入队队列在第M+1个时间窗为打开状态,在其它时间窗为关闭状态,M为大于等于1的整数。
  6. 如权利要求3-5任一项所述的方法,其特征在于,所述方法还包括:
    在确定出所述流的入队队列后,根据确定的入队队列更新所述流量资源预留信息中记录的入队队列;所述网络设备在每次更新所述记录的入队队列时,将所述流量资源预留信息中记录的报文数恢复初始值,并在每次向所述更新后的入队队列加入报文的时候,对所述报文数进行累计。
  7. 如权利要求1-6任一项所述的方法,其特征在于,所述方法还包括:
    所述网络设备预先为所述流预留资源,在所述预留资源的过程中配置所述流量资源预留信息。
  8. 如权利要求1所述的方法,其特征在于,所述网络设备中预先配置有用于发送所述流的队列资源预留信息以及流量资源预留信息,所述队列资源预留信息中包括与所述流一一对应的队列以及为所述队列配置的出队门控,所述出队门控用于控制在每个时间窗内发送的报文数;所述流量资源预留信息中包括所述流在一个时间窗内可发送的报文数;
    所述网络设备根据所述流在一个时间窗内可发送的报文数以及所述一个时间窗内累计发送报文的情况将所述报文安排在特定的时间窗发出,包括:
    所述网络设备将所述报文加入到与所述报文所属的流对应的队列中;
    所述网络设备按照所述出队门控从所述流对应的队列中取出报文进行发送,所述出队门控按时间窗进行更新;所述出队门控在每个时间窗内的初始值为所述队列对应的流在一个时间窗内可发送的报文数,并根据每个时间窗内发送的报文数量递减。
  9. 如权利要求8所述的方法,其特征在于,所述方法还包括:所述网络设备监控时间窗的更新,在每当时间窗更新时,从所述流量资源预留信息中获取所述流在一个时间窗内可发送的报文数,按照所述流在一个时间窗内可发送的报文数更新所述出队门控。
  10. 如权利要求9所述的方法,其特征在于,所述出队门控为令牌桶;
    所述按照所述流在一个时间窗内可发送的报文数更新所述出队门控包括:将所述令牌桶中的令牌数量更新为所述流在一个时间窗内可发送的报文数。
  11. 如权利要求10所述的方法,其特征在于,按照所述出队门控从所述流对应的队列中取出报文进行发送,包括:所述网络设备实时检查所述流对应的队列中的报文和所述令牌桶中的令牌,在所述流对应的队列中存在报文且令牌桶中存在令牌的情况下,取出报文发送,直至令牌桶中令牌为空或所述流对应的队列中的报文为空。
  12. 如权利要求8-11任一项所述的方法,其特征在于,所述方法还包括:
    所述网络设备预先为所述流预留资源,在所述预留资源的过程中配置所述流量资源预留信息和队列资源预留信息。
  13. 一种网络设备,其特征在于,所述网络设备包括接收模块和处理模块;
    所述接收模块,用于接收报文;
    所述处理模块,用于识别识别出所述报文所属的流为已预留资源的时延敏感流量,所述已预留资源中包括所述流在一个时间窗内可发送的报文数,根据所述流在一个时间窗内可发送的报文数以及所述一个时间窗内累计发送报文的情况用于发送所述流的队列中已存在的报文数将所述报文安排在特定的时间窗发出。
  14. 如权利要求13所述的网络设备,其特征在于,所述网络设备还包括第一存储模块;
    所述第一存储模块用于存储有预先设置的用于发送所述流的队列资源预留信息以及流量 资源预留信息;所述队列资源预留信息中包括用于发送所述流的队列以及所述队列的入队时序和输出时序,所述入队时序用于定义每个时间窗的入队队列,所述输出时序用于定义每个时间窗内各队列的开关状态;所述流量资源预留信息中记录有所述流当前的入队队列以及用于表示所述当前的入队队列中的报文数的报文计数;所述一个时间窗内累计发送报文的情况为所述所述当前的入队队列中的报文数;
    所述处理模块根据所述流在一个时间窗内可发送的报文数以及所述一个时间窗内已累计发送的报文数将所述报文安排在特定的时间窗发出,包括:
    所述处理模块确定所述报文的到达时间窗,所述到达时间窗为所述报文到达时所述网络设备时出端口的时间窗;
    所述处理模块查询当前的入队队列中的报文数;
    所述处理模块根据所述到达时间窗,所述队列的入队时序,所述流在一个时间窗内可发送的报文数以及当前的入队队列的报文数确定所述流的入队队列;
    所述处理模块将所述报文加入到确定出的所述流的入队队列中;
    所述处理模块在所述输出时序中定义的打开所述报文所在队列的时间窗打开所述报文所在队列,将所述报文发出。
  15. 如权利要求14所述的网络设备,其特征在于,
    所述处理模块根据所述到达时间窗,所述队列的入队时序,所述流在一个时间窗内可发送的报文数以及当前的入队队列的报文数确定所述流的入队队列,包括:
    所述处理在所述当前的入队队列的报文数已达到一个时间窗内可发送的报文数的情况下,将所述到达时间窗的下一时间窗的入队队列确定为所述流的入队队列;在所述当前的入队队列的报文数未达到一个时间窗内可发送的报文数的情况下,将所述到达时间窗的入队队列确定为所述流的入队队列。
  16. 如权利要求14所述的网络设备,其特征在于,所述入队时序中,一个时间窗还有一个备选入队队列,其中,上一个时间窗的备选入队队列为下一个时间窗的入队队列;
    所述处理模块根据所述到达时间窗,所述队列的入队时序,所述流在一个时间窗内可发送的报文数以及当前的入队队列的报文数确定所述流的入队队列,包括:
    所述处理模块在所述当前的入队队列的报文数已达到一个时间窗内可发送的报文数的情况下,将所述到达时间窗的备选入队队列确定为所述流的入队队列;在所述当前的入队队列的报文数未达到一个时间窗内可发送的报文数的情况下,将所述到达时间窗的入队队列确定为所述流的入队队列。
  17. 如权利要求14-16任一项所述的网络设备,其特征在于:所述输出时序中,第M个时间窗的入队队列在第M+1个时间窗为打开状态,在其它时间窗为关闭状态,M为大于等于1的整数。
  18. 如权利要求15-17任一项所述的网络设备,其特征在于,所述处理模块在确定出所述流的入队队列后,根据确定的入队队列更新所述流量资源预留信息中记录的入队队列;所述处理模块在每次更新所述记录的入队队列时,将所述流量资源预留信息中记录的报文数恢复初始值,并在每次向所述更新后的入队队列加入报文的时候,对所述报文数进行累计。
  19. 如权利要求13-18任一项所述的网络设备,其特征在于,所述网络设备还包括第一资源预留模块,所述第一资源预留模块,用于预先为所述流预留资源,在所述预留资源的过程中配置所述流量资源预留信息。
  20. 如权利要求13所述的网络设备,其特征在于,所述网络设备还包括第二存储模块, 所述第二存储模块,用于存储预先配置的用于发送所述流的队列资源预留信息以及流量资源预留信息,所述队列资源预留信息中包括与所述流一一对应的队列以及为所述队列配置的出队门控,所述出队门控用于控制在每个时间窗内发送的报文数;所述流量资源预留信息中包括所述流在一个时间窗内可发送的报文数;
    所述处理模块根据所述流在一个时间窗内可发送的报文数以及所述一个时间窗内累计发送报文的情况将所述报文安排在特定的时间窗发出,包括:
    所述处理模块将所述报文加入到与所述报文所属的流对应的队列中;
    所述处理模块按照所述出队门控从所述流对应的队列中取出报文进行发送,所述出队门控按时间窗进行更新;所述出队门控在每个时间窗内的初始值为所述队列对应的流在一个时间窗内可发送的报文数,并根据每个时间窗内发送的报文数量递减。
  21. 如权利要求20所述的网络设备,其特征在于,所述处理模块进一步用于,监控时间窗的更新,在每当时间窗更新时,从所述流量资源预留信息中获取所述流在一个时间窗内可发送的报文数,按照所述流在一个时间窗内可发送的报文数更新所述出队门控。
  22. 如权利要求21所述的网络设备,其特征在于:
    所述出队门控为令牌桶;
    所述按照所述流在一个时间窗内可发送的报文数更新所述出队门控包括:将所述令牌桶中的令牌数量更新为所述流在一个时间窗内可发送的报文数。
  23. 如权利要求22所述的方法,其特征在于,所述处理模块按照所述出队门控从所述流对应的队列中取出报文进行发送,包括:所述处理模块实时检查所述流对应的队列中的报文和所述令牌桶中的令牌,在所述流对应的队列中存在报文且令牌桶中存在令牌的情况下,取出报文发送,直至令牌桶中令牌为空或所述流对应的队列中的报文为空。
  24. 如权利要求20-23任一项所述的网络设备,其特征在于,所述网络设备还包括第二资源预留模块;所述第二资源预留模块用于,预先为所述流预留资源,在所述预留资源的过程中配置所述流量资源预留信息和队列资源预留信息。
PCT/CN2017/120430 2017-12-31 2017-12-31 一种发送报文的方法、设备和系统 WO2019127597A1 (zh)

Priority Applications (5)

Application Number Priority Date Filing Date Title
PCT/CN2017/120430 WO2019127597A1 (zh) 2017-12-31 2017-12-31 一种发送报文的方法、设备和系统
CN202211581634.1A CN116016371A (zh) 2017-12-31 2017-12-31 一种发送报文的方法、设备和系统
EP17936799.0A EP3720069A4 (en) 2017-12-31 2017-12-31 METHOD, DEVICE, AND SYSTEM FOR SENDING A MESSAGE
CN201780097888.7A CN111512602B (zh) 2017-12-31 2017-12-31 一种发送报文的方法、设备和系统
US16/916,580 US20200336435A1 (en) 2017-12-31 2020-06-30 Packet Sending Method, Device, and System

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2017/120430 WO2019127597A1 (zh) 2017-12-31 2017-12-31 一种发送报文的方法、设备和系统

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US16/916,580 Continuation US20200336435A1 (en) 2017-12-31 2020-06-30 Packet Sending Method, Device, and System

Publications (1)

Publication Number Publication Date
WO2019127597A1 true WO2019127597A1 (zh) 2019-07-04

Family ID=67062935

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2017/120430 WO2019127597A1 (zh) 2017-12-31 2017-12-31 Method, device, and system for sending packets

Country Status (4)

Country Link
US (1) US20200336435A1 (zh)
EP (1) EP3720069A4 (zh)
CN (2) CN116016371A (zh)
WO (1) WO2019127597A1 (zh)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114024916A (zh) * 2021-12-17 2022-02-08 湖北亿咖通科技有限公司 Data transmission method and apparatus, computer-readable storage medium, and processor
EP4184887A4 (en) * 2020-07-31 2023-12-13 Huawei Technologies Co., Ltd. METHOD FOR SENDING DATA PACKET, AND NETWORK DEVICE
EP4181480A4 (en) * 2020-07-31 2023-12-20 Huawei Technologies Co., Ltd. DATA PACKET SCHEDULING METHOD AND RELATED APPARATUS
EP4184872A4 (en) * 2020-07-31 2024-01-10 Huawei Technologies Co., Ltd. METHOD, APPARATUS AND DEVICE FOR TRANSMITTING MESSAGES, AND READABLE STORAGE MEDIUM

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP3920491B1 (en) * 2019-02-01 2023-11-01 Guangdong Oppo Mobile Telecommunications Corp., Ltd. Service processing method, device and computer readable storage medium
CN112631745B (zh) * 2020-12-02 2024-06-18 微梦创科网络科技(中国)有限公司 Time-window-based scheduled task execution method, apparatus, and system
CN112671832A (zh) * 2020-12-03 2021-04-16 中国科学院计算技术研究所 Forwarding task scheduling method and system for guaranteeing hierarchical latency in a virtual switch
WO2022237860A1 (zh) * 2021-05-13 2022-11-17 华为云计算技术有限公司 Packet processing method, resource allocation method, and related device
EP4393157A1 (en) * 2021-09-30 2024-07-03 MediaTek Singapore Pte. Ltd. Cross-layer optimization in xr-aware ran to limit temporal error propagation
CN114244852B (zh) * 2021-11-09 2024-01-30 北京罗克维尔斯科技有限公司 Data transmission method, apparatus, device, and storage medium
CN114900476B (zh) * 2022-05-09 2023-06-30 中国联合网络通信集团有限公司 Data transmission method and apparatus, network device, and storage medium
CN115695317B (zh) * 2022-12-23 2023-04-07 海马云(天津)信息技术有限公司 Method and apparatus for enqueuing and dequeuing access requests, electronic device, and storage medium

Family Cites Families (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7426206B1 (en) * 1998-06-11 2008-09-16 Synchrodyne Networks, Inc. Switching system and methodology having scheduled connection on input and output ports responsive to common time reference
US8045563B2 (en) * 2007-12-27 2011-10-25 Cellco Partnership Dynamically adjusted credit based round robin scheduler
CN101730242B (zh) * 2009-11-06 2012-05-30 中国科学院声学研究所 Uplink data service bandwidth request method
CN102264116B (zh) * 2011-09-01 2014-11-05 哈尔滨工程大学 Node network-access method based on a distributed TDMA wireless ad hoc network
JP6127857B2 (ja) * 2013-09-17 2017-05-17 富士通株式会社 Traffic control device
US9705700B2 (en) * 2014-10-21 2017-07-11 Cisco Technology, Inc. Sparse graph coding scheduling for deterministic Ethernet
CN105610727B (zh) * 2015-11-06 2019-02-12 瑞斯康达科技发展股份有限公司 Network data transmission method and apparatus
EP3301986A1 (en) * 2016-09-30 2018-04-04 Panasonic Intellectual Property Corporation of America Improved uplink resource allocation among different ofdm numerology schemes
US10681615B2 (en) * 2017-06-23 2020-06-09 Schlage Lock Company Llc Predictive rate limiting for reliable bluetooth low energy connections
US11038819B2 (en) * 2017-06-29 2021-06-15 Intel Corporation Technologies for extracting extrinsic entropy for workload distribution

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1725732A (zh) * 2005-06-08 2006-01-25 杭州华为三康技术有限公司 Packet rate-limiting method
CN101060489A (zh) * 2007-05-16 2007-10-24 杭州华三通信技术有限公司 Packet forwarding method and apparatus
WO2012171461A1 (zh) * 2011-06-13 2012-12-20 中兴通讯股份有限公司 Packet forwarding method and apparatus
CN104168212A (zh) * 2014-08-08 2014-11-26 北京华为数字技术有限公司 Method and apparatus for sending packets

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP3720069A4 *

Also Published As

Publication number Publication date
CN111512602B (zh) 2022-12-06
CN116016371A (zh) 2023-04-25
EP3720069A4 (en) 2020-12-02
EP3720069A1 (en) 2020-10-07
US20200336435A1 (en) 2020-10-22
CN111512602A (zh) 2020-08-07

Similar Documents

Publication Publication Date Title
WO2019127597A1 (zh) Method, device, and system for sending packets
Kalør et al. Network slicing in industry 4.0 applications: Abstraction methods and end-to-end analysis
Gavriluţ et al. AVB-aware routing and scheduling of time-triggered traffic for TSN
Tămaş–Selicean et al. Design optimization of TTEthernet-based distributed real-time systems
Hong et al. Finishing flows quickly with preemptive scheduling
WO2019084970A1 (zh) Packet forwarding method, forwarding device, and network device
JP5941309B2 (ja) Centralized traffic shaping for data networks
WO2019157978A1 (zh) Packet scheduling method, first network device, and computer-readable storage medium
US20090043934A1 (en) Method of and a System for Controlling Access to a Shared Resource
EP3296884A1 (en) Distributed processing in a network
US20130208593A1 (en) Method and apparatus providing flow control using on-off signals in high delay networks
WO2021063191A1 (zh) Packet forwarding method, device, and system
JP7513113B2 (ja) Control device, resource allocation method, and program
Chaine et al. Egress-TT configurations for TSN networks
US10044632B2 (en) Systems and methods for adaptive credit-based flow
EP4020900B1 (en) Methods, systems, and apparatuses for priority-based time partitioning in time-triggered ethernet networks
US11677673B1 (en) Low latency flow control in data centers
Qian et al. A linux real-time packet scheduler for reliable static sdn routing
Chaine et al. Comparative study of Ethernet technologies for next-generation satellite on-board networks
WO2018157819A1 (zh) Multi-subflow network transmission method and apparatus
US20090254676A1 (en) Method for transferring data frame end-to-end using virtual synchronization on local area network and network devices applying the same
Tang et al. Online schedule of sporadic life-critical traffic in ttethernet
Elshuber et al. Dependable and predictable time-triggered Ethernet networks with COTS components
WO2021244404A1 (zh) Data processing method and device
WO2023236832A1 (zh) Data scheduling and processing method, device, apparatus, and storage medium

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17936799

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

ENP Entry into the national phase

Ref document number: 2017936799

Country of ref document: EP

Effective date: 20200703