WO2021180073A1 - Packet transmission method and device, network node, and storage medium - Google Patents


Info

Publication number
WO2021180073A1
WO2021180073A1 (PCT/CN2021/079756)
Authority
WO
WIPO (PCT)
Prior art keywords
rate
message
specific queue
identifier
network node
Prior art date
Application number
PCT/CN2021/079756
Other languages
French (fr)
Chinese (zh)
Inventor
杜宗鹏 (Du Zongpeng)
耿亮 (Geng Liang)
Original Assignee
China Mobile Communication Co., Ltd. Research Institute (中国移动通信有限公司研究院)
China Mobile Communications Group Co., Ltd. (中国移动通信集团有限公司)
Priority date
Filing date
Publication date
Application filed by China Mobile Communication Co., Ltd. Research Institute and China Mobile Communications Group Co., Ltd.
Publication of WO2021180073A1 publication Critical patent/WO2021180073A1/en


Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04W: WIRELESS COMMUNICATION NETWORKS
    • H04W 28/00: Network traffic management; Network resource management
    • H04W 28/16: Central resource management; Negotiation of resources or communication parameters, e.g. negotiating bandwidth or QoS [Quality of Service]
    • H04W 28/18: Negotiating wireless communication parameters
    • H04W 28/22: Negotiating communication rate
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 47/00: Traffic control in data switching networks
    • H04L 47/10: Flow control; Congestion control
    • H04L 47/22: Traffic shaping
    • H04L 47/50: Queue scheduling
    • H04W 28/02: Traffic management, e.g. flow control or congestion control
    • H04W 28/0231: Traffic management based on communication conditions
    • H04W 28/0236: Traffic management based on communication conditions, e.g. radio quality, interference, losses or delay

Definitions

  • This application relates to the field of Internet Protocol (IP) networks, and in particular to a packet transmission method and device, a network node, and a storage medium.
  • embodiments of the present application provide a message transmission method, device, network node, and storage medium.
  • the embodiment of the present application provides a message transmission method, which is applied to a first network node, and includes:
  • the first identifier represents the delay-sensitive requirement of the first packet;
  • the specific queue is at least used for buffering the delay-sensitive messages to be sent;
  • the specific queue is shaped, and the first packet is sent;
  • the first network node is a network forwarding node.
  • the first identifier indicates that the first message has an exclusive sending priority.
  • the obtaining the first identifier from the first packet includes:
  • obtaining a SID list from the first packet; the SID list includes multiple segment identifiers (SIDs);
  • the first packet is set in the specific queue.
  • the obtaining the first identifier from the first message includes:
  • the first packet is set in the specific queue.
  • the using the incoming rate of packets in the specific queue to determine the outgoing rate of packets in the specific queue includes:
  • the ingress rate is combined with the queue depth to determine the outbound rate of packets in the specific queue.
  • the use of the incoming rate in combination with the queue depth to determine the outgoing rate of packets in the specific queue includes:
  • the out rate is a first rate; the first rate is less than the in rate, and the difference between the in rate and the first rate is less than the first value;
  • the outgoing rate is the third rate; the third rate is the preset rate or the last recorded out rate.
  • the embodiment of the present application also provides a message transmission method, including:
  • the second network node obtains the first message; and determines that the service corresponding to the first message is a delay-sensitive service;
  • the second network node sets a first identifier for the first message; the first identifier represents the delay-sensitive demand of the first message;
  • the second network node sets the first message with the first identifier in a specific queue for shaping, and then sends the first message with the first identifier;
  • the first network node receives the first message, and obtains the first identifier from the received first message;
  • when the first network node obtains the first identifier, it sets the received first packet in a specific queue; the specific queue is at least used to buffer the delay-sensitive packets to be sent;
  • the first network node uses the incoming rate of packets in the specific queue to determine the outgoing rate of packets in the specific queue; and, based on the determined outgoing rate, the specific queue is shaped and the received first packet is sent; wherein,
  • the second network node is a network edge node; the first network node is a network forwarding node.
  • An embodiment of the present application also provides a message transmission device, which is set on a first network node, and includes:
  • the receiving unit is configured to receive the first message
  • the first acquiring unit is configured to acquire a first identifier from a first message; the first identifier represents the delay-sensitive requirement of the first message;
  • the first processing unit is configured to: set the first packet in a specific queue when the first identifier is obtained, the specific queue being at least used for buffering the delay-sensitive packets to be sent; use the incoming rate of packets in the specific queue to determine the outgoing rate of packets in the specific queue; and, based on the determined outgoing rate, shape the specific queue to send out the first packet; wherein,
  • the first network node is a network forwarding node.
  • the embodiment of the present application also provides a network node, including: a first communication interface and a first processor; wherein,
  • the first communication interface is configured to receive a first message
  • the first processor is configured to: obtain a first identifier from the first packet, the first identifier representing the delay-sensitive requirement of the first packet; set the first packet in a specific queue when the first identifier is obtained, the specific queue being at least used for buffering delay-sensitive packets to be sent; use the incoming rate of packets in the specific queue to determine their outgoing rate; and, based on the determined outgoing rate, shape the specific queue and send the first packet through the first communication interface; wherein,
  • the network node is a network forwarding node.
  • An embodiment of the present application also provides a network node, including: a first processor and a first memory configured to store a computer program that can run on the processor,
  • the first processor is configured to execute the steps of any method on the side of the first network node when running the computer program.
  • the embodiment of the present application also provides a storage medium on which a computer program is stored, and when the computer program is executed by a processor, the steps of any method on the first network node side described above are implemented.
  • In the solutions provided by the embodiments of the present application, the first network node receives the first packet and obtains the first identifier from it; the first identifier represents the delay-sensitive requirement of the first packet. When the first identifier is obtained, the first packet is set in a specific queue, which is at least used for buffering the delay-sensitive packets to be sent. The incoming rate of packets in the specific queue is used to determine their outgoing rate; based on the determined outgoing rate, the specific queue is shaped and the first packet is sent. The first network node is a network forwarding node, on which individual low-latency service flows are not recognized.
  • Even so, the network forwarding node can ensure that low-latency traffic is sent in an orderly manner on demand; its processing minimizes the packet loss and buffering delay caused by micro-bursts of packets in the network and can meet the delay requirements of the service.
  • Figure 1 is a schematic diagram of a deterministic network architecture
  • Figure 2 is a schematic diagram of a micro burst of an IP device
  • FIG. 3 is a schematic diagram of an IP network architecture
  • FIG. 4 is a schematic flowchart of a method for message transmission on the side of a second network node in an embodiment of the present application
  • Figures 5a and 5b are schematic diagrams of adjacent SID formats according to an embodiment of this application.
  • Fig. 6 is a schematic diagram of a prefix SID format according to an embodiment of the application.
  • FIG. 7 is a schematic flowchart of a method for message transmission on the side of a first network node in an application embodiment
  • FIG. 8 is a flowchart of a method for message transmission according to an application embodiment
  • FIG. 9 is a schematic diagram of the corresponding relationship between the inbound interface and the outbound interface of a network forwarding node according to an application embodiment of this application;
  • FIG. 10 is a schematic diagram of the relationship between a physical interface and a virtual interface according to an embodiment of the application.
  • FIG. 11 is a schematic structural diagram of a message transmission device according to an embodiment of the application.
  • FIG. 12 is a schematic structural diagram of another message transmission device according to an embodiment of the application.
  • FIG. 13 is a schematic diagram of a network node structure according to an embodiment of this application.
  • FIG. 14 is a schematic diagram of another network node structure according to an embodiment of the application.
  • Time Sensitive Networking (TSN) evolved from Ethernet Audio/Video Bridging (AVB) used for audio and video networks. It is a protocol set defined by the Institute of Electrical and Electronics Engineers (IEEE), mainly used for small dedicated networks (Ethernet with a delay of 10 to 100 µs), such as in-vehicle networks (which can be understood as a network formed by multiple devices installed in a vehicle) or industrial control networks; it has also been defined for larger networks, for example fronthaul networks. Its main ideas are high priority and packet preemption.
  • the scheduling mechanism of TSN network mainly includes the following aspects:
  • Credit-based shaper (CBS): a scheduling mechanism for a queue. The queue gains credit at an agreed rate, and data packets may be sent only when the credit value is greater than or equal to 0; while a packet is being sent, the credit value is reduced. The effect of this shaper is to shape the packets in the queue so that they are sent one by one at the agreed rate (also called pacing). After shaping, this kind of traffic generally coexists with best-effort (BE) traffic on the sending port, and it requires a higher priority to ensure that it is not interfered with by BE traffic, so as to maintain the effect of the earlier queue shaping;
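The credit mechanism described above can be illustrated with a minimal Python model. This is a sketch, not the IEEE 802.1Qav state machine: the fixed-step timing, the class name, and the slope values are assumptions for illustration.

```python
from collections import deque

class CreditBasedShaper:
    """Minimal credit-based shaper (CBS) model, in the spirit of the
    mechanism above: credit accrues at idle_slope while packets wait,
    a packet may be sent only when credit >= 0, and sending drains
    credit at send_slope (a negative value)."""

    def __init__(self, idle_slope, send_slope, link_rate):
        self.idle_slope = idle_slope  # credit gained per second while waiting (bits/s)
        self.send_slope = send_slope  # credit change per second while sending (negative)
        self.link_rate = link_rate    # port transmission rate (bits/s)
        self.credit = 0.0
        self.queue = deque()          # pending packet sizes, in bits

    def enqueue(self, size_bits):
        self.queue.append(size_bits)

    def step(self, dt=1e-6):
        """Advance the model by dt seconds; return the size sent, or None."""
        if self.queue and self.credit >= 0:
            size = self.queue.popleft()
            tx_time = size / self.link_rate
            self.credit += self.send_slope * tx_time  # credit drained by the send
            return size
        if self.queue:
            self.credit += self.idle_slope * dt       # blocked: credit builds up
        elif self.credit > 0:
            self.credit = 0.0                         # queue empty: credit resets
        return None
```

A packet that arrives while credit is negative must wait until enough idle steps have restored the credit, which is exactly the pacing effect described above.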
  • Time Sensitive Queues (TSQ): a gating mechanism in which all queues on the device use a cyclic scheduling mechanism (a gating table with a granularity of nanoseconds controls the opening and closing of each queue). Synchronization between devices relies on the Precision Time Protocol (PTP); through the coordination of each device on the path, the gates can be opened and closed accurately, supporting the fastest forwarding of TSN traffic;
  • Transmission preemption: a packet preemption strategy that allows high-priority packets to interrupt low-priority packets that are being sent;
  • Ingress scheduling and Cyclic Queuing and Forwarding (CQF): packets arrive at the ingress within the correct time window and are then guaranteed to be sent from the egress within a certain time window, using several periodically cycling queues as the sending queues;
  • Urgency-based scheduling, also known as Asynchronous Traffic Shaping (ATS): this mechanism currently supports two shaping methods, namely Length Rate Quotient (LRQ) and Token Bucket Emulation (TBE). The scheduling effects of these two methods are similar to CBS, and both are applicable to the pacing of queue traffic;
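Of the two ATS methods, Token Bucket Emulation is the more familiar shape. The following is a generic token-bucket pacer, not the exact TSN algorithm; the rate and burst parameters are illustrative.

```python
class TokenBucket:
    """Generic token-bucket pacer: a packet may pass only if enough
    tokens (counted in bytes here) have accumulated at the configured
    rate; the bucket depth bounds the permissible burst."""

    def __init__(self, rate_bytes_per_s, burst_bytes):
        self.rate = rate_bytes_per_s
        self.burst = burst_bytes
        self.tokens = burst_bytes  # start with a full bucket
        self.last = 0.0            # timestamp of the last update, in seconds

    def allow(self, size_bytes, now):
        # Refill tokens for the elapsed time, capped at the bucket depth.
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= size_bytes:
            self.tokens -= size_bytes
            return True
        return False
```

After a full-burst send the bucket is empty, and further packets are held back until the elapsed time has refilled enough tokens, yielding the CBS-like pacing effect noted above.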
  • Packet replication and elimination: multiple transmission with selective reception, also called Frame Replication and Elimination (FRER).
  • Deterministic demand is by no means limited to the local Layer 2 network. More than 20 authors from different organizations have jointly written a use-case draft that elaborates the needs of nine major industries, including professional audio and video (pro audio & video), electrical utilities, building automation systems, wireless for industrial, cellular radio, industrial machine-to-machine communication (industrial M2M), mining, private blockchain, and network slicing. At the same time, the scale of demand scenarios may be very large, including a national network, a large number of devices, and ultra-long distances. Based on this, Deterministic Networking (DetNet) was created. DetNet is a deterministic network architecture defined by the Internet Engineering Task Force (IETF); it focuses on determinism at Layer 3 and extends the capabilities of TSN from Layer 2 to Layer 3, as shown in Figure 1.
  • TSN is characterized by small network scale and a relatively simple traffic model, which allows identifying each stream and synchronizing the network; the related mechanisms of TSN were therefore developed mainly for small-scale networks, and in large-scale networks it is difficult to apply them directly to IP forwarding equipment. Moreover, the TSN mechanisms make the processing of deterministic traffic relatively complicated.
  • the current scheduling mechanism of IP networks belongs to the weighted round-robin (WRR) type.
  • the core idea of packet forwarding is to send data packets as quickly as possible.
  • the core indicators are line rate and throughput.
  • the line rate refers to: for a certain type of packet, such as a stream of 128-byte packets, the device can receive at port rate and transmit at port rate when forwarding.
  • The current scheduling of IP devices (i.e., network nodes) does not preserve pacing: even if pacing is good before scheduling, the packets will be gathered together after scheduling. Therefore, in its design concept, the scheduling mechanism of IP networks does not match some requirements of TSN and DetNet (100% reliability and deterministic delay must be guaranteed), so it is difficult to directly apply the TSN scheduling mechanism in some details.
  • The characteristics of IP forwarding, as the old mechanism, are statistical multiplexing, low cost, and large throughput, and its design concept conforms to the bursty traffic model of IP; the scheduling characteristic of TSN, as the new mechanism, is determinism: for specific traffic, it achieves a scheduling effect similar to constant bit rate (CBR).
  • Network forwarding nodes, that is, network nodes that forward packets, can also be called P nodes, P routers, or intermediate routers, as shown in Figure 3.
  • It is not suitable for the operator's forwarding node to identify the service flow by flow.
  • The network forwarding node (on which the forwarding pressure is generally high) cannot be used to analyze each flow. This is because the network forwarding node is generally responsible for forwarding packet by packet, and it is not suitable to burden it with the identification of too many flows.
  • The number of flows on a network forwarding node is large; if a single flow changes, it is not advisable to constantly change the bandwidth reservation of the network forwarding node on the control plane. This also follows the current design concept of IP forwarding routers: in an IP network there are many traffic bursts, and even if the access requirements of certain flows change, the network forwarding nodes do not have to perceive them all.
  • In the embodiments of the present application, each low-latency flow is not identified individually; only the traffic with low-latency requirements is identified as a whole based on the characteristics of the packets, so as to ensure that low-latency traffic is sent on demand.
  • the embodiment of the present application provides a message transmission method, which is applied to a second network node. As shown in FIG. 4, the method includes:
  • Step 401 The second network node obtains the first message; and determines that the service corresponding to the first message is a delay-sensitive service;
  • Step 402 The second network node sets a first identifier for the first message
  • the first identifier represents the delay-sensitive requirement of the first packet.
  • Step 403 The second network node sets the first message with the first identifier in a specific queue for shaping, and then sends the first message with the first identifier.
  • the second network node is a network edge node, which may be referred to as a PE node, a PE router, etc., such as an operator edge node in a backbone network.
  • In step 401, when the first packet is accessed from a specific virtual local area network (VLAN) or from a specific interface, or when the first packet is accessed from a specific interface or VLAN and carries an agreed priority, the second network node considers the service corresponding to the first packet to be a delay-sensitive service, that is, a low-latency service.
  • other methods may also be used to identify that the service corresponding to the first message is a delay-sensitive service, which is not limited in the embodiment of the present application.
  • the first identifier characterizes the delay-sensitive requirement of the first packet; this can also be understood as the first identifier characterizing that the type of the first packet is a delay-sensitive type.
  • the first identifier may be a priority identifier, which may identify that low-latency traffic occupies a higher exclusive priority, such as 6.
  • the first identifier indicates that the first packet has an exclusive sending priority.
  • the first identifier may also be an adjacent SID of segment routing (SR) (expressed in English as adj SID; this type of adjacent SID corresponds to the End.X function in SRv6).
  • Figures 5a and 5b show the formats of two adjacent SIDs. In practical applications, you can specify a specific algorithm in the algorithm section, such as 200, which is used for low-latency traffic transmission.
  • the first identifier may also be a prefix SID of an SR (English may be expressed as Prefix SID, and the type of the prefix SID is End function in SRv6).
  • the adjacent SID in the format shown in FIG. 5a is suitable for a point-to-point (P2P) connection scenario (Type: 43), and the adjacent SID in the format shown in FIG. 5b is suitable for a local area network (LAN) scenario (Type: 44).
  • Figure 6 shows the format of a prefix SID.
  • When the End function is advertised, it is a sub-TLV of the Locator TLV in the Intermediate System to Intermediate System (ISIS) protocol; in this part, a specific algorithm, such as 200, can be specified for low-latency traffic transmission.
  • the second network node may perform separate shaping for each access delay-sensitive service, so that the packets are sent out one by one at a certain rate (the rate required by the service flow);
  • the second network node can obtain the rate through a certain implementation method, for example, through manual/network management configuration (ie static configuration), or control plane transfer mode (which can also be understood as control plane notification), or Data plane transfer method (also called data plane notification method).
  • the method for shaping the second network node may use CBS, or LRQ, TBE of ATS, etc., which is not limited in the embodiment of the present application.
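The edge-node classification and marking described above can be sketched as follows. The packet representation (a plain dict), the VLAN and interface values, and the helper name are assumptions for illustration; only the exclusive priority value 6 comes from the examples in this text.

```python
EXCLUSIVE_PRIORITY = 6                  # exclusive priority for low-latency traffic
DELAY_SENSITIVE_VLANS = {100}           # assumed provisioned access VLANs
DELAY_SENSITIVE_IFACES = {"ge-0/0/1"}   # assumed provisioned access interfaces

def classify_and_mark(pkt):
    """Mark a packet (modeled as a dict) as delay-sensitive when it is
    accessed from an agreed VLAN or interface, per the edge-node rule."""
    if (pkt.get("vlan") in DELAY_SENSITIVE_VLANS
            or pkt.get("in_iface") in DELAY_SENSITIVE_IFACES):
        pkt["priority"] = EXCLUSIVE_PRIORITY  # the "first identifier"
    return pkt
```

Packets that do not match any provisioned access point are left unmarked and follow the ordinary BE path.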
  • an embodiment of the present application also provides a message transmission method, which is applied to a first network node, as shown in FIG. 7, including:
  • Step 701 Receive the first message
  • Step 702 Obtain the first identifier from the first message
  • the first identifier represents the delay-sensitive requirement of the first packet.
  • Step 703 When the first identifier is obtained, set the first message in a specific queue;
  • the specific queue is at least used for buffering (also can be understood as placing) delay-sensitive messages to be sent.
  • Step 704 Use the incoming rate of packets in the specific queue to determine the outgoing rate of packets in the specific queue;
  • Step 705 Shape the specific queue based on the determined output rate, and then send the first message.
  • the first network node is a network forwarding node, which may be called a P node, a P router, etc., such as an operator node in a backbone network.
  • the first network node receives the first message from a previous hop node
  • the previous hop node may be a second network node or a network forwarding node.
  • The first network node configures a special queue in which all delay-sensitive packets are uniformly shaped before being sent out; that is, all packets with the exclusive priority will be aggregated into this queue.
  • When the first identifier is an adjacent SID of an SR, the first network node recognizes that the current SID was issued by itself and forwards the packet to a pre-configured specific queue, in which all delay-sensitive packets are uniformly shaped before being sent; that is, in actual application, all packets whose current SID is this adjacent SID will be aggregated into this queue.
  • the SID list is obtained from the first message; the SID list includes multiple SIDs corresponding to the traffic engineering path;
  • the first message is set in the specific queue.
  • The current SID refers to the SID corresponding to the destination address (DA) in the received first packet; accordingly, the next-hop network node corresponding to the first packet refers to the network node corresponding to the DA in the first packet, that is, the network node that receives the first packet next.
  • When the first identifier is a prefix SID of an SR, the first network node recognizes that the SID was not issued by itself and searches the routing and forwarding table according to the SID; the outbound interface found is a pre-configured specific queue, so the packet is forwarded to that specific queue, in which all delay-sensitive packets are uniformly shaped and sent out. That is, in actual applications, all packets whose outbound interface in the routing table is the specific queue will be aggregated into this queue.
  • the prefix SID is obtained from the first message
  • the first packet is set in the specific queue.
  • In this way, the output rate of packets is limited according to the incoming rate of packets at the destination virtual interface: each port sends these low-latency packets based on the rate of the low-latency traffic it receives (that is, the incoming rate of the virtual interface) while ensuring a certain buffer depth (which can also be understood as a certain packet buffer size), rather than according to the previous mechanism of forwarding as soon as possible (i.e., the BE forwarding mechanism).
  • step 705 may include:
  • the queue depth can be understood as the number of packets in the queue.
  • the using the inbound rate in combination with the queue depth to determine the outbound rate of packets in the specific queue includes:
  • the out rate is a first rate; the first rate is less than the in rate, and the difference between the in rate and the first rate is less than the first value;
  • the outgoing rate is the third rate; the third rate is the preset rate or the last recorded out rate.
  • A statistical period for determining the packet rate can be set, for example 20 µs: the number of packets entering the specific queue within 20 µs is counted, and the incoming rate is determined on that basis.
  • the first value can be set according to needs, as long as the incoming rate is slightly greater than the outgoing rate.
  • the outbound rate may be the fourth rate, and the fourth rate is the set outbound rate threshold (It can be set as required, and is less than the input rate).
  • the preset rate can be set as required.
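One possible reading of the rate rules above is the following sketch. The exact branch conditions are not fully specified in this text, so the queue-depth test, the parameter names, and the fallback order are assumptions for illustration only.

```python
def determine_out_rate(in_rate, queue_depth, depth_threshold,
                       delta, preset_rate, last_out_rate=None):
    """Pick the shaping (out) rate for the specific queue.

    Assumed interpretation of the rules above:
    - in rate measured, shallow queue: use the "first rate", slightly
      below the in rate (the gap delta plays the role of the "first value");
    - in rate measured, deep queue: follow the in rate;
    - no measurable in rate this period: fall back to the "third rate",
      i.e. the last recorded out rate, else a preset rate.
    """
    if in_rate > 0:
        if queue_depth <= depth_threshold:
            return in_rate - delta   # first rate: below in_rate by less than delta
        return in_rate
    return last_out_rate if last_out_rate is not None else preset_rate
```

The "fourth rate" mentioned above (a configured cap below the in rate) could be applied as a final `min()` over the returned value.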
  • the shaping method of the first network node may adopt CBS, or LRQ, TBE of ATS, etc., which is not limited in the embodiment of the present application.
  • the embodiment of the present application also provides a message transmission method. As shown in FIG. 8, the method includes:
  • Step 801 The second network node obtains the first message; and determines that the service corresponding to the first message is a delay-sensitive service;
  • Step 802 The second network node sets a first identifier for the first message; the first identifier represents the delay-sensitive demand of the first message;
  • Step 803 The second network node sets the first message with the first identifier in a specific queue for shaping, and then sends the first message with the first identifier;
  • Step 804 The first network node receives the first message, and obtains the first identifier from the received first message, and when the first identifier is obtained, sets the received first message in a specific queue;
  • the specific queue is at least used for buffering delay-sensitive messages to be sent;
  • Step 805: The first network node uses the incoming rate of packets in the specific queue to determine their outgoing rate; based on the determined outgoing rate, the specific queue is shaped, and the received first packet is sent.
  • the second network node is a network edge node; the first network node is a network forwarding node.
  • In the solutions provided by the embodiments of the present application, a first network node receives a first packet and obtains a first identifier from it; the first identifier represents the delay-sensitive requirement of the first packet. When the first identifier is obtained, the first packet is set in a specific queue, which is at least used for buffering delay-sensitive packets to be sent. The incoming rate of packets in the specific queue is used to determine their outgoing rate; based on the determined outgoing rate, the specific queue is shaped and the first packet is sent.
  • The first network node is a network forwarding node. On the network forwarding node, each low-latency service flow is not identified individually; only the traffic with low-latency requirements is identified as a whole based on packet characteristics, and packets with low-latency requirements are set in a specific queue for shaping.
  • In this way, the network forwarding node can ensure that low-latency traffic is sent in an orderly manner; its processing minimizes the packet loss and buffering delay caused by micro-bursts of packets in the network and can meet the delay requirements of the service.
  • the transmission of low-latency services uses exclusive priority.
  • low-latency traffic and BE traffic use the same destination IP. At this time, low-latency traffic needs to be distinguished, and these low-latency traffic need exclusive priority.
  • the message transmission process of this application embodiment includes:
  • Step 1: The network edge node (such as the PE1 node) identifies the flow of packet Packet1 according to the related technology, shapes the queue where Packet1 is located, and confirms that the correct priority, the exclusive priority such as Priority 6, is assigned;
  • Packet1 belongs to a low-latency flow.
  • Shaping can be done by CBS or ATS.
  • Step 2: Packet1 arrives at the inbound interface Iin1 of the network forwarding node P1, and P1 finds that the outbound interface is Iout3 according to the DA of Packet1;
  • a specific queue corresponding to Priority 6 is configured on Iout3. This specific queue serves low-latency traffic, and this specific queue is associated with Priority 6.
  • The priority of Packet1 is Priority 6; therefore, the outbound queue is the specific queue pre-configured on Iout3 to serve low-latency traffic.
  • For ease of description, it is assumed that the network forwarding node P1 has four interfaces and that each board has only one interface; the processing for multiple interfaces on a board is similar.
  • Step 3: Packet1 arrives at the outbound interface Iout3 of the network forwarding node P1.
  • On Iout3, all Priority 6 packets are placed in the same queue (i.e., the specific queue), regardless of the inbound interface (Iin1, Iin2, or Iin4; that is, all packets with Priority 6 are placed in the specific queue), and are shaped (e.g., with CBS or ATS) using the overall incoming rate as the outgoing rate.
  • An appropriate period for packet rate statistics can be set; for example, statistics are performed once every 20 µs.
  • Because the rate limit is applied at the network entrance, the proportion of low-latency traffic in the entire network is not high at this time; the traffic is mainly BE traffic.
  • On Iout3, the specific queue holding all Priority 6 packets is scheduled together with the other queues (i.e., the BE queues; for example, Priority 0 traffic is BE traffic) according to the relevant mechanism and sent from Iout3. Because low-latency traffic has the higher priority, a packet is sent as soon as the specific queue is non-empty.
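The aggregate-rate shaping of Step 3 can be sketched as a toy model (illustrative only; the 20 µs statistics window, the Priority 6 value, and the idea that packets from any inbound interface share one queue follow the example above, while the class and method names are hypothetical):

```python
from collections import deque

class AggregateRateShaper:
    """Toy model of Step 3: every Priority 6 packet, whatever its inbound
    interface, enters one specific queue; the measured aggregate ingress
    rate over a statistics window becomes the egress shaping rate."""

    def __init__(self, window_us=20):
        self.window_us = window_us        # statistics period, e.g. 20 us
        self.bytes_in_window = 0          # bytes that arrived in this window
        self.egress_rate_bps = 0          # shaping rate applied when sending
        self.queue = deque()              # the specific queue

    def enqueue(self, length_bytes, priority):
        """Accept only packets carrying the exclusive low-latency priority."""
        if priority != 6:
            return False                  # BE traffic goes to other queues
        self.queue.append(length_bytes)
        self.bytes_in_window += length_bytes
        return True

    def close_window(self):
        """End the statistics period: the ingress rate becomes the egress rate."""
        self.egress_rate_bps = self.bytes_in_window * 8 * 1_000_000 // self.window_us
        self.bytes_in_window = 0
        return self.egress_rate_bps
```

For instance, three 1500-byte Priority 6 packets arriving from Iin1, Iin2, and Iin4 within one 20 µs window would set the egress shaping rate to 1.8 Gbit/s for the next window.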
  • the low-latency service uses a dedicated adjacency SID.
  • the message transmission process of this application embodiment includes:
  • Step 1: The network edge node (such as the PE1 node) identifies the flow of packet Packet1 according to related technologies, shapes the queue where Packet1 is located, and confirms that the correct SID list is attached (these SIDs form a strict traffic engineering (TE) path and point to a specific queue resource on each node);
  • Packet1 belongs to a low-latency flow.
  • Shaping can be done by CBS or ATS.
  • Step 2: Packet1 arrives at the inbound interface Iin1 of the network forwarding node P1 with current SID SID13; according to SID13, the forwarding device finds the specific queue Queue3 on the outbound interface Iout3;
  • For simplicity, assume the network forwarding node P1 has four interfaces and each board has only one interface; the processing is similar when a board has multiple interfaces.
  • Step 3: Packet1 arrives at the outbound interface Iout3 of the network forwarding node P1, and the specific queue Queue3 of Iout3 is shaped (by CBS, ATS, etc.) using the overall queue ingress rate as the outbound rate.
  • the specific queue Queue3 where all SID13 messages are located is scheduled with other queues according to the existing mechanism and sent from Iout3.
  • these dedicated adjacency SIDs can be advertised to the network for network programming to meet the demands of low-latency services.
  • the low-latency service uses a special prefix SID.
  • Different prefix SIDs are assigned to low-latency traffic and to BE traffic, so that the two kinds of traffic can be distinguished directly by SID.
  • the message transmission process of this application embodiment includes:
  • Step 1: At the network edge node (such as the PE1 node), identify the traffic of packet Packet1 according to the related technology, shape the queue where Packet1 is located, and confirm that the correct prefix SID is attached. (In the SRv6 scenario, the Locator corresponding to the SID carries a related identifier, for example a Flex-Algo ID; the outbound interface of its forwarding entry points to a specific queue resource on each node; each node generates a related forwarding entry, and the Locator is the address part of the SRv6 SID, used to route to the node that advertises the SID.)
  • Packet1 belongs to a low-latency flow.
  • The difference from Application Example 2 is that an adjacency SID generally needs to be specified per hop, forming a label stack (SR-TE), whereas here only one SID (SR-BE) is required, which is a global label (for example, corresponding to a PE node).
  • Step 2: Packet1 arrives at the inbound interface Iin1 of the network forwarding node P1; the node looks up the forwarding table according to SID9, and the result points to the specific queue Queue3 of outbound interface Iout3;
  • For simplicity, assume the network forwarding node P1 has four interfaces and each board has only one interface; the processing is similar when a board has multiple interfaces.
  • A different prefix SID arriving at the inbound interface Iin1 of the device may also correspond to Queue3 on the outbound interface; such a prefix SID is similar to SID9 and likewise represents low-latency traffic;
  • The inbound interfaces Iin2 and Iin4 of the device may also receive packets carrying SID9 or other such prefix SIDs; the corresponding queue on the outbound interface is also Queue3, so all packets of the low-latency traffic are placed in the specific queue Queue3.
  • Step 3: Packet1 arrives at the outbound interface Iout3 of the network forwarding node P1; the overall queue ingress rate is used as the outbound rate for shaping (by CBS, ATS, etc.) before sending, and Queue3 is scheduled together with the other queues according to the existing mechanism and sent from Iout3.
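The SID-to-queue lookup in the two SID-based examples can be sketched as a small forwarding table (illustrative only; SID13, SID9, Iout3, and Queue3 are the names used in the examples, while SID1 and Queue0 are hypothetical best-effort counterparts):

```python
# Illustrative forwarding entries: a dedicated adjacency SID (SR-TE) and a
# dedicated prefix SID (SR-BE) both resolve to the specific low-latency
# queue Queue3 on outbound interface Iout3.
FORWARDING_TABLE = {
    "SID13": ("Iout3", "Queue3"),  # adjacency SID of the low-latency service
    "SID9":  ("Iout3", "Queue3"),  # prefix SID of the low-latency service
    "SID1":  ("Iout3", "Queue0"),  # ordinary SID: same interface, BE queue
}

def select_queue(current_sid):
    """Look up the outbound interface and queue for the packet's current SID;
    unknown SIDs fall back to the best-effort queue."""
    return FORWARDING_TABLE.get(current_sid, ("Iout3", "Queue0"))
```

Because the dedicated SIDs resolve to the specific queue directly, no per-flow classification is needed at the forwarding node.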
  • these special prefix SIDs can be advertised to the network for network programming to meet the demands of low-latency services.
  • each port of each node can be configured with a specific queue, which becomes the egress for these special prefix SIDs and supports the forwarding of low-latency traffic.
  • The application embodiments construct a mechanism to ensure low-latency transmission in a larger Layer 3 IP network, which specifically includes:
  • Network edge nodes perform traffic identification and rate limiting;
  • Each port of the network forwarding node monitors the rate of received low-latency traffic, and shapes and forwards low-latency packets according to the monitored rate.
  • On each physical outbound interface, a specific virtual interface is carved out, corresponding to a specific queue provided for all low-latency traffic, and shaping is performed according to the aggregated low-latency traffic.
  • In this way, low-latency traffic is guaranteed to be sent on demand; the processing at the network forwarding node minimizes packet micro-bursts and maintains a suitable buffer depth for low-latency flows.
  • The identification mechanisms used above to recognize low-latency traffic include:
  • The first method uses an exclusive IP/Multiprotocol Label Switching (MPLS) priority to represent low-latency traffic, and is suitable for IP/MPLS networks.
  • The second method uses a specific SID to represent low-latency traffic, and is suitable for SR networks.
  • The specific SID may specifically be the adjacency SID of SR-TE or the node SID of SR-BE (that is, the prefix SID).
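The two identification methods can be combined into a single check at a forwarding node (a minimal sketch; the priority value and the SID set are taken from the application examples above and are not normative):

```python
LOW_LATENCY_PRIORITY = 6                # method 1: exclusive IP/MPLS priority
LOW_LATENCY_SIDS = {"SID13", "SID9"}    # method 2: dedicated adjacency/prefix SIDs

def is_low_latency(priority=None, sid=None):
    """Return True when a packet should enter the specific queue, either by
    exclusive priority (IP/MPLS networks) or by dedicated SID (SR networks)."""
    return priority == LOW_LATENCY_PRIORITY or sid in LOW_LATENCY_SIDS
```

Note that both methods classify traffic in the aggregate: no per-flow state is kept at the forwarding node.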
  • In summary, a relatively simple mechanism (1. identification and aggregate marking of low-latency traffic; 2. dedicated shaping and forwarding of low-latency traffic) is adopted to reduce the formation of micro-bursts in IP forwarding (packets do not form bursts through being forwarded as early as possible), ensuring fast and orderly forwarding of low-latency traffic in the IP network (packets are forwarded in a pacing mode as far as possible). In a larger network, where the main delay is optical fiber propagation delay, as long as the network ensures normal forwarding as far as possible (that is, forwards packets in an orderly manner so that packets do not bunch together) and does not form micro-bursts that cause packet loss, it can provide services with lower delay, without introducing overly complicated flow recognition or excessive state control.
  • the embodiment of the present application also provides a message transmission device, which is set on a first network node. As shown in FIG. 11, the device includes:
  • the receiving unit 111 is configured to receive the first message
  • the first acquiring unit 112 is configured to acquire a first identifier from a first message; the first identifier represents the delay-sensitive requirement of the first message;
  • the first processing unit 113 is configured to: when the first identifier is obtained, set the first message in a specific queue (the specific queue is at least used to buffer the delay-sensitive messages to be sent); use the incoming rate of messages in the specific queue to determine the outgoing rate of messages in the specific queue; and shape the specific queue based on the determined outgoing rate and send out the first message; wherein,
  • the first network node is a network forwarding node.
  • the first obtaining unit 112 is configured to obtain an SID list from the first message; the SID list includes multiple SIDs corresponding to the traffic engineering path;
  • the first processing unit 113 is configured to set the first packet in the specific queue when it is determined that the current SID indicates a specific queue on the next-hop network node corresponding to the first packet.
  • the first obtaining unit 112 is configured to: obtain a prefix SID from the first message, and look up, in the routing and forwarding table, the outbound interface corresponding to the obtained prefix SID;
  • the first processing unit 113 is configured to set the first packet in the specific queue when the found outbound interface corresponds to the specific queue.
  • the first processing unit 113 is configured to use the ingress rate, combined with the queue depth, to determine the outbound rate of packets in the specific queue.
  • Using the inbound rate in combination with the queue depth to determine the outbound rate of packets in the specific queue includes:
  • when the queue depth is less than a threshold, the first processing unit 113 determines that the out rate is a first rate; the first rate is less than the in rate, and the difference between the in rate and the first rate is less than a first value;
  • when the queue depth is equal to the threshold, the first processing unit 113 determines that the out rate is a second rate; the second rate is equal to the in rate;
  • when the in rate is zero, the first processing unit 113 determines that the out rate is a third rate; the third rate is a preset rate or the last recorded out rate.
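The three out-rate rules can be written as one function (a sketch; the depth threshold, the "first value", and the fallback rate are parameters whose concrete values the text leaves open, and halving the decrement is an arbitrary illustrative choice that keeps the difference below the first value):

```python
def determine_out_rate(in_rate, queue_depth, depth_threshold,
                       first_value, fallback_rate):
    """Pick the out rate of the specific queue from its in rate and depth."""
    if in_rate == 0:
        # Third rate: nothing is arriving, so use a preset rate (or the
        # last recorded out rate) to let the queue drain.
        return fallback_rate
    if queue_depth < depth_threshold:
        # First rate: send slightly slower than packets arrive, so the
        # queue builds toward the target depth; the decrement stays
        # below first_value.
        return in_rate - first_value / 2
    # Second rate: target depth reached, match the in rate exactly.
    return in_rate
```

The effect is that the specific queue converges to, and then holds, the target buffer depth instead of being drained as fast as possible in the BE manner.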
  • the receiving unit 111 can be implemented by a communication interface in a message transmission device; the first acquiring unit 112 and the first processing unit 113 can be implemented by a processor in the message transmission device.
  • the embodiment of the present application also provides a message transmission device, which is set on the second network node. As shown in FIG. 12, the device includes:
  • the second obtaining unit 121 is configured to obtain the first message; and determine that the service corresponding to the first message is a delay-sensitive service;
  • the second processing unit 122 is configured to set a first identifier for the first message (the first identifier represents the delay-sensitive requirement of the first message), set the first message carrying the first identifier in a specific queue for shaping, and then send the first message carrying the first identifier.
  • the second acquisition unit 121 may be implemented by a processor in a message transmission device in combination with a communication interface; the second processing unit 122 may be implemented by a processor in the message transmission device.
  • When the message transmission device provided in the above embodiment performs message transmission, the division into the above program modules is only used as an example for illustration; in practical applications, the above processing can be allocated to different program modules as needed, that is, the internal structure of the device can be divided into different program modules to complete all or part of the processing described above.
  • the message transmission device provided in the foregoing embodiment and the message transmission method embodiment belong to the same concept, and the specific implementation process is detailed in the method embodiment, which will not be repeated here.
  • the embodiment of the present application also provides a network node.
  • the network node 130 includes:
  • the first communication interface 131 can exchange information with other network nodes;
  • the first processor 132 is connected to the first communication interface 131 to implement information interaction with other network nodes, and is configured to execute the method provided by one or more technical solutions on the first network node side when it is configured to run a computer program.
  • the computer program is stored in the first memory 133.
  • the first communication interface 131 is configured to receive the first message
  • the first processor 132 is configured to: obtain a first identifier from the first message (the first identifier represents the delay-sensitive requirement of the first message); when the first identifier is obtained, set the first message in a specific queue (the specific queue is used at least to buffer the delay-sensitive messages to be sent); use the incoming rate of messages in the specific queue to determine the outgoing rate of messages in the specific queue; and, based on the determined outgoing rate, shape the specific queue and send out the first message through the first communication interface; wherein,
  • the network node is a network forwarding node.
  • the first processor 132 is configured to: obtain an SID list from the first message (the SID list contains multiple SIDs corresponding to the traffic engineering path), and, when it is determined that the current SID indicates a specific queue on the next-hop network node corresponding to the first message, set the first message in the specific queue.
  • the first processor 132 is configured to: obtain a prefix SID from the first message, look up, in the routing and forwarding table, the outbound interface corresponding to the obtained prefix SID, and, when the found outbound interface corresponds to the specific queue, set the first packet in the specific queue.
  • the first processor 132 is configured to use the ingress rate, combined with the queue depth, to determine the outbound rate of packets in the specific queue.
  • Using the inbound rate in combination with the queue depth to determine the outbound rate of packets in the specific queue includes:
  • when the queue depth is less than a threshold, the first processor 132 determines that the out rate is a first rate; the first rate is less than the in rate, and the difference between the in rate and the first rate is less than a first value;
  • when the queue depth is equal to the threshold, the first processor 132 determines that the out rate is a second rate; the second rate is equal to the in rate;
  • when the in rate is zero, the first processor 132 determines that the out rate is a third rate; the third rate is a preset rate or the last recorded out rate.
  • bus system 134 is configured to implement connection and communication between these components.
  • bus system 134 also includes a power bus, a control bus, and a status signal bus.
  • various buses are marked as the bus system 134 in FIG. 13.
  • the first memory 133 in the embodiment of the present application is configured to store various types of data to support the operation of the network node 130. Examples of such data include: any computer program used to operate on the network node 130.
  • the methods disclosed in the foregoing embodiments of the present application may be applied to the first processor 132 or implemented by the first processor 132.
  • the first processor 132 may be an integrated circuit chip with signal processing capability. In the implementation process, the steps of the foregoing method may be completed by an integrated logic circuit of hardware in the first processor 132 or instructions in the form of software.
  • the aforementioned first processor 132 may be a general-purpose processor, a digital signal processor (DSP, Digital Signal Processor), or other programmable logic devices, discrete gates or transistor logic devices, discrete hardware components, and the like.
  • the first processor 132 may implement or execute the methods, steps, and logical block diagrams disclosed in the embodiments of the present application.
  • the general-purpose processor may be a microprocessor or any conventional processor or the like.
  • the software module may be located in a storage medium, and the storage medium is located in the first memory 133.
  • the first processor 132 reads the information in the first memory 133 and completes the steps of the foregoing method in combination with its hardware.
  • In an exemplary embodiment, the network node 130 may be implemented by one or more Application-Specific Integrated Circuits (ASICs), DSPs, Programmable Logic Devices (PLDs), Complex Programmable Logic Devices (CPLDs), Field-Programmable Gate Arrays (FPGAs), general-purpose processors, controllers, microcontrollers (MCUs), microprocessors, or other electronic components, to perform the aforementioned method.
  • the embodiment of the present application also provides a network node.
  • the network node 140 includes:
  • the second communication interface 141 can exchange information with other network nodes
  • the second processor 142 is connected to the second communication interface 141 to implement information interaction with other network nodes, and is configured to execute the method provided by one or more technical solutions on the second network node side when it is configured to run a computer program.
  • the computer program is stored in the second storage 143.
  • the second communication interface 141 is configured to obtain the first message
  • the second processor 142 is configured to: determine that the service corresponding to the first message is a delay-sensitive service; set a first identifier for the first message; set the first message carrying the first identifier in a specific queue for shaping; and then send the first message carrying the first identifier through the second communication interface 141.
  • bus system 144 is configured to implement connection and communication between these components.
  • bus system 144 also includes a power bus, a control bus, and a status signal bus.
  • various buses are marked as the bus system 144 in FIG. 14.
  • the second memory 143 in the embodiment of the present application is configured to store various types of data to support the operation of the network node 140. Examples of such data include: any computer program used to operate on the network node 140.
  • the method disclosed in the foregoing embodiment of the present application may be applied to the second processor 142 or implemented by the second processor 142.
  • the second processor 142 may be an integrated circuit chip with signal processing capability. In the implementation process, the steps of the foregoing method may be completed by an integrated logic circuit of hardware in the second processor 142 or instructions in the form of software.
  • the aforementioned second processor 142 may be a general-purpose processor, a DSP, or other programmable logic devices, discrete gate or transistor logic devices, discrete hardware components, or the like.
  • the second processor 142 may implement or execute the methods, steps, and logical block diagrams disclosed in the embodiments of the present application.
  • the general-purpose processor may be a microprocessor or any conventional processor or the like.
  • the software module may be located in a storage medium, and the storage medium is located in the second memory 143.
  • the second processor 142 reads the information in the second memory 143 and completes the steps of the foregoing method in combination with its hardware.
  • the network node 140 may be implemented by one or more ASICs, DSPs, PLDs, CPLDs, FPGAs, general-purpose processors, controllers, MCUs, Microprocessors, or other electronic components for performing the aforementioned methods.
  • the memory (the first memory 133, the second memory 143) of the embodiment of the present application may be a volatile memory or a non-volatile memory, and may also include both volatile and non-volatile memory.
  • The non-volatile memory may be a Read-Only Memory (ROM), a Programmable Read-Only Memory (PROM), an Erasable Programmable Read-Only Memory (EPROM), an Electrically Erasable Programmable Read-Only Memory (EEPROM), a ferromagnetic random access memory (FRAM), a flash memory, a magnetic surface memory, an optical disc, or a Compact Disc Read-Only Memory (CD-ROM); the magnetic surface memory may be a magnetic disk memory or a magnetic tape memory.
  • the volatile memory may be a random access memory (RAM, Random Access Memory), which is used as an external cache.
  • By way of example rather than limitation, many forms of RAM are available, such as Static Random Access Memory (SRAM), Synchronous Static Random Access Memory (SSRAM), Dynamic Random Access Memory (DRAM), Synchronous Dynamic Random Access Memory (SDRAM), Double Data Rate Synchronous Dynamic Random Access Memory (DDR SDRAM), Enhanced Synchronous Dynamic Random Access Memory (ESDRAM), SyncLink Dynamic Random Access Memory (SLDRAM), and Direct Rambus Random Access Memory (DRRAM).
  • the memories described in the embodiments of the present application are intended to include, but are not limited to, these and any other suitable types of memories.
  • the embodiment of the present application also provides a message transmission system.
  • the system includes a plurality of first network nodes and second network nodes.
  • The embodiment of the present application also provides a storage medium, that is, a computer storage medium, specifically a computer-readable storage medium, such as the first memory 133 storing a computer program, which can be executed by the first processor 132 of the network node 130 to complete the steps of the foregoing first-network-node-side method.
  • The storage medium likewise includes the second memory 143 storing a computer program, which can be executed by the second processor 142 of the network node 140 to complete the steps of the foregoing second-network-node-side method.
  • the computer-readable storage medium may be a memory such as FRAM, ROM, PROM, EPROM, EEPROM, Flash Memory, magnetic surface memory, optical disk, or CD-ROM.


Abstract

The present application discloses a packet transmission method and device, a network node, and a storage medium. The method comprises: a first network node receiving a first packet; acquiring a first identifier from the first packet, the first identifier being used to indicate that the first packet has a delay-sensitive requirement; upon acquiring the first identifier, configuring the first packet in a specific queue, the specific queue at least being used to buffer delay-sensitive packets to be sent; determining a packet outgoing rate of the specific queue by using a packet incoming rate of the specific queue; and shaping the specific queue on the basis of the determined outgoing rate, and sending the first packet, wherein the first network node is a forwarding node in a network.

Description

Message transmission method, device, network node, and storage medium
Cross-reference to related applications
This application is based on, and claims priority to, Chinese patent application No. 202010157089.8 filed on March 9, 2020, the entire content of which is incorporated herein by reference.
Technical field
This application relates to the field of Internet Protocol (IP) networks, and in particular to a message transmission method, device, network node, and storage medium.
Background
At present, IP networks forward messages according to the basic idea of Best Effort (BE), so it is difficult to guarantee deterministic delay. Therefore, in some fifth-generation mobile communication (5G) Ultra-Reliable and Low-Latency Communications (URLLC) scenarios, the deterministic delay requirements of services cannot be met.
Summary
To solve the related technical problems, embodiments of the present application provide a message transmission method, device, network node, and storage medium.
The technical solutions of the embodiments of the present application are implemented as follows:
An embodiment of the present application provides a message transmission method applied to a first network node, including:
receiving a first message;
obtaining a first identifier from the first message, the first identifier representing that the first message has a delay-sensitive requirement;
when the first identifier is obtained, setting the first message in a specific queue, the specific queue being used at least to buffer delay-sensitive messages to be sent;
using the incoming rate of messages in the specific queue to determine the outgoing rate of messages in the specific queue;
shaping the specific queue based on the determined outgoing rate and sending out the first message; wherein
the first network node is a network forwarding node.
In the above solution, the first identifier indicates that the first message has an exclusive sending priority.
In the above solution, obtaining the first identifier from the first message includes:
obtaining a segment identifier list (SID list) from the first message, the SID list containing multiple SIDs corresponding to a traffic engineering path;
when it is determined that the current SID indicates a specific queue on the next-hop network node corresponding to the first message, setting the first message in the specific queue.
In the above solution, obtaining the first identifier from the first message includes:
obtaining a prefix SID from the first message;
looking up, in the routing and forwarding table, the outbound interface corresponding to the obtained prefix SID;
when the found outbound interface corresponds to the specific queue, setting the first message in the specific queue.
In the above solution, using the incoming rate of messages in the specific queue to determine the outgoing rate of messages in the specific queue includes:
using the incoming rate, combined with the queue depth, to determine the outgoing rate of messages in the specific queue.
In the above solution, using the incoming rate combined with the queue depth to determine the outgoing rate of messages in the specific queue includes:
when the queue depth is less than a threshold, determining that the outgoing rate is a first rate, the first rate being less than the incoming rate, and the difference between the incoming rate and the first rate being less than a first value;
or,
when the queue depth is equal to the threshold, determining that the outgoing rate is a second rate, the second rate being equal to the incoming rate;
or,
when the incoming rate is zero, determining that the outgoing rate is a third rate, the third rate being a preset rate or the last recorded outgoing rate.
An embodiment of the present application also provides a message transmission method, including:
a second network node obtaining a first message and determining that the service corresponding to the first message is a delay-sensitive service;
the second network node setting a first identifier for the first message, the first identifier representing that the first message has a delay-sensitive requirement;
the second network node setting the first message carrying the first identifier in a specific queue for shaping, and then sending out the first message carrying the first identifier;
a first network node receiving the first message and obtaining the first identifier from the received first message;
the first network node, when the first identifier is obtained, setting the received first message in a specific queue, the specific queue being used at least to buffer delay-sensitive messages to be sent;
the first network node using the incoming rate of messages in the specific queue to determine the outgoing rate of messages in the specific queue, shaping the specific queue based on the determined outgoing rate, and sending out the received first message; wherein
the second network node is a network edge node, and the first network node is a network forwarding node.
An embodiment of the present application also provides a message transmission device, set on a first network node, including:
a receiving unit configured to receive a first message;
a first obtaining unit configured to obtain a first identifier from the first message, the first identifier representing that the first message has a delay-sensitive requirement;
a first processing unit configured to: when the first identifier is obtained, set the first message in a specific queue, the specific queue being used at least to buffer delay-sensitive messages to be sent; use the incoming rate of messages in the specific queue to determine the outgoing rate of messages in the specific queue; and shape the specific queue based on the determined outgoing rate and send out the first message; wherein
the first network node is a network forwarding node.
An embodiment of the present application also provides a network node, including a first communication interface and a first processor; wherein
the first communication interface is configured to receive a first message;
the first processor is configured to: obtain a first identifier from the first message, the first identifier representing that the first message has a delay-sensitive requirement; when the first identifier is obtained, set the first message in a specific queue, the specific queue being used at least to buffer delay-sensitive messages to be sent; use the incoming rate of messages in the specific queue to determine the outgoing rate of messages in the specific queue; and, based on the determined outgoing rate, shape the specific queue and send out the first message through the first communication interface; wherein
the network node is a network forwarding node.
An embodiment of the present application also provides a network node, including a first processor and a first memory configured to store a computer program runnable on the processor,
wherein the first processor is configured to execute, when running the computer program, the steps of any of the above methods on the first network node side.
An embodiment of the present application also provides a storage medium on which a computer program is stored; when the computer program is executed by a processor, the steps of any of the above methods on the first network node side are implemented.
本申请实施例提供的报文传输方法、装置、网络节点及存储介质，第一网络节点接收第一报文；从第一报文中获取第一标识；所述第一标识表征所述第一报文具有时延敏感的需求；在获取到所述第一标识的情况下，将所述第一报文设置在特定队列中；所述特定队列至少用于缓存待发送时延敏感报文；利用所述特定队列报文的入速率，确定所述特定队列报文的出速率；基于确定的出速率，对所述特定队列进行整形，发出所述第一报文；其中，所述第一网络节点为网络转发节点，在网络转发节点上，不识别每个低时延业务流，仅仅根据包的特征，整体上识别是低时延需求的流，并将低时延需求的报文设置在特定队列中进行整形，尽量保证特定队列中具有一定的队列深度，而不是采用BE的尽快转发机制，如此，网络转发节点能够保证低时延流量按需有序发送，且网络转发节点的处理尽量减少了网络中报文的微突发带来的丢包和缓存时延，能够满足业务的时延需求。According to the message transmission method, device, network node, and storage medium provided in the embodiments of the present application, the first network node receives a first message; obtains a first identifier from the first message, the first identifier indicating that the first message has a delay-sensitive requirement; when the first identifier is obtained, sets the first message in a specific queue, the specific queue being at least used to buffer delay-sensitive messages to be sent; uses the incoming rate of messages in the specific queue to determine the outgoing rate of messages in the specific queue; and, based on the determined outgoing rate, shapes the specific queue and sends out the first message; wherein the first network node is a network forwarding node. On the network forwarding node, each low-latency service flow is not identified individually; flows with low-latency requirements are identified as a whole only by packet characteristics, and packets with low-latency requirements are placed in a specific queue for shaping, keeping a certain queue depth in the specific queue instead of using the best-effort (BE) forward-as-fast-as-possible mechanism. In this way, the network forwarding node can ensure that low-latency traffic is sent on demand and in order, and the node's processing minimizes the packet loss and buffering delay caused by micro-bursts of packets in the network, so the delay requirements of the service can be met.
附图说明Description of the drawings
图1为一种确定性网络架构示意图;Figure 1 is a schematic diagram of a deterministic network architecture;
图2为一种IP设备的微突发示意图;Figure 2 is a schematic diagram of a micro burst of an IP device;
图3为一种IP网络架构示意图;Figure 3 is a schematic diagram of an IP network architecture;
图4本申请实施例第二网络节点侧的报文传输的方法流程示意图;FIG. 4 is a schematic flowchart of a method for message transmission on the side of a second network node in an embodiment of the present application;
图5a和图5b为本申请实施例邻接SID格式示意图;5a and 5b are schematic diagrams of adjacent SID formats according to an embodiment of this application;
图6为本申请实施例前缀SID格式示意图;Fig. 6 is a schematic diagram of a prefix SID format according to an embodiment of the application;
图7为本申请实施例第一网络节点侧的报文传输的方法流程示意图；FIG. 7 is a schematic flowchart of a method for message transmission on the first network node side according to an embodiment of this application;
图8为本申请实施例报文传输的方法流程示意图；FIG. 8 is a schematic flowchart of a method for message transmission according to an embodiment of this application;
图9为本申请应用实施例网络转发节点的入接口与出接口对应关系示意图;FIG. 9 is a schematic diagram of the corresponding relationship between the inbound interface and the outbound interface of a network forwarding node according to an application embodiment of this application;
图10为本申请实施例物理接口与虚拟接口关系示意图;FIG. 10 is a schematic diagram of the relationship between a physical interface and a virtual interface according to an embodiment of the application;
图11为本申请实施例一种报文传输装置结构示意图;FIG. 11 is a schematic structural diagram of a message transmission device according to an embodiment of the application;
图12为本申请实施例另一种报文传输装置结构示意图;FIG. 12 is a schematic structural diagram of another message transmission device according to an embodiment of the application;
图13为本申请实施例一种网络节点结构示意图;FIG. 13 is a schematic diagram of a network node structure according to an embodiment of this application;
图14为本申请实施例另一种网络节点结构示意图。FIG. 14 is a schematic diagram of another network node structure according to an embodiment of the application.
具体实施方式Detailed Description
下面结合附图及实施例对本申请再作进一步详细的描述。The application will be further described in detail below in conjunction with the drawings and embodiments.
时延敏感网络(TSN,Time Sensitive Networking)由用于音视频网络的音视频桥接(AVB,Ethernet Audio/Video Bridging)演进而来，是在电气和电子工程师协会(IEEE)定义的一个协议集，主要用于较小的专用网络(需求是10~100μs时延的以太网(Ethernet)，典型的如车内的网络(可以理解为设置在一个车辆内的多个设备形成的网络)或者工业控制网络)，也定义过用于较大的网络，比如，用于前传网络，主要思路是高优先级和包(英文表达为Packet)抢占。Time Sensitive Networking (TSN) evolved from Audio/Video Bridging (AVB, Ethernet Audio/Video Bridging) used in audio and video networks. It is a protocol set defined by the Institute of Electrical and Electronics Engineers (IEEE), mainly used in small dedicated networks (Ethernet requiring a delay of 10 to 100 μs, typically in-vehicle networks (which can be understood as a network formed by multiple devices installed in one vehicle) or industrial control networks). It has also been defined for larger networks, for example fronthaul networks; the main ideas are high priority and packet preemption.
TSN网络的调度机制主要包含以下几方面:The scheduling mechanism of TSN network mainly includes the following aspects:
1、基于信用值的整形器(CBS,credit based shaper)：针对一个队列的调度机制；队列会按照约定的速率得到信用值，信用值大于或等于0就可以发送数据包，发送数据包时，信用值会降低；该整形器的效果是：对队列的包进行整形，按照约定的速率逐个发出(也被称为pacing)；整形后，在发送端口一般这种流量会跟BE流量并存，且这种流量需要一个较高的优先级，来保证不受BE流量的干扰，以保持前面队列整形的效果；1. Credit-based shaper (CBS): a scheduling mechanism for a single queue. The queue gains credit at an agreed rate, and a data packet may be sent only when the credit value is greater than or equal to 0; sending a packet reduces the credit. The effect of this shaper is to shape the packets of the queue and send them out one by one at the agreed rate (also called pacing). After shaping, this traffic generally coexists with BE traffic at the sending port, and it needs a higher priority to ensure it is not interfered with by BE traffic, so as to preserve the shaping effect of the preceding queue;
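The credit rule above can be sketched as follows. This is a minimal illustration, not taken from this application: the class name, parameter values, and the simplified event-driven form are assumptions, and a full CBS implementation would also reset positive credit when the queue empties.

```python
class CreditBasedShaper:
    """Simplified CBS: credit accrues at idle_rate while waiting; a packet may
    be sent only when credit >= 0, and transmitting drains the credit."""

    def __init__(self, idle_rate_bps, port_rate_bps):
        self.idle_rate = idle_rate_bps   # agreed rate: credit gained per second
        self.port_rate = port_rate_bps   # physical port speed
        self.credit = 0.0

    def try_send(self, pkt_bits, now, last_time):
        # Accrue credit for the time elapsed since the last event.
        self.credit += self.idle_rate * (now - last_time)
        if self.credit < 0:
            return False                 # must wait until credit reaches 0
        # During the transmission time, credit drains at (port_rate - idle_rate).
        tx_time = pkt_bits / self.port_rate
        self.credit -= (self.port_rate - self.idle_rate) * tx_time
        return True
```

With, say, a 1 Mbit/s agreed rate on a 1 Gbit/s port, one 8000-bit packet drives the credit far negative, so the next packet is held back for roughly 8 ms — which is exactly the per-packet pacing effect described above.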
2、时间敏感队列(TSQ,Time Sensitive queues):采用门控机制,设备上所有的队列(queue)使用一个循环调度机制(依据一个以ns为粒度的门控表格,控制队列的打开关闭),设备间的同步依赖于精确时间协议(PTP),通过路径上各个设备协同,精确地打开关闭门控,支持TSN流量 的最快转发;2. Time Sensitive Queues (TSQ, Time Sensitive Queues): Using a gating mechanism, all queues on the device use a circular scheduling mechanism (based on a gating table with a granularity of ns to control the opening and closing of the queue), The synchronization between devices relies on the Precision Time Protocol (PTP), through the coordination of each device on the path, the gate can be opened and closed accurately, and the fastest forwarding of TSN traffic is supported;
3、传输抢占机制(Transmission preemption):包抢占策略,支持高优先级包打断正在发送的低优先级包;3. Transmission preemption: packet preemption strategy, supports high-priority packets to interrupt the low-priority packets being sent;
4、入口调度和循环队列转发(CQF,Input scheduling and Cyclic Queuing and Forwarding)机制:在入口正确的时间窗到达,然后保证在确定的时间窗从出口发出,使用周期循环的几个队列作为发送队列;4. Ingress scheduling and cyclic queue forwarding (CQF, Input scheduling and Cyclic Queuing and Forwarding) mechanism: arrive at the correct time window at the entrance, and then ensure that it is sent from the exit at a certain time window, using several queues that cycle periodically as the sending queue ;
5、基于紧急度的调度(UBS,Urgency Based Scheduling)，也称为异步流量整形(ATS,Asynchronous Traffic Shaping)：目的是比CQF提供更好的总体时延，成本低，并且不需要设备之间的同步；其机制目前支持两种整形的方式，分别是长度速率比例(LRQ,Length Rate Quotient)、令牌桶仿真(TBE,Token Bucket Emulation)方式，这两种方式的调度效果上跟CBS类似，都适用于队列流量的pacing；5. Urgency Based Scheduling (UBS), also known as Asynchronous Traffic Shaping (ATS): its purpose is to provide better overall delay than CQF at lower cost, without requiring synchronization between devices. The mechanism currently supports two shaping methods, Length Rate Quotient (LRQ) and Token Bucket Emulation (TBE); the scheduling effect of both is similar to that of CBS, and both are applicable to pacing of queue traffic;
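The TBE mode mentioned above behaves like a classic token bucket; a minimal sketch follows (the rate and burst numbers are illustrative assumptions, not part of this application):

```python
class TokenBucket:
    """Token-bucket pacing in the spirit of ATS's TBE mode: tokens refill at
    `rate` bytes/s up to `burst`; a packet is released only when enough
    tokens are available."""

    def __init__(self, rate, burst):
        self.rate = rate        # refill rate, bytes per second
        self.burst = burst      # bucket capacity, bytes
        self.tokens = burst
        self.last = 0.0

    def admit(self, pkt_bytes, now):
        # Refill for the elapsed time, capped at the bucket capacity.
        self.tokens = min(self.burst, self.tokens + self.rate * (now - self.last))
        self.last = now
        if self.tokens >= pkt_bytes:
            self.tokens -= pkt_bytes
            return True
        return False            # packet must wait for more tokens
```

A bucket of 1500 bytes at 1000 bytes/s admits one full-size frame immediately, then holds the next one until enough tokens have refilled — the same per-queue pacing effect CBS achieves.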
6、分组包复制和消除(PRE,Packet Replication and Elimination):多发选收,也可以称为帧复制和消除(FRER,Frame Replication and Elimination Reliability)。6. Packet Replication and Elimination (PRE, Packet Replication and Elimination): Multiple transmission and selective reception, also called Frame Replication and Elimination (FRER, Frame Replication and Elimination).
确定性需求绝不限于局部二层网络，来自不同机构的二十多位作者联合撰写了确定性网络用例(Use Case)文稿，阐述了在九大产业里的需求，包括：专业音频和视频(pro audio&video)、电力公司(electrical utilities)、楼宇自动化系统(building automation systems)、工业无线(wireless for industrial)、蜂窝无线电(cellular radio)、工业机器对机器通信(industrial M2M)、矿业(mining industry)、私有区块链和网络切片(private blockchain and network slicing)等；同时，需求场景规模可能会很大，包括全国性网络，大量的设备和超远的距离。基于此，产生了确定性网络(DetNet,Deterministic Networking)，DetNet是在国际互联网工程任务组(IETF)定义的确定性网络架构，主要关注三层网络的确定性，将TSN的能力从二层扩展到三层，如图1所示。Deterministic requirements are by no means limited to local Layer 2 networks. More than twenty authors from different organizations jointly wrote a deterministic networking Use Case document describing the requirements of nine major industries, including professional audio and video (pro audio & video), electrical utilities, building automation systems, wireless for industrial, cellular radio, industrial machine-to-machine communication (industrial M2M), the mining industry, and private blockchain and network slicing. At the same time, the demand scenarios may be very large in scale, involving nationwide networks, large numbers of devices, and very long distances. Based on this, Deterministic Networking (DetNet) was created. DetNet is a deterministic network architecture defined by the Internet Engineering Task Force (IETF); it mainly focuses on the determinism of Layer 3 networks and extends the capability of TSN from Layer 2 to Layer 3, as shown in Figure 1.
TSN的特点是网络规模小，流量模型相对简单，支持每流的识别，或者网络同步；因此，TSN的相关机制主要是针对小规模的网络来开发的，在大规模的网络中，很难直接应用在IP转发设备上；而且，TSN的相关机制对于确定性的流量的处理机制相对复杂。TSN is characterized by a small network scale and a relatively simple traffic model, and supports per-flow identification or network synchronization. Therefore, the related mechanisms of TSN were developed mainly for small-scale networks; in large-scale networks they are difficult to apply directly on IP forwarding equipment. Moreover, the TSN mechanisms for handling deterministic traffic are relatively complicated.
另一方面，目前，IP网络的调度机制属于加权轮询(WRR)类的，在该调度机制中，报文转发的核心思想是尽量快速地把数据包发出去，核心的指标是线速及吞吐率。这里，所述线速指的是：对于某类报文，例如128字节(bytes)的一系列报文，设备转发时能做到端口速率的进入和端口速率的发出。在目前的IP转发机制下，如图2所示，即使是较为轻载的网络，目前的IP设备(即网络节点)的调度，也会产生一定程度的微突发，即调度前是pacing好的数据，调度后报文会聚集起来。因此，在设计理念上，IP网络的调度机制与TSN和DetNet的一些要求(必须保证100%的可靠性和确定性时延)是不匹配的，从而在一些细节上很难直接应用TSN的调度机制。On the other hand, the current scheduling mechanism of IP networks is of the weighted round-robin (WRR) type. In this mechanism, the core idea of packet forwarding is to send data packets out as quickly as possible, and the core indicators are line rate and throughput. Here, line rate means that for a certain type of message, such as a series of 128-byte messages, the device can receive and send at the port rate when forwarding. Under the current IP forwarding mechanism, as shown in Figure 2, even in a relatively lightly loaded network, the scheduling of current IP devices (i.e., network nodes) produces a certain degree of micro-burst: data that was well paced before scheduling has its packets clustered together after scheduling. Therefore, in design philosophy, the scheduling mechanism of IP networks does not match some requirements of TSN and DetNet (100% reliability and deterministic delay must be guaranteed), so it is difficult to apply the TSN scheduling mechanism directly in some details.
从上面的描述可以看出,作为旧机制的IP转发的特点是统计复用,低成本,大吞吐,设计理念符合IP的突发的流量模型;作为新机制的TSN的调度特点是确定性,对于特定的流量实现类似恒定比特率(CBR)的调度效果。As can be seen from the above description, the characteristics of IP forwarding as the old mechanism are statistical multiplexing, low cost, large throughput, and the design concept conforms to the burst traffic model of IP; the scheduling characteristic of TSN as the new mechanism is determinism. For specific traffic, it achieves a scheduling effect similar to constant bit rate (CBR).
因此,要在IP网络上支持确定性传输,就要避免以下操作:Therefore, to support deterministic transmission on IP networks, the following operations must be avoided:
第一，避免要求网络转发节点(即对报文进行转发的网络节点，也可以称为P节点，也可以称为P路由器，或者中间路由器，如图3所示)，比如骨干网络中的运营商节点，逐流地去识别业务。First, avoid requiring network forwarding nodes (i.e., network nodes that forward messages, which may also be called P nodes, P routers, or intermediate routers, as shown in Figure 3), such as operator nodes in a backbone network, to identify services flow by flow.
具体地，不能让网络转发节点(一般来说，转发压力较大)做每个流(Flow)的解析，这是因为：网络转发节点一般主要负责按照包做转发，不适合支持太多的流识别的工作。Specifically, the network forwarding node (which generally bears heavy forwarding pressure) should not be made to parse each flow, because the network forwarding node is mainly responsible for forwarding packet by packet and is not suited to supporting too much flow-identification work.
第二,避免要求网络转发节点上感知/维护太多的状态。Second, avoid requiring too much state to be sensed/maintained on the network forwarding node.
具体地，网络转发节点上的流数量较多，如果单个流有变化，不建议在控制面不断更改网络转发节点的带宽预留等信息，这也是遵循了目前的IP转发路由器设计的理念：在IP网络中，存在很多的突发性的流量，这时建议即使某个/某些流接入需求有变化，网络转发节点也不必全部感知。Specifically, the number of flows on a network forwarding node is large; if a single flow changes, it is not recommended to constantly update information such as the bandwidth reservation of the network forwarding node on the control plane. This also follows the current design philosophy of IP forwarding routers: in an IP network there is a great deal of bursty traffic, so even if the access requirements of one or some flows change, the network forwarding nodes need not perceive all of these changes.
基于此,在本申请的各种实施例中,在网络转发节点上,不识别每个低时延的流,仅仅根据包的特征,整体上识别是低时延需求的流,保证低时延流量按需发送。Based on this, in various embodiments of the present application, on the network forwarding node, each low-latency flow is not identified, and only the flow with low-latency requirements is identified as a whole based on the characteristics of the packet, so as to ensure low-latency Traffic is sent on demand.
本申请实施例提供一种报文传输方法,应用于第二网络节点,如图4所示,该方法包括:The embodiment of the present application provides a message transmission method, which is applied to a second network node. As shown in FIG. 4, the method includes:
步骤401:所述第二网络节点获取第一报文;并确定第一报文对应的业务为时延敏感业务;Step 401: The second network node obtains the first message; and determines that the service corresponding to the first message is a delay-sensitive service;
步骤402:所述第二网络节点为所述第一报文设置第一标识;Step 402: The second network node sets a first identifier for the first message;
这里，所述第一标识表征所述第一报文具有时延敏感的需求。Here, the first identifier indicates that the first message has a delay-sensitive requirement.
步骤403:所述第二网络节点将设置有第一标识的第一报文设置在特定的队列中进行整形,之后发出设置有第一标识的第一报文。Step 403: The second network node sets the first message with the first identifier in a specific queue for shaping, and then sends the first message with the first identifier.
其中,所述第二网络节点为网络边缘节点,可以称为PE节点、PE路由器等,比如骨干网络中的运营商边缘节点。Wherein, the second network node is a network edge node, which may be referred to as a PE node, a PE router, etc., such as an operator edge node in a backbone network.
实际应用时，在步骤401中，当所述第一报文从某个特定的虚拟局域网(VLAN)接入或从某个特定的接口接入时，或者，当所述第一报文从某个特定的接口或者VLAN接入，并且有约定的优先级，则所述第二网络节点认为所述第一报文对应的业务为时延敏感业务，即低时延业务。当然，还可以采用其他方式来识别所述第一报文对应的业务为时延敏感业务，本申请实施例对此不作限定。In practical application, in step 401, when the first message is accessed from a specific virtual local area network (VLAN) or from a specific interface, or when the first message is accessed from a specific interface or VLAN and carries an agreed priority, the second network node considers the service corresponding to the first message to be a delay-sensitive service, that is, a low-latency service. Of course, other methods may also be used to identify the service corresponding to the first message as a delay-sensitive service, which is not limited in the embodiments of the present application.
所述第一标识表征所述第一报文具有时延敏感的需求，也可以理解为所述第一标识表征所述第一报文的类型为时延敏感类型。The first identifier indicates that the first message has a delay-sensitive requirement, which can also be understood as the first identifier indicating that the type of the first message is a delay-sensitive type.
实际应用时,所述第一标识可以是一个优先级标识,可以标识低时延流量占据一个较高的独占优先级,例如6。In practical applications, the first identifier may be a priority identifier, which may identify that low-latency traffic occupies a higher exclusive priority, such as 6.
基于此,在一实施例中,所述第一标识表征所述第一报文具有独占发送优先级。Based on this, in an embodiment, the first identifier indicates that the first packet has an exclusive sending priority.
实际应用时,所述第一标识还可以是一个分段路由(SR)的邻接SID(英文可以表达为adj SID,邻接SID的类型在SRv6中为End.X函数)。In practical applications, the first identifier may also be an adjacent SID of a segment routing (SR) (it can be expressed as adj SID in English, and the type of adjacent SID is the End.X function in SRv6).
图5a和图5b示出了两种邻接SID的格式,实际应用时,可以在算法(Algorithm)的部分,可以指明一个特定的algorithm,例如200,表征用于低时延流量传输。Figures 5a and 5b show the formats of two adjacent SIDs. In practical applications, you can specify a specific algorithm in the algorithm section, such as 200, which is used for low-latency traffic transmission.
实际应用时，所述第一标识还可以是一个SR的前缀SID(英文可以表达为Prefix SID，前缀SID的类型在SRv6中为End函数)。其中，图5a所示格式的邻接SID适用于点对点(P2P)连接场景(Type:43)，图5b所示格式的邻接SID适用于局域网(LAN)场景(Type:44)。In practical application, the first identifier may also be an SR prefix SID (expressed in English as Prefix SID; the type of the prefix SID in SRv6 is the End function). Among them, the adjacency SID in the format shown in Figure 5a is suitable for point-to-point (P2P) connection scenarios (Type: 43), and the adjacency SID in the format shown in Figure 5b is suitable for local area network (LAN) scenarios (Type: 44).
图6示出了一种前缀SID的格式，实际应用时，该End函数发布时是中间系统到中间系统(ISIS)中，位置标签长度值(Locator TLV)的一个子TLV，在Locator TLV Algorithm的部分，可以指明一个特定的algorithm，例如200，表征用于低时延流量传输。Figure 6 shows the format of a prefix SID. In actual application, when this End function is advertised, it is a sub-TLV of the Locator TLV in Intermediate System to Intermediate System (ISIS); in the Algorithm part of the Locator TLV, a specific algorithm, such as 200, can be specified to indicate use for low-latency traffic transmission.
实际应用时,在步骤403中,所述第二网络节点可以为每一个接入的时延敏感业务做单独的整形,以让报文按照一定的速率(该业务流要求的速率)逐个发出;其中,所述第二网络节点可以通过一定的实现方式得到速率,例如通过手工/网管配置方式(即静态配置的方式),或者控制面传递方式(也可以理解为控制面通告的方式),或者数据面传递方式(也可以称为数据面通告的方式)。In actual application, in step 403, the second network node may perform separate shaping for each access delay-sensitive service, so that the packets are sent out one by one at a certain rate (the rate required by the service flow); Wherein, the second network node can obtain the rate through a certain implementation method, for example, through manual/network management configuration (ie static configuration), or control plane transfer mode (which can also be understood as control plane notification), or Data plane transfer method (also called data plane notification method).
所述第二网络节点整形的方法可以采用CBS、或者ATS的LRQ、TBE等,本申请实施例对此不作限定。The method for shaping the second network node may use CBS, or LRQ, TBE of ATS, etc., which is not limited in the embodiment of the present application.
对应地,本申请实施例还提供了一种报文传输方法,应用于第一网络节点,如图7所示,包括:Correspondingly, an embodiment of the present application also provides a message transmission method, which is applied to a first network node, as shown in FIG. 7, including:
步骤701:接收第一报文;Step 701: Receive the first message;
步骤702:从第一报文中获取第一标识;Step 702: Obtain the first identifier from the first message;
这里，所述第一标识表征所述第一报文具有时延敏感的需求。Here, the first identifier indicates that the first message has a delay-sensitive requirement.
步骤703:在获取到所述第一标识的情况下,将所述第一报文设置在特定队列中;Step 703: When the first identifier is obtained, set the first message in a specific queue;
这里,所述特定队列至少用于缓存(也可以理解为放置)待发送时延敏感报文。Here, the specific queue is at least used for buffering (also can be understood as placing) delay-sensitive messages to be sent.
步骤704:利用所述特定队列报文的入速率,确定所述特定队列报文的出速率;Step 704: Use the incoming rate of packets in the specific queue to determine the outgoing rate of packets in the specific queue;
步骤705:基于确定的出速率,对所述特定队列进行整形,之后发出所 述第一报文。Step 705: Shape the specific queue based on the determined output rate, and then send the first message.
其中,所述第一网络节点为网络转发节点,可以称为P节点、P路由器等,比如骨干网络中的运营商节点。Wherein, the first network node is a network forwarding node, which may be called a P node, a P router, etc., such as an operator node in a backbone network.
实际应用时,所述第一网络节点从上一跳节点接收第一报文,所述上一跳节点可以是第二网络节点,也可以是一个网络转发节点。In an actual application, the first network node receives the first message from a previous hop node, and the previous hop node may be a second network node or a network forwarding node.
实际应用时，当所述第一标识是一个优先级标识时，针对每个出口的时延敏感报文，所述第一网络节点配置专门的队列，将所有的时延敏感报文统一进行整形后发出，即所有拥有该独占优先级的报文都会汇聚到这个队列中。In practical application, when the first identifier is a priority identifier, the first network node configures a dedicated queue for the delay-sensitive messages of each egress, and all delay-sensitive messages are uniformly shaped before being sent out; that is, all messages carrying this exclusive priority are aggregated into this queue.
当所述第一标识是一个SR的邻接SID时，第一网络节点识别当前SID是自己发布的，并且将该报文转发到预先配置好的特定队列中，将所有的时延敏感报文统一进行整形后发出，即实际应用时将所有当前SID为该邻接SID的报文都会汇聚到这个队列中。When the first identifier is an SR adjacency SID, the first network node recognizes that the current SID was advertised by itself and forwards the message to a pre-configured specific queue, where all delay-sensitive messages are uniformly shaped before being sent out; that is, in practical application, all messages whose current SID is this adjacency SID are aggregated into this queue.
基于此,在一实施例中,从所述第一报文中获取SID list;SID list包含流量工程路径对应的多个SID;Based on this, in an embodiment, the SID list is obtained from the first message; the SID list includes multiple SIDs corresponding to the traffic engineering path;
在确定当前SID指示所述第一报文对应的下一跳网络节点上的特定队列的情况下,将所述第一报文设置在所述特定队列中。When it is determined that the current SID indicates a specific queue on the next-hop network node corresponding to the first message, the first message is set in the specific queue.
也就是说,当确定当前SID指示所述第一报文对应的下一跳网络节点上的特定队列时,就确定获取到所述第一标识。That is, when it is determined that the current SID indicates a specific queue on the next-hop network node corresponding to the first packet, it is determined that the first identifier is acquired.
这里，可以理解的是：对于第二网络节点来说，所述当前SID是指：接收的所述第一报文中的目的地址(DA)所对应的SID；相应地，所述第一报文对应的下一跳网络节点是指：所述第一报文中的DA对应的网络节点，即是指：接收所述第一报文的第二网络节点。Here, it can be understood that, for the second network node, the current SID refers to the SID corresponding to the destination address (DA) in the received first message; correspondingly, the next-hop network node corresponding to the first message refers to the network node corresponding to the DA in the first message, that is, the second network node that receives the first message.
当所述第一标识是一个SR的前缀SID时，第二网络节点识别该SID不是自己发布的，按照该SID查找路由转发表，出接口为预先配置好的特定队列，从而将报文转发到特定的队列中，在这个队列中，会将所有的时延敏感报文统一进行整形后发出，即实际应用时路由表中所有出接口为该特定队列的报文都会汇聚到这个队列中。When the first identifier is an SR prefix SID, the second network node recognizes that the SID was not advertised by itself, looks up the routing forwarding table according to the SID, and the outbound interface corresponds to a pre-configured specific queue, so the message is forwarded to that specific queue. In this queue, all delay-sensitive messages are uniformly shaped before being sent out; that is, in practical application, all messages whose outbound interface in the routing table is this specific queue are aggregated into this queue.
基于此,在一实施例中,从所述第一报文中获取前缀SID;Based on this, in an embodiment, the prefix SID is obtained from the first message;
在路由转发表中查找与获取的前缀SID对应的下一跳和出接口;Look up the next hop and outbound interface corresponding to the obtained prefix SID in the routing and forwarding table;
在查找到的出接口对应所述特定队列的情况下,将所述第一报文设置在所述特定队列中。In a case where the found outgoing interface corresponds to the specific queue, the first packet is set in the specific queue.
也就是说,当确定查找到的出接口对应所述特定队列时,确定获取到所述第一标识。That is, when it is determined that the found outbound interface corresponds to the specific queue, it is determined that the first identifier is acquired.
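The prefix-SID lookup in the preceding steps can be sketched as a simple table lookup. The FIB entries, interface names, and queue names below are hypothetical illustrations, not part of this application:

```python
# Hypothetical forwarding table: prefix SID -> (next hop, outbound interface).
FIB = {
    "2001:db8:100::1": ("P2", "Iout3"),
    "2001:db8:200::1": ("P3", "Iout4"),
}

# Interfaces that have a pre-configured specific low-latency queue.
LOW_LATENCY_QUEUES = {"Iout3": "ll_queue_iout3"}

def classify(prefix_sid):
    """Return (next_hop, queue): the specific low-latency queue if the
    outbound interface has one, otherwise the default best-effort queue."""
    next_hop, out_if = FIB[prefix_sid]
    queue = LOW_LATENCY_QUEUES.get(out_if, "be_queue")
    return next_hop, queue
```

In this sketch, finding the outbound interface in `LOW_LATENCY_QUEUES` plays the role of "determining that the found outbound interface corresponds to the specific queue", i.e., of obtaining the first identifier.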
实际应用时，报文的输出速度按照"目的虚接口的包的入速率"来进行限制，这样，对于整个IP设备(即P节点)来说，每个端口按照收到的低时延流量的报文的速度(即虚接口的入速率)，发送这些低时延流量的报文，同时保证一定的缓存(buffer)深度(也可以理解为有一定的报文缓存大小)，而不是按照之前的尽快转发的机制处理(即BE转发机制)。In practical application, the output rate of messages is limited according to the "incoming packet rate of the destination virtual interface". In this way, for the whole IP device (i.e., the P node), each port sends these low-latency messages at the rate at which they were received (i.e., the incoming rate of the virtual interface) while maintaining a certain buffer depth (which can also be understood as a certain message buffer size), instead of handling them with the previous forward-as-fast-as-possible mechanism (i.e., the BE forwarding mechanism).
基于此,在一实施例中,步骤705的具体实现可以包括:Based on this, in an embodiment, the specific implementation of step 705 may include:
利用所述入速率,并结合队列深度,确定所述队列报文的出速率。Using the ingress rate in combination with the queue depth to determine the outgoing rate of packets in the queue.
其中，所述队列深度可以理解为队列中报文的个数。Here, the queue depth can be understood as the number of messages in the queue.
在一实施例中,所述利用所述入速率,并结合队列深度,确定所述特定队列报文的出速率,包括:In an embodiment, the using the inbound rate in combination with the queue depth to determine the outbound rate of packets in the specific queue includes:
当队列深度小于阈值时,确定所述出速率为第一速率;所述第一速率小于所述入速率,且所述入速率与所述第一速率的差值小于第一值;When the queue depth is less than the threshold, determine that the out rate is a first rate; the first rate is less than the in rate, and the difference between the in rate and the first rate is less than the first value;
其中,当队列深度等于阈值时,确定所述出速率为第二速率;所述第二速率等于所述入速率。Wherein, when the queue depth is equal to the threshold, it is determined that the out rate is the second rate; the second rate is equal to the in rate.
当所述入速率为零时,确定所述出速率为第三速率;所述第三速率为预设速率、或为最后记录的出速率。When the incoming rate is zero, it is determined that the outgoing rate is the third rate; the third rate is the preset rate or the last recorded out rate.
这里,实际应用时,可以设置一个用于确定报文速率的统计周期,比如,20μs,统计20μs内入所述特定队列的报文的个数,据此,确定所述入速率。Here, in actual application, a statistical period for determining the message rate can be set, for example, 20 μs, counting the number of messages entering the specific queue within 20 μs, and determining the incoming rate based on this.
实际应用时,所述第一值可以根据需要来设置,只要使得入速率略大于出速率即可。In practical applications, the first value can be set according to needs, as long as the incoming rate is slightly greater than the outgoing rate.
当队列深度等于阈值,且所述入速率过大(比如超过了入速率阈值(可以根据需要设置))时,所述出速率可以是第四速率,所述第四速率是设置的出速率阈值(可以根据需要设置,且小于所述入速率)。When the queue depth is equal to the threshold, and the inbound rate is too large (for example, exceeds the inbound rate threshold (which can be set as required)), the outbound rate may be the fourth rate, and the fourth rate is the set outbound rate threshold (It can be set as required, and is less than the input rate).
实际应用上，所述预设速率可以根据需要设置。In practice, the preset rate can be set as required.
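The out-rate rules above can be summarized in one function. This is a sketch under stated assumptions: the margin `delta`, the rate caps, and the fallback value are illustrative, and the in rate would be measured by counting packets over a statistical period such as 20 μs as described earlier.

```python
def out_rate(in_rate, depth, threshold,
             delta=0.02, in_cap=10e9, out_cap=9e9, fallback=1e6):
    """Determine the specific queue's out rate (bits/s) from its in rate and
    queue depth (packets). delta/in_cap/out_cap/fallback are assumed values."""
    if in_rate == 0:
        return fallback                  # preset rate (or last recorded out rate)
    if depth < threshold:
        return in_rate * (1 - delta)     # out rate slightly below in rate:
                                         # lets the queue build toward the threshold
    if in_rate > in_cap:
        return out_cap                   # fourth rate: clamp an excessive in rate
    return in_rate                       # depth at threshold: match the in rate
```

Draining slightly slower than the arrival rate while the queue is shallow is what keeps a certain queue depth, absorbing micro-bursts; once the target depth is reached, matching the in rate holds the depth steady.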
所述第一网络节点整形的方式可以采用CBS、或者ATS的LRQ、TBE等,本申请实施例对此不作限定。The shaping method of the first network node may adopt CBS, or LRQ, TBE of ATS, etc., which is not limited in the embodiment of the present application.
本申请实施例还提供了一种报文传输方法,如图8所示,该方法包括:The embodiment of the present application also provides a message transmission method. As shown in FIG. 8, the method includes:
步骤801:第二网络节点获取第一报文;并确定第一报文对应的业务为时延敏感业务;Step 801: The second network node obtains the first message; and determines that the service corresponding to the first message is a delay-sensitive service;
步骤802:所述第二网络节点为所述第一报文设置第一标识;所述第一标识表征所述第一报文具有时延敏感的需求;Step 802: The second network node sets a first identifier for the first message; the first identifier represents the delay-sensitive demand of the first message;
步骤803:所述第二网络节点将设置有第一标识的第一报文设置在特定的队列中进行整形,之后发出设置有第一标识的第一报文;Step 803: The second network node sets the first message with the first identifier in a specific queue for shaping, and then sends the first message with the first identifier;
步骤804:第一网络节点接收第一报文,并从接收的第一报文中获取第一标识,在获取到第一标识的情况下,将接收的第一报文设置在特定队列中;所述特定队列至少用于缓存待发送时延敏感报文;Step 804: The first network node receives the first message, and obtains the first identifier from the received first message, and when the first identifier is obtained, sets the received first message in a specific queue; The specific queue is at least used for buffering delay-sensitive messages to be sent;
步骤805:所述第一网络节点利用所述特定队列报文的入速率,确定所述特定队列报文的出速率;并基于确定的出速率,对所述特定队列进行整形,之后发出接收的第一报文。Step 805: The first network node uses the incoming rate of the specific queue message to determine the outgoing rate of the specific queue message; and based on the determined outgoing rate, the specific queue is shaped, and then the received The first message.
其中,所述第二网络节点为网络边缘节点;所述第一网络节点为网络转发节点。Wherein, the second network node is a network edge node; the first network node is a network forwarding node.
这里,需要说明的是:第一网络节点和第二网络节点的具体处理过程已在上文详述,这里不再赘述。Here, it needs to be explained that the specific processing procedures of the first network node and the second network node have been described in detail above, and will not be repeated here.
本申请实施例提供的报文传输方法，第一网络节点接收第一报文；从第一报文中获取第一标识；所述第一标识表征所述第一报文具有时延敏感的需求；在获取到所述第一标识的情况下，将所述第一报文设置在特定队列中；所述特定队列至少用于缓存待发送时延敏感报文；利用所述特定队列报文的入速率，确定所述特定队列报文的出速率；基于确定的出速率，对所述特定队列进行整形，发出所述第一报文；其中，所述第一网络节点为网络转发节点；在网络转发节点上，不识别每个低时延业务流，仅仅根据包的特征，整体上识别是低时延需求的流，并将低时延需求的报文设置在特定队列中进行整形，尽量保证特定队列中具有一定的队列深度，而不是采用BE的尽快转发机制，如此，网络转发节点能够保证低时延流量按需有序发送，且网络转发节点的处理尽量减少了网络中报文的微突发带来的丢包和缓存时延，能够满足业务的时延需求。In the message transmission method provided in the embodiments of the present application, the first network node receives a first message; obtains a first identifier from the first message, the first identifier indicating that the first message has a delay-sensitive requirement; when the first identifier is obtained, sets the first message in a specific queue, the specific queue being at least used to buffer delay-sensitive messages to be sent; uses the incoming rate of messages in the specific queue to determine the outgoing rate of messages in the specific queue; and, based on the determined outgoing rate, shapes the specific queue and sends out the first message; wherein the first network node is a network forwarding node. On the network forwarding node, each low-latency service flow is not identified individually; flows with low-latency requirements are identified as a whole only by packet characteristics, and packets with low-latency requirements are placed in a specific queue for shaping, keeping a certain queue depth in the specific queue instead of using the BE forward-as-fast-as-possible mechanism. In this way, the network forwarding node can ensure that low-latency traffic is sent on demand and in order, and the node's processing minimizes the packet loss and buffering delay caused by micro-bursts of packets in the network, so the delay requirements of the service can be met.
下面结合应用实施例对本申请再作进一步详细的描述。The application will be further described in detail below in conjunction with application examples.
应用实施例一Application Example One
在本应用实施例中,低时延业务的发送使用独占优先级。In this application embodiment, the transmission of low-latency services uses exclusive priority.
在普通的IP转发的场景中,低时延流量与BE流量使用相同的目的IP,此时需要区分低时延流量,这些低时延流量需要独享的优先级。In a common IP forwarding scenario, low-latency traffic and BE traffic use the same destination IP. At this time, low-latency traffic needs to be distinguished, and these low-latency traffic need exclusive priority.
With reference to Figure 9, the packet transmission flow of this application example includes:
Step 1: A network edge node (e.g., node PE1) identifies the traffic of packet Packet1 according to the related art, shapes the queue in which Packet1 resides, and ensures that the correct, exclusive priority is marked, for example Priority 6.

Here, traffic identification determines that packet Packet1 is low-latency traffic.

The specific identification and shaping process follows the pre-configured identification and shaping methods; shaping can be performed by CBS or ATS.

After this step is completed, the low-latency traffic enters the network together with the BE traffic.
Step 2: Packet Packet1 arrives at ingress interface Iin1 of network forwarding node P1, and P1 looks up the egress interface, Iout3, according to the DA (destination address) of Packet1.

Here, a specific queue corresponding to Priority 6 is configured on Iout3; this specific queue serves low-latency traffic and is associated with Priority 6.

Since the priority of Packet1 is Priority 6, the packet is directed to the specific queue pre-configured on egress interface Iout3 to serve low-latency traffic.

To simplify the description, assume that network forwarding node P1 has four interfaces and that each line card has only one interface; in practical applications, a line card with multiple interfaces is handled similarly.
Step 3: Packet Packet1 arrives at egress interface Iout3 of network forwarding node P1. On Iout3, all Priority 6 packets are placed in the same queue (i.e., the specific queue), whichever ingress interface (Iin1, Iin2, or Iin4) they arrived on; that is, every packet whose priority is Priority 6 is placed in this specific queue. The queue is shaped (e.g., by CBS or ATS) using the overall ingress rate as the egress rate, and the packets are sent out.

Specifically, initially or when the queue depth is low, packets may be sent with the ingress rate slightly above the egress rate; when the queue depth reaches a certain threshold, packets are sent with egress rate = ingress rate; when the ingress rate = 0, the buffered packets are sent out at a preset fixed rate or at the last recorded rate.

An appropriate measurement period for the packet rate can be set, for example one measurement every 20 μs.

When the measured ingress rate is excessive, a threshold can be assigned as the egress rate instead of forwarding at the ingress rate; in this case the depth of the specific queue grows. Generally, in practice, this situation does not last long, because:

First, rate limiting is performed at the network ingress;

Second, micro-bursts of low-latency traffic in the network have already been reduced;

Third, low-latency traffic at this point accounts for only a small proportion of the overall network traffic, which is mainly BE traffic.

In other words, "ingress rate = egress rate" here is only a principle; a concrete implementation may adjust internal parameters according to network and traffic conditions.
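The egress-rate selection described above can be sketched as follows. This is an illustrative sketch only, not the claimed implementation: the parameter names (measurement window, depth threshold, rate cap, fallback rate, back-off factor) and their defaults are assumptions chosen for clarity.

```python
def egress_rate(in_rate_bps, queue_depth, depth_threshold,
                rate_cap_bps=None, fallback_bps=1_000_000,
                last_rate_bps=None, backoff=0.95):
    """Pick the shaping (egress) rate for the low-latency specific queue.

    in_rate_bps     -- ingress rate measured over the last window (e.g. 20 us)
    queue_depth     -- current depth of the specific queue
    depth_threshold -- target depth at which egress rate tracks ingress rate
    rate_cap_bps    -- optional cap applied when the measured ingress rate spikes
    fallback_bps    -- preset rate used when the ingress rate is zero
    last_rate_bps   -- last recorded egress rate, also usable when ingress is zero
    backoff         -- factor making the egress rate slightly below the ingress rate
    """
    if in_rate_bps == 0:
        # Ingress rate is zero: drain the buffer at a preset rate
        # or at the last recorded rate.
        return last_rate_bps if last_rate_bps is not None else fallback_bps
    if rate_cap_bps is not None and in_rate_bps > rate_cap_bps:
        # Measured ingress rate is excessive: clamp to the cap;
        # the queue depth grows temporarily.
        return rate_cap_bps
    if queue_depth < depth_threshold:
        # Low depth: send slightly slower than the ingress rate,
        # so that a working queue depth builds up.
        return in_rate_bps * backoff
    # Depth has reached the threshold: egress rate equals ingress rate.
    return in_rate_bps
```

A shaper would re-evaluate this function once per measurement window and feed the result to its pacing mechanism (e.g., CBS or ATS).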
For Iout3, the specific queue holding all Priority 6 packets is scheduled together with the other queues (i.e., the queues of BE traffic; for example, Priority 0 traffic is BE traffic) according to the relevant mechanism and sent out from Iout3; for instance, since low-latency traffic has a higher priority, the specific queue is served as soon as it holds a packet.
Application Example Two

In this application example, low-latency services use a dedicated adjacency SID.

In an SR forwarding scenario, low-latency traffic and BE traffic can use different adjacency SIDs, so the two kinds of traffic can be distinguished directly by SID; setting an exclusive priority is then unnecessary, but a separate queue and a relatively high priority (to guarantee preferential forwarding) are still required.
With reference to Figure 9, the packet transmission flow of this application example includes:
Step 1: A network edge node (e.g., node PE1) identifies the traffic of packet Packet1 according to the related art, shapes the queue in which Packet1 resides, and ensures that the correct SID list is marked (these SIDs form a strict traffic engineering (TE) path and point to a specific queue resource on each node).

Here, traffic identification determines that packet Packet1 is low-latency traffic.

The specific identification and shaping process follows the pre-configured identification and shaping methods; shaping can be performed by CBS or ATS.

After this step is completed, the low-latency traffic enters the network together with the BE traffic.
Step 2: Packet Packet1 arrives at ingress interface Iin1 of network forwarding node P1 with current SID SID13; the forwarding device looks up SID13 and finds the specific queue Queue3 on egress interface Iout3.

To simplify the description, assume that network forwarding node P1 has four interfaces and that each line card has only one interface; in practical applications, a line card with multiple interfaces is handled similarly.

It should be noted that packets matching SID13 on the line cards of ingress interfaces Iin2 and Iin4 are also sent to Iout3 (SID13 is an adjacency SID pointing to the specific queue Queue3 of Iout3); that is, all packets of low-latency traffic are placed in the specific queue Queue3.
Step 3: Packet Packet1 arrives at egress interface Iout3 of network forwarding node P1; the specific queue Queue3 of Iout3 is shaped (e.g., by CBS or ATS) using the overall queue ingress rate as the egress rate, and the packets are sent out.

The specific shaping process using the overall queue ingress rate as the egress rate is the same as in Application Example One and is not repeated here.

For Iout3, the specific queue Queue3 holding all SID13 packets is scheduled together with the other queues according to the existing mechanism and sent out from Iout3.

In practical applications, these dedicated adjacency SIDs can be advertised into the network for network programming, so as to meet the demands of low-latency services.
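The lookup in Step 2 can be sketched as follows: the forwarding node resolves the packet's active adjacency SID to an (egress interface, queue) pair, so that the dedicated low-latency SID lands in the specific queue regardless of ingress interface. The table contents mirror the example (SID13 → Iout3/Queue3); the table itself and the name `ADJ_SID_TABLE` are hypothetical illustrations, not a real device API.

```python
# Hypothetical adjacency-SID table on forwarding node P1: the dedicated
# low-latency adjacency SID points not just to an egress interface but to
# the specific low-latency queue on that interface.
ADJ_SID_TABLE = {
    "SID13": ("Iout3", "Queue3"),  # dedicated low-latency adjacency SID
    "SID12": ("Iout2", "BE"),      # ordinary adjacency SID -> best-effort queue
}

def forward_by_adj_sid(sid_list, segments_left):
    """Resolve the active SID of an SR packet to (egress interface, queue).

    sid_list      -- the SID list carried in the packet header
    segments_left -- index of the active SID in the list
    """
    active_sid = sid_list[segments_left]
    try:
        return ADJ_SID_TABLE[active_sid]
    except KeyError:
        raise ValueError(f"no adjacency configured for {active_sid}")
```

Whichever ingress line card performs this lookup, every packet whose active SID is SID13 resolves to the same specific queue Queue3 on Iout3.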
Application Example Three

In this application example, low-latency services use a dedicated prefix SID.

In an SR forwarding scenario, low-latency traffic and BE traffic can use different prefix SIDs, so the two kinds of traffic can be distinguished directly by SID; setting an exclusive priority is then unnecessary, but a separate queue and a relatively high priority (to guarantee preferential forwarding) are still required.
With reference to Figure 9, the packet transmission flow of this application example includes:
Step 1: A network edge node (e.g., node PE1) identifies the traffic of packet Packet1 according to the related art, shapes the queue in which Packet1 resides, and ensures that the correct prefix SID is marked. (In an SRv6 scenario, the locator corresponding to this SID carries a related identifier, for example a Flex Algo ID, and the egress interface of its forwarding entry points to a specific queue resource on each node; the corresponding locator must be advertised in advance, and each node generates the related forwarding entry. The locator is the address part of an SRv6 SID and is used to route to the node that advertised the SID.)

Here, traffic identification determines that packet Packet1 is low-latency traffic.

The difference from Application Example Two is that an adjacency SID generally must be specified hop by hop, forming a label stack (SR-TE), whereas here only one SID (SR-BE) is needed, which is a global label (for example, corresponding to a certain PE node).
Step 2: Packet Packet1 arrives at ingress interface Iin1 of network forwarding node P1; the forwarding table is looked up according to SID9, and the egress is the specific queue Queue3 of interface Iout3.

To simplify the description, assume that network forwarding node P1 has four interfaces and that each line card has only one interface; in practical applications, a line card with multiple interfaces is handled similarly.

It should be noted that other prefix SIDs on the line card of ingress interface Iin1 may also map to queue Queue3 on the egress interface; such a prefix SID is similar to SID9 and represents low-latency traffic.

Ingress interfaces Iin2 and Iin4 may also receive packets carrying SID9, or packets carrying other such prefix SIDs, for which the queue on the egress interface is likewise Queue3.

In other words, all packets of low-latency traffic are placed in the specific queue Queue3.
Step 3: Packet Packet1 arrives at egress interface Iout3 of forwarding node P1; the specific queue Queue3 of Iout3 is shaped (e.g., by CBS or ATS) using the overall queue ingress rate as the egress rate, and the packets are sent out.

The specific shaping process using the overall queue ingress rate as the egress rate is the same as in Application Example One and is not repeated here.

For Iout3, Queue3 is scheduled together with the other queues according to the existing mechanism and sent out from Iout3.

In practical applications, these dedicated prefix SIDs can be advertised into the network for network programming, so as to meet the demands of low-latency services.

In addition, each port of each node can be configured with a specific queue that serves as the interface for these dedicated prefix SIDs, supporting the forwarding of low-latency traffic.
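The prefix-SID case differs from the adjacency-SID case in that several global prefix SIDs, all marked as low-latency via their locator, can resolve through the ordinary forwarding table to the same specific queue on the egress interface. A minimal sketch, where the table `FIB` and the entry contents (SID9, SID10, next hops) are hypothetical examples consistent with Steps 2 and 3 above:

```python
# Hypothetical forwarding entries on node P1: prefix SIDs whose locator is
# advertised with the low-latency identifier (e.g. a Flex Algo ID) all resolve
# to the same specific queue Queue3 on egress interface Iout3, regardless of
# ingress interface; other prefixes resolve to best-effort queues.
FIB = {
    "SID9":  {"next_hop": "P2", "out_if": "Iout3", "queue": "Queue3"},
    "SID10": {"next_hop": "P3", "out_if": "Iout3", "queue": "Queue3"},
    "SID2":  {"next_hop": "P2", "out_if": "Iout2", "queue": "BE"},
}

def classify_prefix_sid(prefix_sid):
    """Return (egress interface, queue) for a packet carrying this prefix SID."""
    entry = FIB[prefix_sid]
    return entry["out_if"], entry["queue"]
```

With entries like these, low-latency packets from any ingress line card converge into Queue3 and are then shaped as one aggregate, as described in Step 3.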
As can be seen from the above description, the embodiments of this application construct a mechanism that guarantees low-latency transmission in a large layer-3 IP network, which specifically includes:

the network edge nodes perform traffic identification and rate limiting; and

each port of a network forwarding node monitors the rate of the received low-latency traffic, and shapes and forwards the low-latency packets according to the monitored rate.

As shown in Figure 10, on a network forwarding node, a specific virtual interface is carved out of each physical egress interface, corresponding to a specific queue provided for all low-latency traffic, and shaping is performed according to the aggregated low-latency traffic. In this way, the network forwarding node can ensure that low-latency traffic is sent on demand; its processing minimizes packet micro-bursts and maintains a suitable buffer depth for the low-latency flows.
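The per-port behavior summarized above — measure the aggregated low-latency ingress rate over a fixed window, then pace the specific queue at that rate — can be sketched as follows. The window size and the inter-packet-gap style of pacing are illustrative assumptions, not the CBS/ATS algorithms named in the text.

```python
class LowLatencyShaper:
    """Pace the aggregated low-latency queue of one egress port.

    Ingress bytes are counted per measurement window; the rate measured in
    one window becomes the pacing (egress) rate for the next window, so
    packets leave as a smooth stream instead of micro-bursts.
    """

    def __init__(self, window_s=20e-6):
        self.window_s = window_s       # measurement window, e.g. 20 us
        self.bytes_in_window = 0
        self.rate_bps = 0.0            # paced egress rate, bits per second

    def on_enqueue(self, pkt_len_bytes):
        # Called for every low-latency packet placed in the specific queue,
        # whichever ingress interface it arrived on.
        self.bytes_in_window += pkt_len_bytes

    def on_window_end(self):
        # The ingress rate measured over the window becomes the pacing rate.
        self.rate_bps = self.bytes_in_window * 8 / self.window_s
        self.bytes_in_window = 0

    def tx_gap_s(self, pkt_len_bytes):
        # Inter-packet gap that paces transmission at the measured rate.
        if self.rate_bps == 0:
            return 0.0
        return pkt_len_bytes * 8 / self.rate_bps
```

The pacing gap is what prevents the forward-as-fast-as-possible behavior of BE queues: each packet's departure is spaced so the aggregate leaves at the measured rate.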
In the above mechanism, an identification mechanism is used to recognize low-latency traffic, specifically including:

In a first manner, an exclusive IP/Multi-Protocol Label Switching (MPLS) priority is used to represent low-latency traffic, which is suitable for IP/MPLS networks.

In a second manner, a specific SID is used to represent low-latency traffic, which is suitable for SR networks. The specific SID may be an SR-TE adjacency SID or an SR-BE node SID (i.e., a prefix SID).

The solution of the embodiments of this application adopts a relatively simple mechanism (1. marking and collective identification of low-latency traffic; 2. special shaping and forwarding of low-latency traffic) to reduce the formation of micro-bursts in IP forwarding (packets do not form bursts by being forwarded as fast as possible) and to guarantee the fast, orderly forwarding of low-latency traffic in the IP network (packets are forwarded in a pacing mode as far as possible). In a large network where the main delay is fiber propagation, as long as the network forwards packets in an orderly manner, so that packets do not cluster together and form micro-bursts that cause packet loss, services with lower delay can be provided, without introducing overly complex flow identification or excessive state control.
To implement the method of the embodiments of this application, an embodiment of this application further provides a packet transmission apparatus, arranged on the first network node. As shown in Figure 11, the apparatus includes:

a receiving unit 111, configured to receive a first packet;

a first obtaining unit 112, configured to obtain a first identifier from the first packet, where the first identifier indicates that the first packet has a delay-sensitive requirement; and

a first processing unit 113, configured to: when the first identifier is obtained, place the first packet in a specific queue, where the specific queue is used at least to buffer delay-sensitive packets awaiting transmission; determine the egress rate of packets in the specific queue using the ingress rate of packets in the specific queue; and, based on the determined egress rate, shape the specific queue and send out the first packet; where

the first network node is a network forwarding node.
In an embodiment, the first obtaining unit 112 is configured to obtain a SID list from the first packet, where the SID list contains multiple SIDs corresponding to a traffic engineering path;

correspondingly, the first processing unit 113 is configured to place the first packet in the specific queue when it is determined that the current SID indicates the specific queue on the next-hop network node corresponding to the first packet.

In an embodiment, the first obtaining unit 112 is configured to:

obtain a prefix SID from the first packet; and

look up, in the routing and forwarding table, the next hop and egress interface corresponding to the obtained prefix SID;

correspondingly, the first processing unit 113 is configured to place the first packet in the specific queue when the found egress interface corresponds to the specific queue.
In an embodiment, the first processing unit 113 is configured to:

determine the egress rate of packets in the specific queue using the ingress rate in combination with the queue depth.

In an embodiment, determining the egress rate of packets in the specific queue using the ingress rate in combination with the queue depth includes:

when the queue depth is less than a threshold, the first processing unit 113 determines that the egress rate is a first rate, where the first rate is less than the ingress rate and the difference between the ingress rate and the first rate is less than a first value;

or,

when the queue depth is equal to the threshold, the first processing unit 113 determines that the egress rate is a second rate, where the second rate is equal to the ingress rate;

or,

when the ingress rate is zero, the first processing unit 113 determines that the egress rate is a third rate, where the third rate is a preset rate or the last recorded egress rate.
In practical applications, the receiving unit 111 may be implemented by a communication interface in the packet transmission apparatus, and the first obtaining unit 112 and the first processing unit 113 may be implemented by a processor in the packet transmission apparatus.
To implement the method on the second-network-node side of the embodiments of this application, an embodiment of this application further provides a packet transmission apparatus, arranged on the second network node. As shown in Figure 12, the apparatus includes:

a second obtaining unit 121, configured to obtain a first packet and determine that the service corresponding to the first packet is a delay-sensitive service; and

a second processing unit 122, configured to set a first identifier for the first packet, where the first identifier indicates that the first packet has a delay-sensitive requirement; place the first packet, with the first identifier set, in a specific queue for shaping; and then send out the first packet with the first identifier set.

In practical applications, the second obtaining unit 121 may be implemented by a processor in the packet transmission apparatus in combination with a communication interface, and the second processing unit 122 may be implemented by a processor in the packet transmission apparatus.

It should be noted that when the packet transmission apparatus provided in the above embodiment transmits packets, the division into the above program modules is described only as an example; in practical applications, the above processing may be assigned to different program modules as needed, that is, the internal structure of the apparatus may be divided into different program modules to complete all or part of the processing described above. In addition, the packet transmission apparatus provided in the above embodiment belongs to the same concept as the packet transmission method embodiments; for its specific implementation process, see the method embodiments, which are not repeated here.
Based on the hardware implementation of the above program modules, and to implement the method on the first-network-node side of the embodiments of this application, an embodiment of this application further provides a network node. As shown in Figure 13, the network node 130 includes:

a first communication interface 131, capable of exchanging information with other network nodes; and

a first processor 132, connected to the first communication interface 131 to exchange information with other network nodes and configured, when running a computer program, to perform the method provided by one or more technical solutions on the first-network-node side, the computer program being stored in a first memory 133.

Specifically, the first communication interface 131 is configured to receive a first packet;

the first processor 132 is configured to: obtain a first identifier from the first packet, where the first identifier indicates that the first packet has a delay-sensitive requirement; when the first identifier is obtained, place the first packet in a specific queue, where the specific queue is used at least to buffer delay-sensitive packets awaiting transmission; determine the egress rate of packets in the specific queue using the ingress rate of packets in the specific queue; and, based on the determined egress rate, shape the specific queue and send out the first packet through the first communication interface; where

the network node is a network forwarding node.
In an embodiment, the first processor 132 is configured to:

obtain a SID list from the first packet, where the SID list contains multiple SIDs corresponding to a traffic engineering path; and

place the first packet in the specific queue when it is determined that the current SID indicates the specific queue on the next-hop network node corresponding to the first packet.

In an embodiment, the first processor 132 is configured to:

obtain a prefix SID from the first packet;

look up, in the routing and forwarding table, the next hop and egress interface corresponding to the obtained prefix SID; and

place the first packet in the specific queue when the found egress interface corresponds to the specific queue.

In an embodiment, the first processor 132 is configured to:

determine the egress rate of packets in the specific queue using the ingress rate in combination with the queue depth.

In an embodiment, determining the egress rate of packets in the specific queue using the ingress rate in combination with the queue depth includes:

when the queue depth is less than a threshold, the first processor 132 determines that the egress rate is a first rate, where the first rate is less than the ingress rate and the difference between the ingress rate and the first rate is less than a first value;

or,

when the queue depth is equal to the threshold, the first processor 132 determines that the egress rate is a second rate, where the second rate is equal to the ingress rate;

or,

when the ingress rate is zero, the first processor 132 determines that the egress rate is a third rate, where the third rate is a preset rate or the last recorded egress rate.
It should be noted that the specific processing of the first processor 132 can be understood with reference to the foregoing method.

Of course, in practical applications, the components of the network node 130 are coupled together through a bus system 134. It can be understood that the bus system 134 is configured to implement connection and communication between these components. In addition to a data bus, the bus system 134 includes a power bus, a control bus, and a status signal bus. For clarity of description, however, the various buses are all labeled as the bus system 134 in Figure 13.

The first memory 133 in the embodiments of this application is configured to store various types of data to support the operation of the network node 130. Examples of such data include any computer program used to operate on the network node 130.

The method disclosed in the above embodiments of this application may be applied to, or implemented by, the first processor 132. The first processor 132 may be an integrated circuit chip with signal processing capability. During implementation, the steps of the above method may be completed by an integrated logic circuit in hardware in the first processor 132 or by instructions in the form of software. The first processor 132 may be a general-purpose processor, a digital signal processor (DSP), another programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. The first processor 132 may implement or perform the methods, steps, and logical block diagrams disclosed in the embodiments of this application. The general-purpose processor may be a microprocessor or any conventional processor. The steps of the method disclosed in the embodiments of this application may be directly embodied as being completed by a hardware decoding processor, or by a combination of hardware and software modules in a decoding processor. The software module may be located in a storage medium, the storage medium being located in the first memory 133; the first processor 132 reads the information in the first memory 133 and completes the steps of the foregoing method in combination with its hardware.

In an exemplary embodiment, the network node 130 may be implemented by one or more application-specific integrated circuits (ASICs), DSPs, programmable logic devices (PLDs), complex programmable logic devices (CPLDs), field-programmable gate arrays (FPGAs), general-purpose processors, controllers, microcontroller units (MCUs), microprocessors, or other electronic components, for performing the foregoing method.
基于上述程序模块的硬件实现,且为了实现本申请实施例第二网络节点侧的方法,本申请实施例还提供了一种网络节点,如图14所示,该网络节点140包括:Based on the hardware implementation of the above program modules, and in order to implement the method on the second network node side of the embodiment of the present application, the embodiment of the present application also provides a network node. As shown in FIG. 14, the network node 140 includes:
第二通信接口141,能够与其他网络节点进行信息交互;The second communication interface 141 can exchange information with other network nodes;
第二处理器142,与所述第二通信接口141连接,以实现与其他网络节点进行信息交互,配置为运行计算机程序时,执行上述第二网络节点侧一个或多个技术方案提供的方法。而所述计算机程序存储在第二存储器143上。The second processor 142 is connected to the second communication interface 141 to implement information interaction with other network nodes, and is configured to execute the method provided by one or more technical solutions on the second network node side when it is configured to run a computer program. The computer program is stored in the second storage 143.
具体地,所述第二通信接口141,配置为获取第一报文;Specifically, the second communication interface 141 is configured to obtain the first message;
所述第二处理器142,配置为:The second processor 142 is configured to:
确定第一报文对应的业务为时延敏感业务;并将设置有第一标识的第一报文设置在特定的队列中进行整形,之后通过所述第二通信接口141发出设置有第一标识的第一报文。It is determined that the service corresponding to the first message is a delay-sensitive service; and the first message with the first identifier is set in a specific queue for shaping, and then sent through the second communication interface 141 with the first identifier The first message.
It should be noted that the specific processing procedures of the second processor 142 and the second communication interface 141 can be understood with reference to the foregoing method.
Of course, in practical applications, the components of the network node 140 are coupled together through a bus system 144. It can be understood that the bus system 144 is configured to implement connection and communication between these components. In addition to a data bus, the bus system 144 also includes a power bus, a control bus, and a status signal bus. For clarity of description, however, the various buses are all labeled as the bus system 144 in FIG. 14.
The second memory 143 in the embodiments of the present application is configured to store various types of data to support the operation of the network node 140. Examples of such data include any computer program for operating on the network node 140.
The method disclosed in the foregoing embodiments of the present application may be applied to, or implemented by, the second processor 142. The second processor 142 may be an integrated circuit chip with signal processing capability. During implementation, the steps of the foregoing method may be completed by integrated logic circuits of hardware in the second processor 142 or by instructions in the form of software. The second processor 142 may be a general-purpose processor, a DSP, another programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. The second processor 142 may implement or execute the methods, steps, and logical block diagrams disclosed in the embodiments of the present application. The general-purpose processor may be a microprocessor, any conventional processor, or the like. The steps of the method disclosed in the embodiments of the present application may be directly embodied as being executed and completed by a hardware decoding processor, or by a combination of hardware and software modules in a decoding processor. The software module may be located in a storage medium, the storage medium being located in the second memory 143; the second processor 142 reads the information in the second memory 143 and completes the steps of the foregoing method in combination with its hardware.
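As an illustration only, the edge-node behaviour described above (mark a delay-sensitive packet with a first identifier, buffer it in a specific queue, then emit it during shaping) can be sketched as follows. This is not the patented implementation; the field name `first_identifier` and the dictionary-based packet representation are assumptions made for the sketch, and rate control is omitted.

```python
from collections import deque

FIRST_IDENTIFIER = "first_identifier"  # assumed field name, for illustration only

class EdgeNode:
    """Minimal sketch of the second (edge) network node's queueing behaviour."""

    def __init__(self):
        self.specific_queue = deque()  # buffers delay-sensitive packets awaiting shaping

    def handle(self, packet, delay_sensitive):
        # Only packets belonging to a delay-sensitive service are marked
        # with the first identifier and placed in the specific queue.
        if delay_sensitive:
            packet[FIRST_IDENTIFIER] = True
            self.specific_queue.append(packet)
        return packet

    def shape_and_send(self):
        # Emit queued packets in FIFO order (the shaping rate logic is omitted here).
        sent = []
        while self.specific_queue:
            sent.append(self.specific_queue.popleft())
        return sent
```

In this sketch, packets of non-delay-sensitive services bypass the specific queue entirely, which mirrors the separation the description draws between the two traffic classes.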
In an exemplary embodiment, the network node 140 may be implemented by one or more ASICs, DSPs, PLDs, CPLDs, FPGAs, general-purpose processors, controllers, MCUs, microprocessors, or other electronic components, configured to perform the foregoing method.
It can be understood that the memories (the first memory 133 and the second memory 143) in the embodiments of the present application may be volatile or non-volatile memories, or may include both volatile and non-volatile memories. The non-volatile memory may be a Read-Only Memory (ROM), a Programmable Read-Only Memory (PROM), an Erasable Programmable Read-Only Memory (EPROM), an Electrically Erasable Programmable Read-Only Memory (EEPROM), a ferromagnetic random access memory (FRAM), a Flash Memory, a magnetic surface memory, an optical disc, or a Compact Disc Read-Only Memory (CD-ROM); the magnetic surface memory may be a magnetic disk memory or a magnetic tape memory. The volatile memory may be a Random Access Memory (RAM), which serves as an external cache.
By way of exemplary but not restrictive description, many forms of RAM are available, such as Static Random Access Memory (SRAM), Synchronous Static Random Access Memory (SSRAM), Dynamic Random Access Memory (DRAM), Synchronous Dynamic Random Access Memory (SDRAM), Double Data Rate Synchronous Dynamic Random Access Memory (DDRSDRAM), Enhanced Synchronous Dynamic Random Access Memory (ESDRAM), SyncLink Dynamic Random Access Memory (SLDRAM), and Direct Rambus Random Access Memory (DRRAM). The memories described in the embodiments of the present application are intended to include, but are not limited to, these and any other suitable types of memory.
In order to implement the method of the embodiments of the present application, an embodiment of the present application further provides a packet transmission system, the system including a plurality of first network nodes and a second network node.
It should be noted that the specific processing procedures of the first network node and the second network node have been described in detail above and will not be repeated here.
In an exemplary embodiment, an embodiment of the present application further provides a storage medium, that is, a computer storage medium, specifically a computer-readable storage medium, for example, the first memory 133 storing a computer program, where the computer program is executable by the first processor 132 of the network node 130 to complete the steps of the foregoing first-network-node-side method. As another example, the second memory 143 storing a computer program, where the computer program is executable by the second processor 142 of the network node 140 to complete the steps of the foregoing second-network-node-side method. The computer-readable storage medium may be a memory such as an FRAM, ROM, PROM, EPROM, EEPROM, Flash Memory, magnetic surface memory, optical disc, or CD-ROM.
It should be noted that the terms "first", "second", and the like are used to distinguish similar objects and are not necessarily used to describe a specific order or sequence.
In addition, the technical solutions described in the embodiments of the present application may be combined arbitrarily, provided that they do not conflict.
The above are only preferred embodiments of the present application and are not intended to limit the protection scope of the present application.

Claims (11)

  1. A packet transmission method, applied to a first network node, comprising:
    receiving a first packet;
    obtaining a first identifier from the first packet, the first identifier indicating that the first packet has a delay-sensitive requirement;
    in a case that the first identifier is obtained, setting the first packet in a specific queue, the specific queue being at least used to buffer delay-sensitive packets to be sent;
    determining an outgoing rate of packets of the specific queue by using an incoming rate of packets of the specific queue; and
    shaping the specific queue based on the determined outgoing rate, and sending the first packet; wherein
    the first network node is a network forwarding node.
  2. The method according to claim 1, wherein the first identifier indicates that the first packet has an exclusive sending priority.
  3. The method according to claim 1, wherein the obtaining a first identifier from the first packet comprises:
    obtaining a segment identifier list (SID list) from the first packet, the SID list containing a plurality of SIDs corresponding to a traffic engineering path; and
    in a case that it is determined that a current SID indicates the specific queue on a next-hop network node corresponding to the first packet, setting the first packet in the specific queue.
  4. The method according to claim 1, wherein the obtaining a first identifier from the first packet comprises:
    obtaining a prefix SID from the first packet;
    looking up, in a routing and forwarding table, an outgoing interface corresponding to the obtained prefix SID; and
    in a case that the found outgoing interface corresponds to the specific queue, setting the first packet in the specific queue.
  5. The method according to any one of claims 1 to 4, wherein the determining an outgoing rate of packets of the specific queue by using an incoming rate of packets of the specific queue comprises:
    determining the outgoing rate of packets of the specific queue by using the incoming rate in combination with a queue depth.
  6. The method according to claim 5, wherein the determining the outgoing rate of packets of the specific queue by using the incoming rate in combination with a queue depth comprises:
    when the queue depth is less than a threshold, determining that the outgoing rate is a first rate, the first rate being less than the incoming rate, and a difference between the incoming rate and the first rate being less than a first value;
    or,
    when the queue depth is equal to the threshold, determining that the outgoing rate is a second rate, the second rate being equal to the incoming rate;
    or,
    when the incoming rate is zero, determining that the outgoing rate is a third rate, the third rate being a preset rate or a last recorded outgoing rate.
  7. A packet transmission method, comprising:
    obtaining, by a second network node, a first packet, and determining that a service corresponding to the first packet is a delay-sensitive service;
    setting, by the second network node, a first identifier for the first packet, the first identifier indicating that the first packet has a delay-sensitive requirement;
    setting, by the second network node, the first packet provided with the first identifier in a specific queue for shaping, and then sending the first packet provided with the first identifier;
    receiving, by a first network node, the first packet, and obtaining the first identifier from the received first packet;
    in a case that the first identifier is obtained, setting, by the first network node, the received first packet in a specific queue, the specific queue being at least used to buffer delay-sensitive packets to be sent; and
    determining, by the first network node, an outgoing rate of packets of the specific queue by using an incoming rate of packets of the specific queue, shaping the specific queue based on the determined outgoing rate, and sending the received first packet; wherein
    the second network node is a network edge node, and the first network node is a network forwarding node.
  8. A packet transmission device, arranged on a first network node, comprising:
    a receiving unit, configured to receive a first packet;
    a first obtaining unit, configured to obtain a first identifier from the first packet, the first identifier indicating that the first packet has a delay-sensitive requirement; and
    a first processing unit, configured to: in a case that the first identifier is obtained, set the first packet in a specific queue, the specific queue being at least used to buffer delay-sensitive packets to be sent; determine an outgoing rate of packets of the specific queue by using an incoming rate of packets of the specific queue; and shape the specific queue based on the determined outgoing rate and send the first packet; wherein
    the first network node is a network forwarding node.
  9. A network node, comprising a first communication interface and a first processor, wherein:
    the first communication interface is configured to receive a first packet; and
    the first processor is configured to: obtain a first identifier from the first packet, the first identifier indicating that the first packet has a delay-sensitive requirement; in a case that the first identifier is obtained, set the first packet in a specific queue, the specific queue being at least used to buffer delay-sensitive packets to be sent; determine an outgoing rate of packets of the specific queue by using an incoming rate of packets of the specific queue; and shape the specific queue based on the determined outgoing rate and send the first packet through the first communication interface; wherein
    the network node is a network forwarding node.
  10. A network node, comprising a first processor and a first memory configured to store a computer program executable on the processor,
    wherein the first processor is configured to perform the steps of the method according to any one of claims 1 to 6 when running the computer program.
  11. A storage medium having a computer program stored thereon, wherein the computer program, when executed by a processor, implements the steps of the method according to any one of claims 1 to 6.
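As an informal illustration only, the rate-selection rule of claim 6 can be sketched as follows. The function and variable names are assumptions introduced for the sketch, and the particular first rate chosen within the permitted range (any rate below the incoming rate but within the first value of it) is arbitrary; the claim does not prescribe one.

```python
def determine_out_rate(in_rate, queue_depth, threshold,
                       first_value, preset_rate, last_out_rate=None):
    """Sketch of the out-rate rule of claim 6 (names are illustrative).

    in_rate      -- measured incoming rate of the specific queue
    queue_depth  -- current depth of the specific queue
    threshold    -- queue-depth threshold
    first_value  -- upper bound on (in_rate - first rate)
    preset_rate  -- fallback third rate when no arrivals occur
    last_out_rate -- optionally, the last recorded outgoing rate
    """
    if in_rate == 0:
        # Third rate: a preset rate, or the last recorded outgoing rate.
        return last_out_rate if last_out_rate is not None else preset_rate
    if queue_depth < threshold:
        # First rate: below in_rate, with in_rate - first_rate < first_value.
        # Picking the midpoint of the permitted range is one arbitrary choice.
        return in_rate - first_value / 2
    # Second rate: queue depth has reached the threshold, match arrivals.
    return in_rate
```

Draining slightly slower than the arrival rate while the queue is shallow lets the specific queue build up to the threshold, after which the outgoing rate tracks the incoming rate, which is consistent with the shaping behaviour the claims describe.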
PCT/CN2021/079756 2020-03-09 2021-03-09 Packet transmission method and device, network node, and storage medium WO2021180073A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202010157089.8A CN113382442B (en) 2020-03-09 2020-03-09 Message transmission method, device, network node and storage medium
CN202010157089.8 2020-03-09

Publications (1)

Publication Number Publication Date
WO2021180073A1 true WO2021180073A1 (en) 2021-09-16

Family

ID=77568384

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/079756 WO2021180073A1 (en) 2020-03-09 2021-03-09 Packet transmission method and device, network node, and storage medium

Country Status (2)

Country Link
CN (1) CN113382442B (en)
WO (1) WO2021180073A1 (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114866453A (en) * 2022-05-18 2022-08-05 中电信数智科技有限公司 Message forwarding method and system based on G-SRv6 protocol
TWI783827B (en) * 2021-12-15 2022-11-11 瑞昱半導體股份有限公司 Wifi device
WO2023130743A1 (en) * 2022-01-10 2023-07-13 中兴通讯股份有限公司 Path calculation method, node, storage medium and computer program product
WO2023155802A1 (en) * 2022-02-15 2023-08-24 大唐移动通信设备有限公司 Data scheduling method, apparatus, device, and storage medium
WO2024016327A1 (en) * 2022-07-22 2024-01-25 新华三技术有限公司 Packet transmission

Families Citing this family (8)

Publication number Priority date Publication date Assignee Title
CN115941484A (en) * 2021-09-30 2023-04-07 中兴通讯股份有限公司 Network architecture, network communication method, electronic device and storage medium
CN118140521A (en) * 2021-10-22 2024-06-04 上海诺基亚贝尔股份有限公司 RAN enhancement taking into account CBS behavior in TSCs
CN116264567A (en) * 2021-12-14 2023-06-16 中兴通讯股份有限公司 Message scheduling method, network equipment and computer readable storage medium
CN114257559B (en) * 2021-12-20 2023-08-18 锐捷网络股份有限公司 Data message forwarding method and device
JP2024519555A (en) * 2021-12-29 2024-05-16 新華三技術有限公司 Packet transmission method and network device
CN114726805B (en) * 2022-03-28 2023-11-03 新华三技术有限公司 Message processing method and device
CN117897936A (en) * 2022-08-16 2024-04-16 新华三技术有限公司 Message forwarding method and device
CN115086238B (en) * 2022-08-23 2022-11-22 中国人民解放军国防科技大学 TSN network port output scheduling device

Citations (3)

Publication number Priority date Publication date Assignee Title
US6760309B1 (en) * 2000-03-28 2004-07-06 3Com Corporation Method of dynamic prioritization of time sensitive packets over a packet based network
CN103716255A (en) * 2012-09-29 2014-04-09 华为技术有限公司 Message processing method and device
CN110290072A (en) * 2018-03-19 2019-09-27 华为技术有限公司 Flow control methods, device, the network equipment and storage medium

Family Cites Families (2)

Publication number Priority date Publication date Assignee Title
CN109981457B (en) * 2017-12-27 2021-09-07 华为技术有限公司 Message processing method, network node and system
CN114095422A (en) * 2018-03-29 2022-02-25 华为技术有限公司 Message sending method, network node and system

Patent Citations (3)

Publication number Priority date Publication date Assignee Title
US6760309B1 (en) * 2000-03-28 2004-07-06 3Com Corporation Method of dynamic prioritization of time sensitive packets over a packet based network
CN103716255A (en) * 2012-09-29 2014-04-09 华为技术有限公司 Message processing method and device
CN110290072A (en) * 2018-03-19 2019-09-27 华为技术有限公司 Flow control methods, device, the network equipment and storage medium

Non-Patent Citations (1)

Title
NOKIA, NOKIA SHANGHAI BELL: "Time Sensitive Networking", 3GPP Draft R3-185958 TSN, 3rd Generation Partnership Project (3GPP), Mobile Competence Centre, 650 route des Lucioles, F-06921 Sophia-Antipolis Cedex, France, vol. RAN WG3, Chengdu, China, 8-12 October 2018, published 29 September 2018, XP051529226 *

Cited By (6)

Publication number Priority date Publication date Assignee Title
TWI783827B (en) * 2021-12-15 2022-11-11 瑞昱半導體股份有限公司 Wifi device
WO2023130743A1 (en) * 2022-01-10 2023-07-13 中兴通讯股份有限公司 Path calculation method, node, storage medium and computer program product
WO2023155802A1 (en) * 2022-02-15 2023-08-24 大唐移动通信设备有限公司 Data scheduling method, apparatus, device, and storage medium
CN114866453A (en) * 2022-05-18 2022-08-05 中电信数智科技有限公司 Message forwarding method and system based on G-SRv6 protocol
CN114866453B (en) * 2022-05-18 2024-01-19 中电信数智科技有限公司 Message forwarding method and system based on G-SRv protocol
WO2024016327A1 (en) * 2022-07-22 2024-01-25 新华三技术有限公司 Packet transmission

Also Published As

Publication number Publication date
CN113382442A (en) 2021-09-10
CN113382442B (en) 2023-01-13

Similar Documents

Publication Publication Date Title
WO2021180073A1 (en) Packet transmission method and device, network node, and storage medium
CN107786465B (en) Method and device for processing low-delay service flow
US11706149B2 (en) Packet sending method, network node, and system
US11968111B2 (en) Packet scheduling method, scheduler, network device, and network system
EP2684321B1 (en) Data blocking system for networks
CN112994961B (en) Transmission quality detection method, device, system and storage medium
US20210083970A1 (en) Packet Processing Method and Apparatus
WO2018149177A1 (en) Packet processing method and apparatus
US20210006502A1 (en) Flow control method and apparatus
WO2015055058A1 (en) Forwarding entry generation method, forwarding node, and controller
WO2021227947A1 (en) Network control method and device
US11038799B2 (en) Per-flow queue management in a deterministic network switch based on deterministically transmitting newest-received packet instead of queued packet
EP3188419B1 (en) Packet storing and forwarding method and circuit, and device
CN111092858B (en) Message processing method, device and system
Park et al. Worst-case analysis of ethernet AVB in automotive system
EP4336795A1 (en) Message transmission method and network device
CN117014384A (en) Message transmission method and message forwarding equipment
CN115460651A (en) Data transmission method and device, readable storage medium and terminal
CN114501544A (en) Data transmission method, device and storage medium
JP7512456B2 (en) Packet scheduling method, scheduler, network device, and network system
WO2023185662A1 (en) Deterministic service method for realizing network underlying resource awareness, and electronic device and computer-readable storage medium
WO2024016327A1 (en) Packet transmission
WO2023241063A1 (en) Packet processing method, device and system, and storage medium
WO2024051367A1 (en) Packet transmission method, network device, and readable storage medium
Cavalieri Estimating KNXnet/IP routing congestion

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21768061

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 21768061

Country of ref document: EP

Kind code of ref document: A1