CN113382442B - Message transmission method, device, network node and storage medium


Info

Publication number
CN113382442B
CN113382442B
Authority
CN
China
Prior art keywords
message
rate
specific queue
queue
network node
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010157089.8A
Other languages
Chinese (zh)
Other versions
CN113382442A (en
Inventor
杜宗鹏
耿亮
Current Assignee
China Mobile Communications Group Co Ltd
China Mobile Communications Ltd Research Institute
Original Assignee
China Mobile Communications Group Co Ltd
China Mobile Communications Ltd Research Institute
Priority date
Filing date
Publication date
Application filed by China Mobile Communications Group Co Ltd and China Mobile Communications Ltd Research Institute
Priority to CN202010157089.8A
Priority to PCT/CN2021/079756 (published as WO2021180073A1)
Publication of CN113382442A
Application granted
Publication of CN113382442B
Legal status: Active
Anticipated expiration

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04W: WIRELESS COMMUNICATION NETWORKS
    • H04W28/00: Network traffic management; Network resource management
    • H04W28/16: Central resource management; Negotiation of resources or communication parameters, e.g. negotiating bandwidth or QoS [Quality of Service]
    • H04W28/18: Negotiating wireless communication parameters
    • H04W28/22: Negotiating communication rate
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L47/00: Traffic control in data switching networks
    • H04L47/10: Flow control; Congestion control
    • H04L47/22: Traffic shaping
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L47/00: Traffic control in data switching networks
    • H04L47/50: Queue scheduling
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04W: WIRELESS COMMUNICATION NETWORKS
    • H04W28/00: Network traffic management; Network resource management
    • H04W28/02: Traffic management, e.g. flow control or congestion control
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04W: WIRELESS COMMUNICATION NETWORKS
    • H04W28/00: Network traffic management; Network resource management
    • H04W28/02: Traffic management, e.g. flow control or congestion control
    • H04W28/0231: Traffic management, e.g. flow control or congestion control, based on communication conditions
    • H04W28/0236: Traffic management based on communication conditions; radio quality, e.g. interference, losses or delay

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Quality & Reliability (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

The application discloses a message transmission method and apparatus, a network node, and a storage medium. The method comprises the following steps: a first network node receives a first message and acquires a first identifier from it, the first identifier indicating that the first message has a delay-sensitive requirement; when the first identifier is acquired, the first network node places the first message in a specific queue, the specific queue being used at least for buffering delay-sensitive messages to be sent; the first network node determines the egress rate of messages in the specific queue from their ingress rate, and shapes the specific queue based on the determined egress rate so as to send out the first message; wherein the first network node is a network forwarding node.

Description

Message transmission method, device, network node and storage medium
Technical Field
The present application relates to the field of Internet Protocol (IP) networks, and in particular, to a method and an apparatus for transmitting a packet, a network node, and a storage medium.
Background
At present, messages in an IP network are forwarded according to the basic idea of Best Effort (BE), which makes it difficult to guarantee deterministic delay. Therefore, in some Ultra-Reliable and Low-Latency Communications (URLLC) scenarios of the fifth generation mobile communication technology (5G), the deterministic delay requirement of services cannot be met.
Disclosure of Invention
In order to solve the related technical problems, embodiments of the present application provide a message transmission method, an apparatus, a network node, and a storage medium.
The technical scheme of the embodiment of the application is realized as follows:
the embodiment of the application provides a message transmission method, which is applied to a first network node and comprises the following steps:
receiving a first message;
acquiring a first identifier from a first message; the first identifier represents that the first message has a delay sensitive requirement;
under the condition that the first identifier is obtained, the first message is arranged in a specific queue; the specific queue is at least used for caching delay sensitive messages to be sent;
determining the output rate of the specific queue message by using the input rate of the specific queue message;
shaping the specific queue based on the determined out-rate, and sending the first message; wherein the first network node is a network forwarding node.
In the foregoing solution, the first identifier indicates that the first packet has an exclusive sending priority.
In the foregoing solution, the obtaining the first identifier from the first packet includes:
acquiring a segment identifier (SID) list from the first message; the SID list comprises a plurality of SIDs corresponding to a traffic engineering path;
and under the condition that the current SID indicates a specific queue on a next hop network node corresponding to the first message, setting the first message in the specific queue.
In the foregoing solution, the obtaining the first identifier from the first packet includes:
acquiring a prefix SID from the first message;
searching an outgoing interface corresponding to the obtained prefix SID in a routing forwarding table;
and under the condition that the searched outbound interface corresponds to the specific queue, setting the first message in the specific queue.
In the foregoing solution, the determining the egress rate of the packet in the specific queue by using the ingress rate of the packet in the specific queue includes:
and determining the output rate of the specific queue message by using the input rate and combining the queue depth.
In the foregoing solution, the determining the egress rate of the packet in the specific queue by using the ingress rate and combining the queue depth includes:
when the queue depth is smaller than a threshold value, determining the out-rate to be a first rate; the first rate is less than the incoming rate, and the difference between the incoming rate and the first rate is less than a first value;
or,
when the queue depth is equal to a threshold value, determining the egress rate to be a second rate; the second rate is equal to the ingress rate.
Or,
when the input rate is zero, determining that the output rate is a third rate; the third rate is a preset rate or a last recorded output rate.
The embodiment of the present application further provides a packet transmission method, including:
the second network node acquires the first message; determining that the service corresponding to the first message is a delay sensitive service;
the second network node sets a first identifier for the first message; the first identifier represents that the first message has the requirement of time delay sensitivity;
the second network node sets the first message with the first identifier in a specific queue for shaping, and then sends the first message with the first identifier;
a first network node receives a first message and acquires a first identifier from the received first message;
the first network node sets the received first message in a specific queue under the condition of acquiring the first identifier; the specific queue is at least used for caching delay sensitive messages to be sent;
the first network node determines the egress rate of messages in the specific queue by using their ingress rate, shapes the specific queue based on the determined egress rate, and sends out the received first message; wherein,
the second network node is a network edge node; the first network node is a network forwarding node.
The embodiment of the present application further provides a packet transmission apparatus, which is disposed on a first network node, and includes:
a receiving unit, configured to receive a first packet;
a first obtaining unit, configured to obtain a first identifier from a first packet; the first identifier represents that the first message has the requirement of time delay sensitivity;
the first processing unit is used for setting the first message in a specific queue when the first identifier is acquired, the specific queue being at least used for buffering delay-sensitive messages to be sent; determining the egress rate of messages in the specific queue by using their ingress rate; and shaping the specific queue based on the determined egress rate before sending the first message; wherein,
the first network node is a network forwarding node.
An embodiment of the present application further provides a network node, including: a first communication interface and a first processor; wherein,
the first communication interface is used for receiving a first message;
the first processor is used for acquiring a first identifier from the first message, the first identifier indicating that the first message has a delay-sensitive requirement; placing the first message in a specific queue when the first identifier is acquired, the specific queue being at least used for buffering delay-sensitive messages to be sent; determining the egress rate of messages in the specific queue by using their ingress rate; and shaping the specific queue based on the determined egress rate before sending the first message through the first communication interface; wherein,
the network node is a network forwarding node.
An embodiment of the present application further provides a network node, including: a first processor and a first memory for storing a computer program capable of running on the processor,
wherein the first processor is configured to execute the steps of any of the above-mentioned methods at the first network node side when running the computer program.
An embodiment of the present application further provides a storage medium, on which a computer program is stored, where the computer program, when executed by a processor, implements the steps of any one of the above-mentioned methods on the first network node side.
According to the message transmission method and apparatus, the network node, and the storage medium of the embodiments, a first network node receives a first message; acquires from it a first identifier indicating that the message has a delay-sensitive requirement; places the message in a specific queue when the identifier is acquired, the specific queue being at least used for buffering delay-sensitive messages to be sent; determines the egress rate of messages in the specific queue from their ingress rate; and shapes the specific queue based on the determined egress rate before sending the message out, where the first network node is a network forwarding node. The forwarding node does not identify each low-delay service flow individually; it identifies low-delay traffic as a whole purely from packet characteristics, places low-delay messages in a specific queue for shaping, and keeps a certain queue depth in that queue as far as possible instead of adopting a BE forwarding mechanism. The network forwarding node can therefore send low-delay traffic out in order and as required, packet loss and buffering delay caused by message micro-bursts in the network are reduced as far as possible, and the delay requirement of the service can be met.
Drawings
FIG. 1 is a schematic diagram of a deterministic network architecture;
FIG. 2 is a schematic diagram of a micro-burst of an IP device;
FIG. 3 is a schematic diagram of an IP network architecture;
fig. 4 is a schematic flowchart of a method for packet transmission on a second network node side according to an embodiment of the present application;
FIGS. 5a and 5b are schematic diagrams of neighboring SID formats according to embodiments of the present application;
FIG. 6 is a diagram illustrating a prefix SID format according to an embodiment of the present application;
fig. 7 is a schematic flowchart of a method for packet transmission at a first network node side according to an embodiment of the present application;
FIG. 8 is a flowchart illustrating a method for message transmission according to an embodiment of the present application;
fig. 9 is a schematic diagram illustrating a correspondence relationship between an ingress interface and an egress interface of a network forwarding node according to an embodiment of the present application;
FIG. 10 is a diagram illustrating a relationship between a physical interface and a virtual interface according to an embodiment of the present application;
fig. 11 is a schematic structural diagram of a message transmission apparatus according to an embodiment of the present application;
fig. 12 is a schematic structural diagram of another message transmission apparatus according to an embodiment of the present application;
fig. 13 is a schematic structural diagram of a network node according to an embodiment of the present application;
fig. 14 is a schematic structural diagram of another network node according to an embodiment of the present application.
Detailed Description
The present application will be described in further detail with reference to the following drawings and examples.
Time Sensitive Networking (TSN) evolved from Audio Video Bridging (AVB) for audio/video networks. It is a protocol set defined by the Institute of Electrical and Electronics Engineers (IEEE) and is mainly used for smaller private networks (Ethernet requiring a 10-100 μs delay), typically in-vehicle networks (which can be understood as networks formed by multiple devices arranged in a vehicle) or industrial control networks; mechanisms have also been defined for larger networks, such as fronthaul networks. The main ideas are high priority and packet preemption.
The scheduling mechanism of the TSN network mainly includes the following aspects:
1. Credit Based Shaper (CBS): a scheduling mechanism for one queue. The queue obtains credit at an agreed rate; a data packet can be sent when the credit value is greater than or equal to 0, and sending a packet reduces the credit. The effect of this shaper is to shape the packets of a queue so that they are sent out one by one at the agreed rate (also called pacing). After shaping, the traffic generally coexists with BE traffic at the sending port, and needs a higher priority to avoid interference from the BE traffic so as to preserve the shaping effect of the preceding queue;
2. Time Sensitive Queues (TSQ): a gate-control mechanism in which all queues on a device use a cyclic scheduling mechanism (opening and closing of the queues is controlled according to a gate-control table with ns granularity). Synchronization among devices depends on the Precision Time Protocol (PTP); by all devices on a path cooperatively opening and closing their gates precisely, the fastest forwarding of TSN flows is supported;
3. transmission preemption mechanism (Transmission preemption): a packet preemption strategy supporting interruption of a low-priority packet being sent by a high-priority packet;
4. Ingress scheduling and Cyclic Queuing and Forwarding (CQF) mechanism: a packet arrives at the correct time window at the ingress and is then guaranteed to be sent from the egress in a determined time window, using several cyclically rotated queues as the sending queues;
5. Urgency Based Scheduling (UBS), also known as Asynchronous Traffic Shaping (ATS): the goal is to provide better overall latency than CQF, at low cost and without requiring synchronization between devices. The mechanism currently supports two shaping modes, Length Rate Quotient (LRQ) and Token Bucket Emulation (TBE); both have scheduling effects similar to CBS and are suitable for pacing queue traffic;
6. Packet Replication and Elimination (PRE): multiple transmission and reception, which may also be referred to as Frame Replication and Elimination for Reliability (FRER).
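As a rough illustration of the credit-based shaper described in item 1 above, the following sketch models a queue whose credit accrues at an agreed idle slope and drains at line rate while transmitting. All parameter names and values are illustrative assumptions, not details taken from the patent or from the IEEE specification:

```python
# Simplified credit-based shaper (CBS) model: a packet may start sending
# only when the queue's credit is >= 0; credit accrues at idle_slope while
# the queue waits and drains at (idle_slope - port_rate) while sending.
# The net effect is that back-to-back packets are paced at idle_slope.

def cbs_send_times(packet_sizes_bits, idle_slope, port_rate):
    """Return the start-of-transmission time of each packet (seconds)."""
    credit = 0.0
    now = 0.0
    send_times = []
    for size in packet_sizes_bits:
        if credit < 0:                            # wait for credit to recover to 0
            now += -credit / idle_slope
            credit = 0.0
        send_times.append(now)
        tx = size / port_rate                     # time on the wire at line rate
        credit += (idle_slope - port_rate) * tx   # "send slope" drain while sending
        now += tx
    return send_times
```

With idle_slope = 1 Mbit/s, port_rate = 100 Mbit/s, and 1000-bit packets, successive send times come out spaced 1 ms apart: each packet leaves at line rate, but the queue as a whole is paced at the agreed rate.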
Deterministic requirements are by no means limited to local two-layer networks. More than twenty authors from different organizations have jointly written a deterministic-network Use Case manuscript that addresses requirements in nine major industries, including professional audio and video (pro audio & video), electric utilities, building automation systems, wireless for industrial applications, cellular radio, industrial machine-to-machine communication (industrial M2M), mining industry, private blockchain, and network slicing. At the same time, the demand scenarios may be large in scale, including nationwide networks, large numbers of devices, and very large distances. Based on this, Deterministic Networking (DetNet) emerged: a deterministic network architecture defined in the Internet Engineering Task Force (IETF), which focuses mainly on the determinism of three-layer networks and extends the capabilities of TSN from layer two to layer three, as shown in fig. 1.
TSN is characterized by small network scale and a relatively simple traffic model, with support for per-flow identification and network synchronization. The related TSN mechanisms were therefore mainly developed for small-scale networks, and in a large-scale network it is difficult to apply them directly to IP forwarding devices. Moreover, the TSN mechanisms' handling of deterministic traffic is relatively complex.
On the other hand, the scheduling mechanisms of current IP networks belong to the Weighted Round Robin (WRR) class. In such scheduling, the core idea of packet forwarding is to send data packets out as fast as possible, and the core indexes are line speed and throughput, where line speed means that for a certain type of message, for example a stream of 128-byte messages, the device can receive and forward at full port rate. Under the current IP forwarding mechanism, as shown in fig. 2, even in a relatively lightly loaded network the scheduling of current IP devices (i.e., network nodes) can produce a certain degree of micro-burst: traffic that is well paced before scheduling becomes bunched into bursts after scheduling. In design philosophy, therefore, the scheduling mechanism of IP networks does not match some requirements of TSN and DetNet (100% reliability and deterministic delay must be guaranteed), so it is difficult to directly apply the scheduling mechanisms of TSN in some details.
As can be seen from the above description, IP forwarding as an old mechanism is characterized by statistical multiplexing, low cost, large throughput, and a design concept conforming to the bursty traffic model of IP; the scheduling feature of TSNs as a new mechanism is deterministic, achieving a Constant Bit Rate (CBR) -like scheduling effect for a particular traffic.
Therefore, to support deterministic transport over IP networks, the following operations are avoided:
first, it avoids the requirement for network forwarding nodes (i.e. network nodes that forward packets, also called P nodes, also called P routers, or intermediate routers, as shown in fig. 3), such as operator nodes in the backbone network, to identify traffic on a flow-by-flow basis.
Specifically, it is not feasible for a network forwarding node (whose forwarding pressure is generally large) to resolve each Flow, because network forwarding nodes are generally responsible for per-packet forwarding and are not suited to identifying too many flows.
Second, it avoids requiring too much state to be perceived/maintained on the network forwarding nodes.
Specifically, the number of flows on a network forwarding node is large; if a single flow changes, it is not advisable to continuously change information such as the bandwidth reservation of the network forwarding node on the control plane. This also follows the current design concept of IP forwarding routers: in IP networks, where there is much bursty traffic, it is suggested that even if the access requirement of a certain flow changes, the network forwarding nodes need not be fully aware of it.
Based on this, in various embodiments of the present application, on a network forwarding node, each low-latency flow is not identified, and only according to the characteristics of a packet, a flow that is required for low latency is identified as a whole, so that low-latency traffic is ensured to be sent as required.
An embodiment of the present application provides a packet transmission method, which is applied to a second network node, and as shown in fig. 4, the method includes:
step 401: the second network node acquires a first message; determining that the service corresponding to the first message is a delay sensitive service;
step 402: the second network node sets a first identifier for the first message;
here, the first identifier represents a requirement that the first packet has a delay sensitivity.
Step 403: and the second network node sets the first message with the first identifier in a specific queue for shaping, and then sends the first message with the first identifier.
The second network node is a network edge node, and may be referred to as a PE node, a PE router, and the like, for example, an operator edge node in a backbone network.
In practical application, in step 401, when the first packet is accessed from a specific Virtual Local Area Network (VLAN) or a specific interface, or when the first packet is accessed from a specific interface or VLAN and has an agreed priority, the second network node regards a service corresponding to the first packet as a delay-sensitive service, that is, a low-delay service. Of course, other manners may also be used to identify that the service corresponding to the first packet is a delay-sensitive service, which is not limited in this embodiment of the application.
The first identifier represents that the first packet has a delay sensitivity requirement, and it can also be understood that the first identifier represents that the type of the first packet is a delay sensitivity type.
In practical applications, the first identifier may be a priority identifier, and may identify that low-latency traffic occupies a higher exclusive priority, for example, 6.
Based on this, in an embodiment, the first identifier indicates that the first packet has an exclusive sending priority.
In practical applications, the first identifier may also be an adjacency SID of Segment Routing (SR) (which may be expressed as adj SID in English; the type of the adjacency SID is the End.X function in SRv6).
Fig. 5a and 5b show two formats of adjacent SID, and in practical applications, a specific Algorithm, for example 200, may be specified in the part of Algorithm, and is characterized for low-latency traffic transmission.
In practical application, the first identifier may also be a prefix SID of the SR (which may be expressed as Prefix SID in English; the type of the prefix SID is the End function in SRv6). The adjacency SID in the format shown in FIG. 5a is applicable to a Point-to-Point (P2P) connection scenario (Type: 43), and the adjacency SID in the format shown in FIG. 5b is applicable to a Local Area Network (LAN) scenario (Type: 44).
Fig. 6 shows a format of a prefix SID. In practical application, the End function is a sub-TLV of the Locator TLV in Intermediate System to Intermediate System (ISIS), and a specific value, for example 200, may be specified in the corresponding field to characterize it for low-latency traffic transmission.
In actual application, in step 403, the second network node may perform individual shaping on each accessed delay-sensitive service, so that the messages are sent out one by one at a certain rate (the rate required by the service flow); the second network node may obtain the rate through a certain implementation manner, for example, through a manual/network management configuration manner (i.e., a static configuration manner), or a control plane delivery manner (which may also be understood as a control plane advertisement manner), or a data plane delivery manner (which may also be referred to as a data plane advertisement manner).
The second network node shaping method may use CBS, or Length Rate Quotient (LRQ) of asynchronous shaper (ATS), token Bucket Emulation (TBE), and the like, which is not limited in this embodiment.
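As a hedged sketch of how an edge node might pace an accessed delay-sensitive flow at its agreed rate (conceptually in the spirit of the Token Bucket Emulation mode mentioned above, though the real TBE definition differs in its details), consider a fluid token bucket; the rate and burst values are illustrative:

```python
# Fluid token-bucket pacer: each packet may be sent at the earliest time
# the bucket holds enough tokens; tokens refill continuously at `rate`.
# Assumes every packet size is <= burst, so waiting always suffices.

def token_bucket_send_times(packet_sizes, rate, burst):
    """Earliest conforming send time (seconds) for each packet."""
    tokens = burst
    now = 0.0
    send_times = []
    for size in packet_sizes:
        if tokens < size:                    # wait for tokens to refill
            now += (size - tokens) / rate
            tokens = float(size)
        send_times.append(now)
        tokens -= size                       # sending consumes tokens
    return send_times
```

For three 100-bit packets with rate = 100 bit/s and burst = 100 bits, the conforming send times are 0 s, 1 s, and 2 s: the packets leave one by one at the agreed rate, which is the per-flow shaping effect the edge node needs before handing traffic to the forwarding nodes.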
Correspondingly, an embodiment of the present application further provides a packet transmission method, which is applied to a first network node, and as shown in fig. 7, the method includes:
step 701: receiving a first message;
step 702: acquiring a first identifier from a first message;
here, the first identifier represents a requirement that the first packet has a delay sensitivity.
Step 703: under the condition that the first identifier is obtained, the first message is arranged in a specific queue;
here, the specific queue is at least used for buffering (also understood as placing) the delay-sensitive message to be sent.
Step 704: determining the output rate of the specific queue message by using the input rate of the specific queue message;
step 705: shaping the specific queue based on the determined out-rate, and then sending out the first message.
The first network node is a network forwarding node, and may be referred to as a P node, a P router, or the like, for example, an operator node in a backbone network.
In practical application, the first network node receives a first packet from a previous hop node, where the previous hop node may be a second network node or a network forwarding node.
In practical application, when the first identifier is a priority identifier, the first network node configures a special queue on each egress for delay-sensitive packets, shapes all delay-sensitive packets uniformly, and then sends them out; that is, all packets having the exclusive priority are gathered in this queue.
When the first identifier is an adjacency SID of Segment Routing (SR), the first network node recognizes that the current SID was issued by itself and forwards the packet to a pre-configured specific queue, in which all delay-sensitive packets are uniformly shaped before being sent out; that is, in practical application, all packets whose current SID is such an adjacency SID are converged into this queue.
Based on this, in an embodiment, the SID list is obtained from the first packet; the SID list comprises a plurality of SIDs corresponding to the traffic engineering path;
and under the condition that the current SID is determined to indicate a specific queue on a next hop network node corresponding to the first message, setting the first message in the specific queue.
That is, when it is determined that the current SID indicates a specific queue on the next-hop network node corresponding to the first packet, it is determined that the first identifier is acquired.
Here, it can be understood that, for the second network node, the current SID refers to the SID corresponding to the destination address (DA) in the first message; correspondingly, the next-hop network node corresponding to the first message refers to the network node corresponding to the DA in the first message, i.e., the node that receives the first message sent by the second network node.
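The adjacency-SID branch above can be sketched as follows; the SID values, packet representation, and queue structure are invented for illustration and do not reflect a real SRv6 data plane:

```python
# A forwarding node checks whether the packet's current SID (the IPv6
# destination address in SRv6) is one of the low-latency adjacency SIDs
# it issued; if so, the packet goes to the pre-configured specific queue,
# otherwise it takes the ordinary best-effort path.
from collections import deque

class ForwardingNode:
    def __init__(self, low_latency_adj_sids):
        self.low_latency_adj_sids = set(low_latency_adj_sids)
        self.specific_queue = deque()      # buffers delay-sensitive packets
        self.best_effort_queue = deque()

    def enqueue(self, packet):
        # Acquiring the "first identifier" = recognizing the current SID.
        if packet["da"] in self.low_latency_adj_sids:
            self.specific_queue.append(packet)
        else:
            self.best_effort_queue.append(packet)
```

All packets carrying one of the node's low-latency adjacency SIDs converge into the single specific queue, matching the aggregate (not per-flow) identification described above.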
When the first identifier is a prefix SID of the SR, the first network node recognizes that the SID was not issued by itself and looks it up in the routing forwarding table; the egress interface found corresponds to a pre-configured specific queue, so the message is forwarded to that specific queue, in which all delay-sensitive messages are uniformly shaped before being sent out. That is, in practical application, all messages whose routing-table egress interface is bound to the specific queue are converged into this queue.
Based on this, in an embodiment, a prefix SID is obtained from the first packet;
searching a next hop and an outgoing interface corresponding to the obtained prefix SID in a routing forwarding table;
and under the condition that the searched outbound interface corresponds to the specific queue, setting the first message in the specific queue.
That is, when it is determined that the found egress interface corresponds to the specific queue, it is determined that the first identifier is acquired.
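Similarly, a minimal sketch of the prefix-SID branch (the forwarding-table contents, interface names, and return values are assumptions for illustration):

```python
# Look up the egress interface for the packet's prefix SID in a routing
# forwarding table (FIB); if that interface is bound to the specific
# delay-sensitive queue, the packet is classified accordingly.

def classify_by_prefix_sid(packet, fib, specific_queue_interfaces):
    """Return 'specific', 'best-effort', or 'drop' for the packet."""
    entry = fib.get(packet["prefix_sid"])
    if entry is None:
        return "drop"                        # no matching route
    next_hop, out_interface = entry
    if out_interface in specific_queue_interfaces:
        return "specific"                    # first identifier acquired
    return "best-effort"
```

The decision is a plain routing lookup plus an interface-to-queue binding check, which is why the forwarding node never needs per-flow state for this classification.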
In practical application, the egress speed of packets is limited according to the packet ingress rate of the destination virtual interface, so that for the whole IP device (i.e., the P node), each port sends low-latency packets at the speed at which low-latency packets are received (i.e., the ingress rate of the virtual interface), while a certain buffer depth is maintained (which can also be understood as keeping a certain amount of packets buffered), instead of the previous forward-as-soon-as-possible mechanism (i.e., the BE forwarding mechanism).
Based on this, in an embodiment, the specific implementation of step 705 may include:
determining the egress rate of the messages in the specific queue from the ingress rate in combination with the queue depth.
The queue depth may be understood as the number of messages in the queue.
In an embodiment, the determining the egress rate of the packet in the specific queue by using the ingress rate and combining the queue depth includes:
when the queue depth is smaller than a threshold value, determining the egress rate to be a first rate, where the first rate is less than the ingress rate and the difference between the ingress rate and the first rate is less than a first value; or,
when the queue depth is equal to the threshold value, determining the egress rate to be a second rate, where the second rate is equal to the ingress rate; or,
when the ingress rate is zero, determining the egress rate to be a third rate, where the third rate is a preset rate or the last recorded egress rate.
Here, in practical application, a statistical period for determining the packet rate may be set, for example 20 μs; the number of packets entering the specific queue within each 20 μs period is counted to determine the ingress rate.
In practical applications, the first value may be set as needed, as long as the incoming rate is slightly greater than the outgoing rate.
When the queue depth is equal to the threshold and the ingress rate is too large (e.g., exceeds an ingress-rate threshold, which may be set as needed), the egress rate may be a fourth rate, i.e., a set egress-rate threshold (which may be set as needed and is less than the ingress rate).
In practical application, the preset rate can be set as required.
The first network node may shape using CBS (Credit-Based Shaper), or the LRQ (Length-Rate Quotient) or TBE (Token Bucket Emulation) algorithms of ATS (Asynchronous Traffic Shaping), and the like; this is not limited in this embodiment.
An embodiment of the present application further provides a packet transmission method, and as shown in fig. 8, the method includes:
step 801: the second network node acquires the first message; determining that the service corresponding to the first message is a delay sensitive service;
step 802: the second network node sets a first identifier for the first message; the first identifier represents that the first message has the requirement of time delay sensitivity;
step 803: the second network node sets the first message with the first identifier in a specific queue for shaping, and then sends the first message with the first identifier;
step 804: a first network node receives a first message, acquires a first identifier from the received first message, and sets the received first message in a specific queue under the condition of acquiring the first identifier; the specific queue is at least used for caching delay sensitive messages to be sent;
step 805: the first network node determines the output rate of the specific queue message by using the input rate of the specific queue message; and shaping the specific queue based on the determined output rate, and then sending out the received first message.
Wherein the second network node is a network edge node; the first network node is a network forwarding node.
Here, it should be noted that: the specific processing procedures of the first network node and the second network node have been described in detail above, and are not described herein again.
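Under simplified assumptions (a hypothetical identifier value and dict-based packets), the cooperation of the two nodes in steps 801-805 can be sketched as:

```python
FIRST_IDENTIFIER = "low-latency"  # hypothetical value of the first identifier

def edge_node(packet, is_delay_sensitive):
    """Steps 801-803: the second (edge) node marks delay-sensitive messages."""
    if is_delay_sensitive:
        packet["identifier"] = FIRST_IDENTIFIER
    return packet

def forwarding_node(packet, specific_queue):
    """Step 804: the first (forwarding) node enqueues marked messages into
    the specific queue; unmarked messages are handled elsewhere (BE path)."""
    if packet.get("identifier") == FIRST_IDENTIFIER:
        specific_queue.append(packet)
        return True
    return False

queue = []
marked = edge_node({"id": 1}, is_delay_sensitive=True)
```

Step 805 (rate determination and shaping) then operates on `queue` as a whole; it is deliberately omitted here since it is covered by the rate sketch earlier.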
In the message transmission method provided by the embodiments of the application, a first network node receives a first message and acquires a first identifier from it, the first identifier representing that the first message has a delay-sensitivity requirement. When the first identifier is acquired, the first message is set in a specific queue, which is used at least for buffering delay-sensitive messages to be sent. The egress rate of the messages in the specific queue is determined from their ingress rate; the specific queue is shaped based on the determined egress rate, and the first message is sent out. The first network node is a network forwarding node. Individual low-latency service flows are not identified on the forwarding node; the low-latency traffic is identified as a whole solely from packet characteristics, and messages with low-latency requirements are placed in the specific queue for shaping. A certain queue depth is maintained in the specific queue as far as possible, rather than adopting a BE (best-effort) forwarding mechanism. The network forwarding node can therefore send low-latency traffic in order and as required, packet loss and buffering delay caused by micro-bursts in the network are reduced as far as possible, and the delay requirements of the service can be met.
The present application will be described in further detail with reference to the following application examples.
Application example one
In this application embodiment, low-latency traffic is sent using an exclusive priority.
In a general IP forwarding scenario, low-latency traffic and BE traffic use the same destination IP, and at this time, the low-latency traffic needs to BE distinguished, and the low-latency traffic needs to have an exclusive priority.
With reference to fig. 9, the transmission flow of the packet in the embodiment of the present application includes:
step 1: according to the related technology, a network edge node (such as a PE1 node) identifies the flow of a Packet1, shapes a queue where the Packet1 is located, and confirms that a correct Priority, for example Priority 6, is assigned;
here, after performing traffic identification, the Packet1 is identified as a low-latency traffic.
The specific identification and shaping process needs to be carried out according to a pre-configured identification and shaping mode. The shaping may be performed by CBS or ATS.
After the step is completed, the low-delay flow and the BE flow enter the network together.
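As a rough illustration of the CBS option mentioned in step 1 (a much-simplified, 802.1Qav-style sketch; the class name and slope values are assumptions):

```python
class CreditBasedShaper:
    """Minimal CBS sketch: a frame may start transmission only when
    credit >= 0; credit grows at idle_slope while a frame waits and is
    charged at send_slope for the transmission time."""

    def __init__(self, idle_slope, send_slope):
        self.idle_slope = idle_slope   # credit gained per unit time while waiting
        self.send_slope = send_slope   # credit spent per unit time while sending
        self.credit = 0.0

    def wait(self, dt):
        # credit accumulates while a frame is queued but not being sent
        self.credit += self.idle_slope * dt

    def try_send(self, tx_time):
        # transmission is permitted only at non-negative credit
        if self.credit < 0:
            return False
        self.credit -= self.send_slope * tx_time
        return True

shaper = CreditBasedShaper(idle_slope=10.0, send_slope=100.0)
```

The effect is exactly the pacing the embodiment wants at the edge: bursts are spread out because each transmission drives the credit negative and the next one must wait for it to recover.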
Step 2: Packet1 arrives at ingress interface Iin1 of network forwarding node P1, and P1 finds egress interface Iout3 according to the DA of Packet1;
here, iout3 is configured with a specific queue corresponding to Priority 6, where the specific queue serves low-latency traffic and is associated with Priority 6.
The Priority of Packet1 is Priority 6, so on the egress interface it goes to the specific queue, which is preconfigured on Iout3 and serves low-latency traffic.
To simplify the description, it is assumed that network forwarding node P1 has four interfaces and that each board carries only one interface.
And step 3: packet1 arrives at egress interface Iout3 of network forwarding node P1, and for Iout3, all packets of Priority 6 are put into the same queue (i.e., a specific queue) (the source of the Packet may be ingress interfaces Iin1, iin2, iin4, i.e., as long as the Packet with Priority 6 is placed in the specific queue), and shaping (such as CBS, ATS, etc.) is performed according to the overall ingress rate as the egress rate.
Specifically, initially or when the queue depth is low, it may be appropriate to send at a rate slightly lower than the ingress rate; when the queue depth reaches a certain threshold, sending follows ingress rate = egress rate; and when the ingress rate is 0, the buffered messages are sent out at a preset fixed rate or at the last recorded rate.
In practice, an appropriate statistical period for the message rate may be set, for example counting once every 20 μs.
When the measured ingress rate is too large, a threshold may be set and the egress rate assigned that given value, rather than forwarding at the ingress rate; the depth of the specific queue then grows. In general, this situation does not last long in practical applications, because:
First, the rate is limited at the network ingress;
Second, micro-bursts of low-latency traffic in the network have already been reduced;
Third, the proportion of low-latency traffic in the whole network is not high, and the traffic is mainly BE traffic.
That is, "ingress rate = egress rate" is only the general idea; a specific implementation may adjust the internal parameters according to network and traffic conditions.
For Iout3, the specific queue holding all Priority 6 messages is scheduled together with the other queues (i.e., the queues of BE traffic; for example, Priority 0 traffic is BE traffic) according to the relevant mechanism, and messages are sent out from Iout3. For example, since low-latency traffic has the higher priority, a message is sent out as soon as one exists in the specific queue.
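The aggregation in step 3, where every Priority 6 packet converges into one specific queue regardless of ingress interface, can be sketched as (illustrative names; queues are plain lists):

```python
LOW_LATENCY_PRIORITY = 6  # the exclusive priority used in this example

def classify(packet, specific_queue, be_queues):
    """Egress-side classification on Iout3: the exclusive priority goes to
    the single shared specific queue; everything else goes to a BE queue
    keyed by its priority."""
    if packet["priority"] == LOW_LATENCY_PRIORITY:
        specific_queue.append(packet)  # one shared queue, any ingress interface
    else:
        be_queues.setdefault(packet["priority"], []).append(packet)

specific, be = [], {}
for pkt in [
    {"priority": 6, "ingress": "Iin1"},
    {"priority": 0, "ingress": "Iin2"},
    {"priority": 6, "ingress": "Iin4"},
]:
    classify(pkt, specific, be)
```

Note that the classifier keeps no per-flow state: the priority field alone decides the queue, which is the whole-traffic identification the embodiment emphasizes.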
Application example two
In this application embodiment, the low latency traffic uses a special adjacency SID.
In the SR forwarding scenario, low-latency traffic and BE traffic may use different adjacency SIDs, so they can be distinguished directly according to the SID. In this case an exclusive priority does not need to be set, but a separate queue is needed, configured with a higher priority (to guarantee preferential forwarding).
With reference to fig. 9, the transmission flow of the packet in the embodiment of the present application includes:
step 1: according to the related technology, a network edge node (such as a PE1 node) identifies the flow of a Packet1, shapes a queue where the Packet1 is located, and confirms to print correct SIDs (the SIDs are strict engineering flow (TE) paths and point to specific queue resources on each node);
here, after traffic identification, packet1 is identified as a low-latency traffic.
The specific identification and shaping process needs to be carried out according to a pre-configured identification and shaping mode. The shaping may be performed by CBS or ATS.
After the step is completed, the low-delay flow and the BE flow enter the network together.
Step 2: Packet1 arrives at ingress interface Iin1 of network forwarding node P1; the current SID is SID13, and the forwarding device finds the specific Queue3 of egress interface Iout3 according to SID13;
for simplifying the description, it is assumed that the network forwarding node P1 has four interfaces, and one board has only one interface.
It should be noted that messages matching SID13 that arrive on the boards of ingress interfaces Iin2 and Iin4 are also sent to Iout3 (SID13 is an adjacency SID and points to the specific Queue3 of Iout3); that is, the messages corresponding to the low-latency flow are set in the specific Queue3.
Step 3: Packet1 arrives at egress interface Iout3 of network forwarding node P1 and, for the specific Queue3 of Iout3, is sent out after shaping (for example, CBS, ATS, etc.) with the overall queue ingress rate taken as the egress rate.
Here, the specific processing procedure of shaping according to the overall queue entry rate as the exit rate is the same as that in the first application embodiment, and is not described here again.
For Iout3, the specific Queue3 where all SID13 messages are located is scheduled together with the other queues according to the existing mechanism, and the messages are sent out from Iout3.
In practical application, the special adjacency SIDs can be announced to the network for network programming, so as to meet the requirements of low-latency services.
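The SID-driven forwarding in step 2, where adjacency SID13 points directly at the specific Queue3 of Iout3, can be sketched as (the table contents and helper name are illustrative):

```python
# Adjacency-SID table: SID -> (egress interface, queue on that interface).
adj_sid_table = {
    "SID13": ("Iout3", "Queue3"),      # low-latency adjacency SID
    "SID12": ("Iout2", "BestEffort"),  # ordinary adjacency SID
}

def forward_sr_packet(packet):
    """Process the current (top) SID of the label stack and return where
    to enqueue the packet; the processed SID is consumed."""
    current_sid = packet["sid_list"][0]
    out_if, queue = adj_sid_table[current_sid]
    packet["sid_list"] = packet["sid_list"][1:]  # pop the processed SID
    return out_if, queue

pkt = {"sid_list": ["SID13", "SID24"], "payload": "Packet1"}
iface, q = forward_sr_packet(pkt)
```

Because each adjacency SID in the stack names both a hop and a queue, the whole SR-TE path pins the packet to the specific queue resources on every node it traverses.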
Application example three
In the embodiment of the present application, the low latency service uses a special prefix SID.
In the SR forwarding scenario, low-latency traffic and BE traffic may use different prefix SIDs, so they can be distinguished directly according to the SID. In this case an exclusive priority does not need to be set, but a separate queue is needed, configured with a higher priority (to guarantee preferential forwarding).
With reference to fig. 9, the transmission flow of the message in this embodiment of the application includes:
step 1: performing traffic identification on a Packet1 at a network edge node (such as a PE1 node) according to a correlation technique, shaping a queue where the Packet1 is located, and confirming that a correct prefix SID is printed (in an SRv6 scenario, a location (Locator) corresponding to the SID has a relevant identifier, for example, a Flex Algo ID is used, an outgoing interface of a forwarding entry points to a specific queue resource on each node, the corresponding Locator needs to be issued in advance, and each node generates a relevant forwarding entry, where the Locator is an address part of the SID in the SRv6, and is used for routing to the node which issues the SID);
here, after performing traffic identification, the Packet1 is identified as a low-latency traffic.
The difference from the second application embodiment is that adjacency SIDs generally need to be specified per hop and form a label stack (SR-TE), whereas here only one SID is needed (SR-BE), which is a global label (e.g., corresponding to a certain PE node).
Step 2: Packet1 arrives at ingress interface Iin1 of network forwarding node P1; the forwarding table is looked up according to SID9, and the result is the specific Queue3 of egress interface Iout3;
for simplifying the description, it is assumed that the network forwarding node P1 has four interfaces, and one board has only one interface.
It should be noted that other prefix SIDs on the board where ingress interface Iin1 is located may also have Queue3 as the queue of their egress interface; such a prefix SID is similar to SID9 and likewise represents low-latency traffic.
Ingress interfaces Iin2 and Iin4 may also receive messages of SID9 or of other such prefix SIDs, with Queue3 as the queue corresponding to the egress interface.
Then shaping is performed, and the messages corresponding to the low-latency flows are set in the specific Queue3.
Step 3: Packet1 arrives at egress interface Iout3 of network forwarding node P1 and is sent out after shaping (such as CBS, ATS, etc.) of the specific Queue3 of Iout3, with the overall queue ingress rate taken as the egress rate.
Here, the specific processing procedure of shaping according to the overall queue entry rate as the exit rate is the same as that in the first application embodiment, and is not described here again.
For Iout3, Queue3 and the other queues are scheduled according to the existing mechanism, and messages are sent out from Iout3.
In practical application, the special prefix SID can be announced to a network for network programming, so that the requirements of low-delay service are met.
In addition, each port of each node can be configured with a specific queue as the interface for the special prefix SID, supporting the forwarding of low-latency traffic.
As can be seen from the above description, the application embodiment constructs a mechanism for guaranteeing low-latency transmission in a larger IP three-layer network, and specifically includes:
the network edge node identifies the flow and limits the speed;
each port of the network forwarding node monitors the speed of the received low-delay flow, shapes and forwards the low-delay message according to the monitored speed.
As shown in fig. 10, on a network forwarding node, a specific virtual interface is divided on each physical egress interface and corresponds to a specific queue provided for all low-latency flows; shaping is performed on the aggregated low-latency traffic. In this way, on the network forwarding node, low-latency traffic can be sent as required, the processing of the network forwarding node reduces message micro-bursts as much as possible, and an appropriate buffer depth is maintained for the low-latency traffic.
The identifying mechanism is used for identifying the low-delay flow, and specifically comprises the following steps:
the first approach, using exclusive IP/MPLS priority to represent low latency traffic, is applicable in IP/multiprotocol label switching (MPLS) networks.
The second approach uses a specific SID to represent low-latency traffic and is applicable in SR networks. The specific SID may be an adjacency SID of SR-TE or a node SID (i.e., prefix SID) of SR-BE.
By adopting the scheme of the embodiments of the application, a simpler mechanism is used (1. identification of low-latency traffic as a whole; 2. dedicated shaping and forwarding of the low-latency traffic), the formation of micro-bursts in IP forwarding is reduced (messages no longer form bursts through forward-as-soon-as-possible behavior), and fast, ordered forwarding of low-latency traffic in the IP network is ensured (messages are forwarded in a paced manner as far as possible). In a larger network where the main delay is optical fiber transmission, as long as the network keeps transmission orderly (i.e., messages are sent in order so that they do not bunch together as far as possible), a lower-delay service can be provided without packet loss caused by micro-bursts, and without introducing overly complex flow identification or excessive state control.
In order to implement the method according to the embodiment of the present application, an embodiment of the present application further provides a packet transmission apparatus, which is disposed on a first network node, and as shown in fig. 11, the apparatus includes:
a receiving unit 111, configured to receive a first packet;
a first obtaining unit 112, configured to obtain a first identifier from the first packet; the first identifier represents that the first message has a delay sensitive requirement;
a first processing unit 113, configured to set the first message in a specific queue when the first identifier is obtained, the specific queue being used at least for buffering delay-sensitive messages to be sent; determine the egress rate of the messages in the specific queue by using their ingress rate; and shape the specific queue based on the determined egress rate and send the first message; wherein
the first network node is a network forwarding node.
In an embodiment, the first obtaining unit 112 is specifically configured to: obtaining SID list from the first message; the SID list comprises a plurality of SIDs corresponding to the traffic engineering path;
accordingly, the first processing unit 113 is configured to, in a case that it is determined that the current SID indicates a specific queue on a next-hop network node corresponding to the first packet, set the first packet in the specific queue.
In an embodiment, the first obtaining unit 112 is specifically configured to:
acquiring a prefix SID from the first message;
searching a next hop and an outgoing interface corresponding to the obtained prefix SID in a routing forwarding table;
correspondingly, the first processing unit 113 is configured to set the first packet in the specific queue under the condition that the found egress interface corresponds to the specific queue.
In an embodiment, the first processing unit 113 is specifically configured to:
and determining the output rate of the specific queue message by using the input rate and combining the queue depth.
In an embodiment, the determining the egress rate of the packet in the specific queue by using the ingress rate and combining with the queue depth includes:
when the queue depth is less than the threshold, the first processing unit 113 determines that the out-rate is a first rate; the first rate is less than the incoming rate, and the difference between the incoming rate and the first rate is less than a first value;
or,
when the queue depth is equal to the threshold, the first processing unit 113 determines that the out-rate is the second rate; the second rate is equal to the incoming rate.
or,
when the incoming rate is zero, the first processing unit 113 determines that the outgoing rate is a third rate; the third rate is a preset rate or the last recorded output rate.
In practical application, the receiving unit 111 may be implemented by a communication interface in a message transmission device; the first obtaining unit 112 and the first processing unit 113 may be implemented by a processor in a message transmission device.
In order to implement the method at the second network node side in the embodiment of the present application, an embodiment of the present application further provides a packet transmission apparatus, which is disposed on the second network node, and as shown in fig. 12, the apparatus includes:
a second obtaining unit 121, configured to obtain the first packet; determining that the service corresponding to the first message is a delay sensitive service;
a second processing unit 122, configured to set a first identifier for the first packet; the first identifier represents that the first message has the requirement of time delay sensitivity; and the first message with the first identifier is arranged in a specific queue for shaping, and then the first message with the first identifier is sent out.
In practical application, the second obtaining unit 121 may be implemented by a processor in a message transmission device in combination with a communication interface; the second processing unit 122 may be implemented by a processor in a message transmission device.
It should be noted that: in the message transmission device provided in the foregoing embodiment, when data transmission is performed, only the division of the program modules is illustrated, and in practical applications, the processing distribution may be completed by different program modules according to needs, that is, the internal structure of the device is divided into different program modules, so as to complete all or part of the processing described above. In addition, the message transmission apparatus and the message transmission method provided in the foregoing embodiments belong to the same concept, and specific implementation processes thereof are described in detail in the method embodiments and are not described herein again.
Based on the hardware implementation of the program module, and in order to implement the method on the first network node side in the embodiment of the present application, an embodiment of the present application further provides a network node, as shown in fig. 13, where the network node 130 includes:
a first communication interface 131, which is capable of performing information interaction with other network nodes;
the first processor 132 is connected to the first communication interface 131 to implement information interaction with other network nodes, and is configured to execute, when running a computer program, the method provided by one or more technical solutions of the first network node side; the computer program is stored on the first memory 133.
Specifically, the first communication interface 131 is configured to receive a first message;
the first processor 132 is configured to acquire a first identifier from the first message, the first identifier representing that the first message has a delay-sensitivity requirement; set the first message in a specific queue when the first identifier is acquired, the specific queue being used at least for buffering delay-sensitive messages to be sent; determine the egress rate of the messages in the specific queue by using their ingress rate; and shape the specific queue based on the determined egress rate and send the first message through the first communication interface; wherein
the network node is a network forwarding node.
In an embodiment, the first processor 132 is specifically configured to:
obtaining SID list from the first message; the SID list comprises a plurality of SIDs corresponding to the traffic engineering path;
and under the condition that the current SID is determined to indicate a specific queue on a next hop network node corresponding to the first message, setting the first message in the specific queue.
In an embodiment, the first processor 132 is specifically configured to:
acquiring a prefix SID from the first message;
searching a next hop and an outgoing interface corresponding to the obtained prefix SID in a routing forwarding table;
and under the condition that the searched outbound interface corresponds to the specific queue, setting the first message in the specific queue.
In an embodiment, the first processor 132 is specifically configured to:
and determining the output rate of the specific queue message by using the input rate and combining the queue depth.
In an embodiment, the determining the egress rate of the packet in the specific queue by using the ingress rate and combining with the queue depth includes:
when the queue depth is less than the threshold, the first processor 132 determines the out-rate to be a first rate; the first rate is less than the incoming rate, and the difference between the incoming rate and the first rate is less than a first value;
or,
when the queue depth is equal to the threshold, the first processor 132 determines the out-rate to be the second rate; the second rate is equal to the incoming rate.
or,
when the in-rate is zero, the first processor 132 determines that the out-rate is a third rate; the third rate is a preset rate or a last recorded output rate.
It should be noted that: the specific processing procedure of the first processor 132 can be understood with reference to the above method.
Of course, in practice, the various components in the network node 130 are coupled together by a bus system 134. It will be appreciated that the bus system 134 is used to enable communications among the components. The bus system 134 includes a power bus, a control bus, and a status signal bus in addition to a data bus. For clarity of illustration, however, the various buses are identified in FIG. 13 as the bus system 134.
The first memory 133 in the embodiments of the present application is used to store various types of data to support the operation of the network node 130. Examples of such data include: any computer program for operating on the network node 130.
The method disclosed in the embodiment of the present application may be applied to the first processor 132, or implemented by the first processor 132. The first processor 132 may be an integrated circuit chip having signal processing capabilities. In implementation, the steps of the above method may be implemented by integrated logic circuits of hardware or instructions in the form of software in the first processor 132. The first Processor 132 may be a general-purpose Processor, a Digital Signal Processor (DSP), or other programmable logic device, discrete gate or transistor logic device, discrete hardware component, etc. The first processor 132 may implement or perform the methods, steps, and logic blocks disclosed in the embodiments of the present application. The general purpose processor may be a microprocessor or any conventional processor or the like. The steps of the method disclosed in the embodiments of the present application may be directly implemented by a hardware decoding processor, or implemented by a combination of hardware and software modules in the decoding processor. The software module may be located in a storage medium located in the first memory 133, and the first processor 132 reads the information in the first memory 133 and, in conjunction with its hardware, performs the steps of the foregoing method.
In an exemplary embodiment, the network node 130 may be implemented by one or more Application Specific Integrated Circuits (ASICs), DSPs, Programmable Logic Devices (PLDs), Complex Programmable Logic Devices (CPLDs), Field Programmable Gate Arrays (FPGAs), general-purpose processors, controllers, Micro Controller Units (MCUs), microprocessors, or other electronic components for performing the aforementioned methods.
Based on the hardware implementation of the program module, and in order to implement the method at the second network node side in the embodiment of the present application, an embodiment of the present application further provides a network node, as shown in fig. 14, where the network node 140 includes:
a second communication interface 141 capable of performing information interaction with other network nodes;
the second processor 142 is connected to the second communication interface 141 to implement information interaction with other network nodes, and is configured to execute, when running a computer program, the method provided by one or more technical solutions of the second network node side; the computer program is stored on the second memory 143.
Specifically, the second communication interface 141 is configured to obtain a first packet;
the second processor 142 is configured to:
determine that the service corresponding to the first message is a delay-sensitive service; set a first identifier for the first message, the first identifier representing that the first message has a delay-sensitivity requirement; and set the first message with the first identifier in a specific queue for shaping, and then send the first message with the first identifier out through the second communication interface 141.
It should be noted that: the specific processes of the second processor 142 and the second communication interface 141 can be understood by referring to the above-described methods.
Of course, in practice, the various components in the network node 140 are coupled together by a bus system 144. It will be appreciated that the bus system 144 is used to enable communications among these components. The bus system 144 includes a power bus, a control bus, and a status signal bus in addition to the data bus. For clarity of illustration, however, the various buses are labeled as bus system 144 in fig. 14.
The second memory 143 in the embodiments of the present application is used to store various types of data to support the operation of the network node 140. Examples of such data include: any computer program for operating on the network node 140.
The method disclosed in the embodiment of the present application may be applied to the second processor 142, or implemented by the second processor 142. The second processor 142 may be an integrated circuit chip having signal processing capabilities. In implementation, the steps of the above method may be implemented by an integrated logic circuit of hardware or an instruction in the form of software in the second processor 142. The second processor 142 described above may be a general purpose processor, a DSP, or other programmable logic device, discrete gate or transistor logic device, discrete hardware components, or the like. The second processor 142 may implement or perform the methods, steps and logic blocks disclosed in the embodiments of the present application. A general purpose processor may be a microprocessor or any conventional processor or the like. The steps of the method disclosed in the embodiments of the present application may be directly implemented by a hardware decoding processor, or implemented by a combination of hardware and software modules in the decoding processor. The software module may be located in a storage medium located in the second memory 143, and the second processor 142 reads the information in the second memory 143, and completes the steps of the foregoing method in combination with its hardware.
In an exemplary embodiment, the network node 140 may be implemented by one or more ASICs, DSPs, PLDs, CPLDs, FPGAs, general-purpose processors, controllers, MCUs, microprocessors, or other electronic components for performing the aforementioned methods.
It is understood that the memories (the first memory 133 and the second memory 143) of the embodiments of the present application may be volatile or nonvolatile, and may include both volatile and nonvolatile memories. The nonvolatile memory may be a Read-Only Memory (ROM), a Programmable Read-Only Memory (PROM), an Erasable Programmable Read-Only Memory (EPROM), an Electrically Erasable Programmable Read-Only Memory (EEPROM), a Ferroelectric Random Access Memory (FRAM), a Flash Memory, a magnetic surface memory, an optical disc, or a Compact Disc Read-Only Memory (CD-ROM); the magnetic surface memory may be disk storage or tape storage. The volatile memory may be a Random Access Memory (RAM), which acts as an external cache. By way of illustration and not limitation, many forms of RAM are available, such as Static Random Access Memory (SRAM), Synchronous Static Random Access Memory (SSRAM), Dynamic Random Access Memory (DRAM), Synchronous Dynamic Random Access Memory (SDRAM), Double Data Rate Synchronous Dynamic Random Access Memory (DDR SDRAM), Enhanced Synchronous Dynamic Random Access Memory (ESDRAM), SyncLink Dynamic Random Access Memory (SLDRAM), and Direct Rambus Random Access Memory (DRRAM). The memories described in the embodiments of the present application are intended to comprise, without being limited to, these and any other suitable types of memory.
In order to implement the method of the embodiments of the present application, an embodiment of the present application further provides a message transmission system, where the system includes: a plurality of first network nodes and a second network node.
It should be noted that: the specific processing procedures of the first network node and the second network node are described in detail above, and are not described herein again.
In an exemplary embodiment, the present application further provides a storage medium, specifically a computer storage medium, for example, the first memory 133 storing a computer program, which is executable by the first processor 132 of the network node 130 to complete the steps of the foregoing method on the first network node side. As another example, the second memory 143 stores a computer program, which can be executed by the second processor 142 of the network node 140 to perform the steps of the method on the second network node side. The computer-readable storage medium may be a memory such as an FRAM, a ROM, a PROM, an EPROM, an EEPROM, a Flash Memory, a magnetic surface memory, an optical disc, or a CD-ROM.
It should be noted that: "first," "second," and the like are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order.
The technical means described in the embodiments of the present application may be arbitrarily combined without conflict.
The above description is only a preferred embodiment of the present application, and is not intended to limit the scope of the present application.

Claims (10)

1. A message transmission method is applied to a first network node and comprises the following steps:
receiving a first message;
acquiring a first identifier from the first message; wherein the first identifier indicates that the first message has a delay-sensitivity requirement;
placing the first message in a specific queue when the first identifier is obtained; wherein the specific queue is at least used for buffering delay-sensitive messages to be sent;
determining an output rate of messages in the specific queue by using an input rate of messages in the specific queue;
shaping the specific queue based on the determined output rate, and sending the first message; wherein,
the first network node is a network forwarding node;
the determining the output rate of messages in the specific queue by using the input rate of messages in the specific queue comprises:
determining the output rate of messages in the specific queue by using the input rate in combination with the queue depth.
2. The method of claim 1, wherein the first identifier indicates that the first packet has an exclusive transmission priority.
3. The method of claim 1, wherein the obtaining the first identifier from the first message comprises:
acquiring a segment identifier (SID) list from the first message; wherein the SID list comprises a plurality of SIDs corresponding to a traffic engineering path;
and placing the first message in the specific queue when it is determined that the current SID indicates a specific queue on the next-hop network node corresponding to the first message.
4. The method of claim 1, wherein the obtaining the first identifier from the first message comprises:
acquiring a prefix SID from the first message;
looking up, in a routing forwarding table, an outbound interface corresponding to the acquired prefix SID;
and placing the first message in the specific queue when the found outbound interface corresponds to the specific queue.
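The SID-to-queue mapping described in claims 3 and 4 can be sketched as follows. This is a minimal illustration only; the function name, the dict-based forwarding table, and the addresses are assumptions, not taken from the patent.

```python
# Sketch of the queue selection in claims 3 and 4: a forwarding node
# resolves the prefix SID in a routing forwarding table and, when the
# resulting outbound interface corresponds to the specific queue, places
# the message there. All names and addresses below are illustrative.
from collections import deque

SPECIFIC_QUEUE = deque()      # buffers delay-sensitive messages to be sent
BEST_EFFORT_QUEUE = deque()   # everything else

# routing forwarding table: prefix SID -> outbound interface (claim 4)
FIB = {"2001:db8:a::100": "eth1", "2001:db8:a::200": "eth2"}
# outbound interfaces bound to the specific queue
SPECIFIC_QUEUE_INTERFACES = {"eth1"}

def classify(message, prefix_sid):
    """Enqueue the message according to where its prefix SID resolves."""
    out_if = FIB.get(prefix_sid)
    if out_if in SPECIFIC_QUEUE_INTERFACES:
        SPECIFIC_QUEUE.append(message)
        return "specific"
    BEST_EFFORT_QUEUE.append(message)
    return "best-effort"
```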
5. The method according to any one of claims 1 to 4, wherein the determining the output rate of messages in the specific queue by using the input rate in combination with the queue depth comprises:
determining the output rate to be a first rate when the queue depth is smaller than a threshold value; wherein the first rate is less than the input rate, and the difference between the input rate and the first rate is less than a first value;
or,
determining the output rate to be a second rate when the queue depth is equal to the threshold value; wherein the second rate is equal to the input rate;
or,
determining the output rate to be a third rate when the input rate is zero; wherein the third rate is a preset rate or the last recorded output rate.
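The three rate-determination cases of claim 5 can be sketched as a single function. The case ordering, the `input_rate - first_value / 2` choice for the first rate (any value with a gap below `first_value` would satisfy the claim), and the fallback preference for the last recorded rate are assumptions for illustration.

```python
def determine_output_rate(input_rate, queue_depth, threshold,
                          first_value, preset_rate, last_rate=None):
    """Illustrative sketch of claim 5:
    - input rate zero: fall back to the last recorded output rate, or to
      a preset rate when none has been recorded (third rate);
    - queue depth below the threshold: drain slightly slower than the
      input rate, keeping the gap below first_value (first rate);
    - queue depth at the threshold: match the input rate (second rate)."""
    if input_rate == 0:
        return last_rate if last_rate is not None else preset_rate
    if queue_depth < threshold:
        return input_rate - first_value / 2  # gap of first_value/2 < first_value
    return input_rate
```

Draining slightly slower than the arrival rate while the queue is shallow lets the queue depth grow toward the threshold, at which point the output rate locks to the input rate, which keeps the shaper's buffering bounded.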
6. A method for packet transmission, comprising:
a second network node acquires a first message and determines that the service corresponding to the first message is a delay-sensitive service;
the second network node sets a first identifier for the first message; wherein the first identifier indicates that the first message has a delay-sensitivity requirement;
the second network node places the first message carrying the first identifier in a specific queue for shaping, and then sends the first message carrying the first identifier;
a first network node receives the first message and acquires the first identifier from the received first message;
the first network node places the received first message in a specific queue when the first identifier is obtained; wherein the specific queue is at least used for buffering delay-sensitive messages to be sent;
the first network node determines an output rate of messages in the specific queue by using an input rate of messages in the specific queue, shapes the specific queue based on the determined output rate, and sends the received first message; wherein,
the second network node is a network edge node, and the first network node is a network forwarding node;
the determining, by the first network node, the output rate of messages in the specific queue by using the input rate of messages in the specific queue comprises:
the first network node determining the output rate of messages in the specific queue by using the input rate in combination with the queue depth.
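The division of roles in claim 6 can be sketched as two functions, one per node. The flag value and the dict-based message layout are illustrative assumptions, not from the patent.

```python
# Illustrative end-to-end flow of claim 6: the edge node (second network
# node) marks a delay-sensitive message with a first identifier, and the
# forwarding node (first network node) that sees the identifier steers
# the message into the specific queue for shaping.
FIRST_IDENTIFIER = 0x1  # hypothetical marker for delay-sensitive traffic

def edge_node_send(message, delay_sensitive):
    """Second network node: set the first identifier when the service
    corresponding to the message is delay-sensitive."""
    if delay_sensitive:
        message["flags"] = message.get("flags", 0) | FIRST_IDENTIFIER
    return message

def forwarding_node_receive(message, specific_queue, normal_queue):
    """First network node: place messages carrying the first identifier
    in the specific queue; everything else goes to a normal queue."""
    if message.get("flags", 0) & FIRST_IDENTIFIER:
        specific_queue.append(message)
    else:
        normal_queue.append(message)
```

Because the classification decision is carried in the message itself, every forwarding node along the path can apply the same queueing treatment without per-flow state.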
7. A message transmission apparatus, disposed on a first network node, comprising:
a receiving unit, configured to receive a first message;
a first obtaining unit, configured to acquire a first identifier from the first message; wherein the first identifier indicates that the first message has a delay-sensitivity requirement;
a first processing unit, configured to place the first message in a specific queue when the first identifier is obtained, wherein the specific queue is at least used for buffering delay-sensitive messages to be sent; determine an output rate of messages in the specific queue by using an input rate of messages in the specific queue; and shape the specific queue based on the determined output rate, and send the first message; wherein,
the first network node is a network forwarding node;
the first processing unit is configured to determine the output rate of messages in the specific queue by using the input rate in combination with the queue depth.
8. A network node, comprising: a first communication interface and a first processor; wherein,
the first communication interface is configured to receive a first message;
the first processor is configured to acquire a first identifier from the first message, wherein the first identifier indicates that the first message has a delay-sensitivity requirement; place the first message in a specific queue when the first identifier is obtained, wherein the specific queue is at least used for buffering delay-sensitive messages to be sent; determine an output rate of messages in the specific queue by using an input rate of messages in the specific queue; and shape the specific queue based on the determined output rate, and send the first message through the first communication interface; wherein,
the network node is a network forwarding node;
the first processor is configured to determine the output rate of messages in the specific queue by using the input rate in combination with the queue depth.
9. A network node, comprising: a first processor and a first memory for storing a computer program capable of running on the first processor,
wherein the first processor is adapted to perform the steps of the method of any one of claims 1 to 5 when running the computer program.
10. A storage medium having a computer program stored thereon, the computer program, when being executed by a processor, implementing the steps of the method of any one of claims 1 to 5.
CN202010157089.8A 2020-03-09 2020-03-09 Message transmission method, device, network node and storage medium Active CN113382442B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202010157089.8A CN113382442B (en) 2020-03-09 2020-03-09 Message transmission method, device, network node and storage medium
PCT/CN2021/079756 WO2021180073A1 (en) 2020-03-09 2021-03-09 Packet transmission method and device, network node, and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010157089.8A CN113382442B (en) 2020-03-09 2020-03-09 Message transmission method, device, network node and storage medium

Publications (2)

Publication Number Publication Date
CN113382442A CN113382442A (en) 2021-09-10
CN113382442B true CN113382442B (en) 2023-01-13

Family

ID=77568384

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010157089.8A Active CN113382442B (en) 2020-03-09 2020-03-09 Message transmission method, device, network node and storage medium

Country Status (2)

Country Link
CN (1) CN113382442B (en)
WO (1) WO2021180073A1 (en)

Families Citing this family (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115941484A (en) * 2021-09-30 2023-04-07 中兴通讯股份有限公司 Network architecture, network communication method, electronic device and storage medium
WO2023065283A1 (en) * 2021-10-22 2023-04-27 Nokia Shanghai Bell Co., Ltd. Ran enhancement taking into account cbs behaviour in tsc
CN116264567A (en) * 2021-12-14 2023-06-16 中兴通讯股份有限公司 Message scheduling method, network equipment and computer readable storage medium
TWI783827B (en) * 2021-12-15 2022-11-11 瑞昱半導體股份有限公司 Wifi device
CN114257559B (en) * 2021-12-20 2023-08-18 锐捷网络股份有限公司 Data message forwarding method and device
WO2023123104A1 (en) * 2021-12-29 2023-07-06 新华三技术有限公司 Message transmission method and network device
CN116455804A (en) * 2022-01-10 2023-07-18 中兴通讯股份有限公司 Path calculation method, node, storage medium, and computer program product
CN116647894A (en) * 2022-02-15 2023-08-25 大唐移动通信设备有限公司 Data scheduling method, device, equipment and storage medium
CN114726805B (en) * 2022-03-28 2023-11-03 新华三技术有限公司 Message processing method and device
CN114866453B (en) * 2022-05-18 2024-01-19 中电信数智科技有限公司 Message forwarding method and system based on G-SRv protocol
WO2024016327A1 (en) * 2022-07-22 2024-01-25 新华三技术有限公司 Packet transmission
CN117897936A (en) * 2022-08-16 2024-04-16 新华三技术有限公司 Message forwarding method and device
CN115086238B (en) * 2022-08-23 2022-11-22 中国人民解放军国防科技大学 TSN network port output scheduling device

Citations (4)

Publication number Priority date Publication date Assignee Title
CN103716255A (en) * 2012-09-29 2014-04-09 华为技术有限公司 Message processing method and device
CN109981457A (en) * 2017-12-27 2019-07-05 华为技术有限公司 A kind of method of Message processing, network node and system
CN110290072A (en) * 2018-03-19 2019-09-27 华为技术有限公司 Flow control methods, device, the network equipment and storage medium
CN110324242A (en) * 2018-03-29 2019-10-11 华为技术有限公司 A kind of method, network node and system that message is sent

Family Cites Families (1)

Publication number Priority date Publication date Assignee Title
US6760309B1 (en) * 2000-03-28 2004-07-06 3Com Corporation Method of dynamic prioritization of time sensitive packets over a packet based network


Also Published As

Publication number Publication date
CN113382442A (en) 2021-09-10
WO2021180073A1 (en) 2021-09-16

Similar Documents

Publication Publication Date Title
CN113382442B (en) Message transmission method, device, network node and storage medium
EP3624408B1 (en) Method for generating forwarding table entry, controller, and network device
US11706149B2 (en) Packet sending method, network node, and system
CN107786465B (en) Method and device for processing low-delay service flow
CN112994961B (en) Transmission quality detection method, device, system and storage medium
US11394646B2 (en) Packet sending method, network node, and system
US11722407B2 (en) Packet processing method and apparatus
CN112019433B (en) Message forwarding method and device
JP7231749B2 (en) Packet scheduling method, scheduler, network device and network system
US7602809B2 (en) Reducing transmission time for data packets controlled by a link layer protocol comprising a fragmenting/defragmenting capability
CN106453138B (en) Message processing method and device
US9065764B2 (en) Method, apparatus and system for maintaining quality of service QoS
CN111092858B (en) Message processing method, device and system
CN114124781B (en) Method and system for forwarding message in SRv, electronic equipment and storage medium
JPWO2009037732A1 (en) Communication device in label switching network
EP4336795A1 (en) Message transmission method and network device
WO2005079022A1 (en) Packet communication network, route control server, route control method, packet transmission device, admission control server, light wavelength path setting method, program, and recording medium
WO2024001733A1 (en) Packet transmission method, apparatus, and system
KR101445466B1 (en) Source based queue selection mechanism in the routing environment
Waqar et al. QoS assurance for PON-based fronthaul and backhaul systems of 5G cloud radio access networks
CN116938719A (en) Deterministic service method, equipment and medium for realizing network bottom layer resource perception
CN117527667A (en) Service function chain processing method and device
CN115150313A (en) Method, device, storage medium and system for sending message and generating route
CN117527668A (en) Data transmission method, device, network equipment and storage medium
Axer et al. Requirements on real-time-capable automotive ethernet architectures

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant