WO2021180073A1 - Packet transmission method and device, network node, and storage medium - Google Patents

Packet transmission method and device, network node, and storage medium

Info

Publication number
WO2021180073A1
Authority
WO
WIPO (PCT)
Prior art keywords
rate
message
specific queue
identifier
network node
Prior art date
Application number
PCT/CN2021/079756
Other languages
English (en)
Chinese (zh)
Inventor
杜宗鹏
耿亮
Original Assignee
中国移动通信有限公司研究院
中国移动通信集团有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 中国移动通信有限公司研究院, 中国移动通信集团有限公司
Publication of WO2021180073A1

Classifications

    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04W - WIRELESS COMMUNICATION NETWORKS
    • H04W28/00 - Network traffic management; Network resource management
    • H04W28/16 - Central resource management; Negotiation of resources or communication parameters, e.g. negotiating bandwidth or QoS [Quality of Service]
    • H04W28/18 - Negotiating wireless communication parameters
    • H04W28/22 - Negotiating communication rate
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04L - TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L47/00 - Traffic control in data switching networks
    • H04L47/10 - Flow control; Congestion control
    • H04L47/22 - Traffic shaping
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04L - TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L47/00 - Traffic control in data switching networks
    • H04L47/50 - Queue scheduling
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04W - WIRELESS COMMUNICATION NETWORKS
    • H04W28/00 - Network traffic management; Network resource management
    • H04W28/02 - Traffic management, e.g. flow control or congestion control
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04W - WIRELESS COMMUNICATION NETWORKS
    • H04W28/00 - Network traffic management; Network resource management
    • H04W28/02 - Traffic management, e.g. flow control or congestion control
    • H04W28/0231 - Traffic management, e.g. flow control or congestion control, based on communication conditions
    • H04W28/0236 - Traffic management based on communication conditions: radio quality, e.g. interference, losses or delay

Definitions

  • This application relates to the field of Internet Protocol (IP, Internet Protocol) networks, and in particular to a method, device, network node, and storage medium for message transmission.
  • IP Internet Protocol
  • embodiments of the present application provide a message transmission method, device, network node, and storage medium.
  • the embodiment of the present application provides a message transmission method, which is applied to a first network node, and includes:
  • the first identifier represents the delay-sensitive demand of the first message;
  • the specific queue is at least used for buffering the delay-sensitive messages to be sent;
  • the specific queue is shaped, and the first packet is sent;
  • the first network node is a network forwarding node.
  • the first identifier indicates that the first message has an exclusive sending priority.
  • the obtaining the first identifier from the first message includes:
  • SIDs segment identities
  • the first message is set in the specific queue.
  • the obtaining the first identifier from the first message includes:
  • the first packet is set in the specific queue.
  • the using the incoming rate of packets in the specific queue to determine the outgoing rate of packets in the specific queue includes:
  • the ingress rate is combined with the queue depth to determine the outbound rate of packets in the specific queue.
  • the use of the incoming rate in combination with the queue depth to determine the outgoing rate of packets in the specific queue includes:
  • the out rate is a first rate; the first rate is less than the in rate, and the difference between the in rate and the first rate is less than the first value;
  • the outgoing rate is the third rate; the third rate is the preset rate or the last recorded out rate.
  • the embodiment of the present application also provides a message transmission method, including:
  • the second network node obtains the first message; and determines that the service corresponding to the first message is a delay-sensitive service;
  • the second network node sets a first identifier for the first message; the first identifier represents the delay-sensitive demand of the first message;
  • the second network node sets the first message with the first identifier in a specific queue for shaping, and then sends the first message with the first identifier;
  • the first network node receives the first message, and obtains the first identifier from the received first message;
  • when the first network node obtains the first identifier, the first network node sets the received first message in a specific queue; the specific queue is at least used to buffer the delay-sensitive messages to be sent;
  • the first network node uses the incoming rate of the specific queue messages to determine the outgoing rate of the specific queue messages; and, based on the determined outgoing rate, the specific queue is shaped and the received first message is sent; wherein,
  • the second network node is a network edge node; the first network node is a network forwarding node.
  • An embodiment of the present application also provides a message transmission device, which is set on a first network node, and includes:
  • the receiving unit is configured to receive the first message
  • the first acquiring unit is configured to acquire a first identifier from a first message; the first identifier represents the delay-sensitive requirement of the first message;
  • the first processing unit is configured to set the first message in a specific queue when the first identifier is obtained; the specific queue is at least used for buffering the delay-sensitive messages to be sent; use the incoming rate of messages in the specific queue to determine the outgoing rate of messages in the specific queue; and, based on the determined outgoing rate, shape the specific queue and send out the first message; wherein,
  • the first network node is a network forwarding node.
  • the embodiment of the present application also provides a network node, including: a first communication interface and a first processor; wherein,
  • the first communication interface is configured to receive a first message
  • the first processor is configured to obtain a first identifier from the first message; the first identifier represents the delay-sensitive requirement of the first message; when the first identifier is obtained, set the first message in a specific queue; the specific queue is at least used for buffering delay-sensitive messages to be sent; use the incoming rate of the specific queue messages to determine the outgoing rate of the specific queue messages; and, based on the determined outgoing rate, shape the specific queue and send the first message through the first communication interface; wherein,
  • the network node is a network forwarding node.
  • An embodiment of the present application also provides a network node, including: a first processor and a first memory configured to store a computer program that can run on the processor,
  • the first processor is configured to execute the steps of any method on the side of the first network node when running the computer program.
  • the embodiment of the present application also provides a storage medium on which a computer program is stored, and when the computer program is executed by a processor, the steps of any method on the first network node side described above are implemented.
  • the first network node receives the first message; obtains a first identifier from the first message, the first identifier representing the delay-sensitive requirement of the first message; in the case of obtaining the first identifier, sets the first message in a specific queue, the specific queue being at least used for buffering delay-sensitive messages to be sent; uses the incoming rate of messages in the specific queue to determine the outgoing rate of messages in the specific queue; and, based on the determined outgoing rate, shapes the specific queue and sends out the first message; wherein the first network node is a network forwarding node.
  • On the network forwarding node, each low-latency service flow is not recognized individually.
  • In this way, the network forwarding node can ensure that low-latency traffic is sent in an orderly manner on demand, and the processing of the network forwarding node minimizes the packet loss and buffering delay caused by micro-bursts of packets in the network, which can meet the delay requirements of the business.
  • Figure 1 is a schematic diagram of a deterministic network architecture
  • Figure 2 is a schematic diagram of a micro burst of an IP device
  • FIG. 3 is a schematic diagram of an IP network architecture
  • FIG. 4 is a schematic flowchart of a method for message transmission on the side of a second network node in an embodiment of the present application
  • FIGS. 5a and 5b are schematic diagrams of adjacent SID formats according to an embodiment of this application.
  • Fig. 6 is a schematic diagram of a prefix SID format according to an embodiment of the application.
  • FIG. 7 is a schematic flowchart of a method for message transmission on the side of a first network node in an application embodiment
  • FIG. 8 is a flowchart of a method for message transmission according to an application embodiment
  • FIG. 9 is a schematic diagram of the corresponding relationship between the inbound interface and the outbound interface of a network forwarding node according to an application embodiment of this application;
  • FIG. 10 is a schematic diagram of the relationship between a physical interface and a virtual interface according to an embodiment of the application.
  • FIG. 11 is a schematic structural diagram of a message transmission device according to an embodiment of the application.
  • FIG. 12 is a schematic structural diagram of another message transmission device according to an embodiment of the application.
  • FIG. 13 is a schematic diagram of a network node structure according to an embodiment of this application.
  • FIG. 14 is a schematic diagram of another network node structure according to an embodiment of the application.
  • Time Sensitive Networking (TSN) evolved from Ethernet Audio/Video Bridging (AVB), which was used for audio and video networks. It is a protocol set defined by the Institute of Electrical and Electronics Engineers (IEEE). It is mainly used for small dedicated networks (Ethernet with a delay of 10 to 100 μs), such as in-vehicle networks (which can be understood as a network formed by multiple devices installed in a vehicle) or industrial control networks, and it has also been defined for larger networks, for example fronthaul networks. Its main ideas are high priority and packet preemption.
  • the scheduling mechanism of TSN network mainly includes the following aspects:
  • CBS Credit-based shaper
  • a scheduling mechanism for a queue: the queue gains credit at the agreed rate, and data packets may be sent only when the credit value is greater than or equal to 0;
  • when packets are sent, the credit value is reduced. The effect of this shaper is to shape the packets in the queue and send them one by one at the agreed rate (also called pacing). After shaping, this kind of traffic generally coexists with BE traffic on the sending port, and it requires a higher priority to ensure that it is not interfered with by BE traffic, so as to maintain the effect of the previous queue shaping (a minimal sketch of such a shaper is given below);
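  • The following is a minimal, illustrative sketch of such a credit-based shaper in Python; the class name, the slope parameters and the simplified accounting (credit is not clamped when the queue is empty) are assumptions for illustration, not part of this application or of the IEEE specification.

```python
class CreditBasedShaper:
    """Minimal credit-based shaper sketch: credit accrues at the agreed (idle) rate,
    a packet may be sent only while credit >= 0, and sending consumes credit."""

    def __init__(self, idle_slope_bps, link_rate_bps):
        self.idle_slope = idle_slope_bps                   # credit gained per second while waiting
        self.send_slope = idle_slope_bps - link_rate_bps   # credit change per second while sending
        self.link_rate = link_rate_bps
        self.credit = 0.0
        self.last_ts = 0.0

    def can_send(self, now):
        # Accrue credit for the time elapsed since the last event.
        self.credit += (now - self.last_ts) * self.idle_slope
        self.last_ts = now
        return self.credit >= 0.0

    def on_sent(self, frame_bits, now):
        # Sending a frame consumes credit, which paces the queue to the agreed rate.
        tx_time = frame_bits / self.link_rate
        self.credit += tx_time * self.send_slope
        self.last_ts = now + tx_time
```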
  • Time Sensitive Queues (TSQ): using a gating mechanism, all queues on the device follow a cyclic scheduling mechanism (a gating table with nanosecond granularity controls the opening and closing of the queues). Synchronization between devices relies on the Precision Time Protocol (PTP); through the coordination of each device on the path, the gates can be opened and closed accurately, supporting the fastest forwarding of TSN traffic;
  • PTP Precision Time Protocol
  • Transmission preemption: a packet preemption strategy that allows high-priority packets to interrupt low-priority packets being sent;
  • Ingress scheduling and Cyclic Queuing and Forwarding (CQF): packets arrive within the correct time window at the ingress and are then guaranteed to be sent from the egress within a certain time window, using several periodically cycling queues as the sending queues;
  • CQF Input scheduling and Cyclic Queuing and Forwarding
  • Urgency-based scheduling, also known as Asynchronous Traffic Shaping (ATS):
  • ATS Asynchronous Traffic Shaping
  • this mechanism currently supports two shaping methods, namely Length Rate Quotient (LRQ) and Token Bucket Emulation (TBE).
  • LRQ Length Rate Quotient
  • TBE Token Bucket Emulation
  • the scheduling effects of these two methods are similar to CBS and are applicable to the pacing of queue traffic (a token-bucket sketch is given below);
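  • The sketch below illustrates a token-bucket-emulation style shaper of the kind mentioned above; the rate, the burst size and the way an eligibility delay is computed are illustrative assumptions and not a literal rendering of the IEEE ATS text.

```python
import time

class TokenBucketEmulation:
    """Per-queue token bucket sketch: a packet is eligible immediately if enough tokens
    have accumulated; otherwise an eligibility delay is returned, which paces the flow."""

    def __init__(self, rate_bytes_per_s, burst_bytes):
        self.rate = rate_bytes_per_s
        self.burst = burst_bytes
        self.tokens = burst_bytes
        self.last = time.monotonic()

    def eligibility_delay(self, pkt_len_bytes):
        now = time.monotonic()
        # Refill tokens for the elapsed interval, capped at the burst size.
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= pkt_len_bytes:
            self.tokens -= pkt_len_bytes
            return 0.0                      # eligible to be sent immediately
        # Otherwise wait until enough tokens have accumulated for this packet.
        wait = (pkt_len_bytes - self.tokens) / self.rate
        self.tokens = 0.0                   # the wait exactly refills what this packet needs
        self.last = now + wait
        return wait
```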
  • Packet Replication and Elimination: multiple transmission and selective reception, also called Frame Replication and Elimination (FRER).
  • Deterministic demand is by no means limited to the local layer-2 network. More than 20 authors from different organizations have jointly written a Use Case document, which elaborates the needs in nine major industries, including professional audio and video (pro audio & video), electrical utilities, building automation systems, wireless for industrial, cellular radio, industrial machine-to-machine communication (industrial M2M), mining industry, private blockchain, and network slicing; at the same time, the scale of the demand scenarios may be very large, involving a national network, a large number of devices, and ultra-long distances. Based on this, Deterministic Networking (DetNet) was created. DetNet is a deterministic network architecture defined by the Internet Engineering Task Force (IETF). It focuses on the determinism of the layer-3 network and extends the capabilities of TSN from layer 2 to layer 3, as shown in Figure 1.
  • IETF Internet Engineering Task Force
  • TSN is characterized by a small network scale and a relatively simple traffic model, which make it possible to identify each stream or to synchronize the network; therefore, the related mechanisms of TSN are mainly developed for small-scale networks, and in large-scale networks it is difficult to apply them directly to IP forwarding equipment; moreover, the TSN mechanisms make the processing of deterministic traffic relatively complicated.
  • the current scheduling mechanism of IP networks belongs to the weighted round-robin (WRR) type.
  • WRR weighted round-robin
  • the core idea of packet forwarding is to send data packets as quickly as possible.
  • the core indicators are line rate and throughput.
  • the line rate means that, for a certain type of message, such as a stream of 128-byte packets, the device can receive at the port rate and transmit at the port rate when forwarding.
  • With the current scheduling of IP devices (i.e., network nodes), even if pacing is good before scheduling, the messages will be gathered together. Therefore, in its design concept, the scheduling mechanism of the IP network does not match some requirements of TSN and DetNet (where 100% reliability and deterministic delay must be guaranteed), so it is difficult to directly apply the TSN scheduling mechanism in some details.
  • the characteristics of IP forwarding as the old mechanism are statistical multiplexing, low cost, large throughput, and the design concept conforms to the burst traffic model of IP; the scheduling characteristic of TSN as the new mechanism is determinism. For specific traffic, it achieves a scheduling effect similar to constant bit rate (CBR).
  • CBR constant bit rate
  • Network forwarding nodes, that is, network nodes that forward packets, can also be called P nodes, P routers, or intermediate routers, as shown in Figure 3.
  • it is difficult to rely on the operator (forwarding) node to identify the service traffic flow by flow.
  • the network forwarding node (on which the forwarding pressure is generally high) cannot be used to analyze each flow. This is because the network forwarding node is generally responsible for forwarding packet by packet and is not suited to supporting identification work for too many flows.
  • the number of flows on the network forwarding node is large; if a single flow changes, it is not advisable to constantly change the bandwidth reservation of the network forwarding node on the control plane. This also follows the current design concept of IP forwarding routers: in an IP network there are many traffic bursts, and it is preferable that, even if the access requirements of one or more flows change, the network forwarding nodes do not have to perceive all of these changes.
  • each low-latency flow is not identified individually; instead, the traffic with low-latency requirements is identified as a whole based on the characteristics of the packets, so as to ensure that low-latency traffic is sent on demand.
  • the embodiment of the present application provides a message transmission method, which is applied to a second network node. As shown in FIG. 4, the method includes:
  • Step 401 The second network node obtains the first message; and determines that the service corresponding to the first message is a delay-sensitive service;
  • Step 402 The second network node sets a first identifier for the first message
  • the first identifier represents the delay-sensitive demand of the first message.
  • Step 403 The second network node sets the first message with the first identifier in a specific queue for shaping, and then sends the first message with the first identifier.
  • the second network node is a network edge node, which may be referred to as a PE node, a PE router, etc., such as an operator edge node in a backbone network.
  • in step 401, when the first packet is accessed from a specific virtual local area network (VLAN) or from a specific interface, or when the first packet is accessed from a specific interface or VLAN and carries an agreed priority, the second network node considers the service corresponding to the first message to be a delay-sensitive service, that is, a low-latency service.
  • other methods may also be used to identify that the service corresponding to the first message is a delay-sensitive service, which is not limited in the embodiment of the present application.
  • the first identifier characterizes the delay-sensitive requirement of the first message, which can also be understood as the first identifier characterizing that the type of the first message is a delay-sensitive type.
  • the first identifier may be a priority identifier, which may identify that low-latency traffic occupies a higher exclusive priority, such as 6.
  • the first identifier indicates that the first packet has an exclusive sending priority.
  • the first identifier may also be an adjacent SID of a segment routing (SR) (it can be expressed as adj SID in English, and the type of adjacent SID is the End.X function in SRv6).
  • SR segment routing
  • Figures 5a and 5b show the formats of two adjacent SIDs. In practical applications, you can specify a specific algorithm in the algorithm section, such as 200, which is used for low-latency traffic transmission.
  • the first identifier may also be a prefix SID of an SR (English may be expressed as Prefix SID, and the type of the prefix SID is End function in SRv6).
  • the adjacent SID in the format shown in FIG. 5a is suitable for a point-to-point (P2P) connection scenario (Type: 43), and the adjacent SID in the format shown in FIG. 5b is suitable for a local area network (LAN) scenario (Type: 44).
  • P2P point-to-point
  • LAN local area network
  • Figure 6 shows the format of a prefix SID.
  • when the End function (prefix SID) is advertised, it is carried as a sub-TLV of the Locator TLV in the Intermediate System to Intermediate System (ISIS) protocol; in the algorithm part, a specific algorithm, such as 200, can be specified for low-latency traffic transmission.
  • ISIS Intermediate System to Intermediate System
  • the second network node may perform separate shaping for each access delay-sensitive service, so that the packets are sent out one by one at a certain rate (the rate required by the service flow);
  • the second network node can obtain the rate through a certain implementation method, for example, through manual/network management configuration (i.e., static configuration), a control plane transfer mode (which can also be understood as control plane notification), or a data plane transfer method (also called a data plane notification method).
  • the method for shaping the second network node may use CBS, or LRQ, TBE of ATS, etc., which is not limited in the embodiment of the present application.
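  • A minimal sketch of this edge-node (second network node) processing is given below; the VLAN and interface values, the exclusive priority value 6, the flow key and the per-flow shaper interface are assumptions used only for illustration.

```python
LOW_LATENCY_VLANS = {100}            # assumed: traffic from these VLANs is delay sensitive
LOW_LATENCY_IFACES = {"ge-0/0/1"}    # assumed: or traffic arriving on these access interfaces
EXCLUSIVE_PRIORITY = 6               # the first identifier used in this sketch

def is_delay_sensitive(pkt):
    # Step 401: identify a delay-sensitive (low-latency) service by its access VLAN
    # or access interface, optionally combined with an agreed priority in the packet.
    return pkt.get("vlan") in LOW_LATENCY_VLANS or pkt.get("in_iface") in LOW_LATENCY_IFACES

def pe_ingress(pkt, flow_shapers, best_effort_queue):
    if not is_delay_sensitive(pkt):
        best_effort_queue.append(pkt)
        return
    pkt["priority"] = EXCLUSIVE_PRIORITY          # step 402: set the first identifier
    flow_shapers[pkt["flow_id"]].enqueue(pkt)     # step 403: per-flow shaping (e.g. CBS/ATS)
```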
  • an embodiment of the present application also provides a message transmission method, which is applied to a first network node, as shown in FIG. 7, including:
  • Step 701 Receive the first message
  • Step 702 Obtain the first identifier from the first message
  • the first identifier represents the delay-sensitive demand of the first message.
  • Step 703 When the first identifier is obtained, set the first message in a specific queue;
  • the specific queue is at least used for buffering (also can be understood as placing) delay-sensitive messages to be sent.
  • Step 704 Use the incoming rate of packets in the specific queue to determine the outgoing rate of packets in the specific queue;
  • Step 705 Shape the specific queue based on the determined output rate, and then send the first message.
  • the first network node is a network forwarding node, which may be called a P node, a P router, etc., such as an operator node in a backbone network.
  • the first network node receives the first message from a previous hop node
  • the previous hop node may be a second network node or a network forwarding node.
  • the first network node configures a specific queue in which all delay-sensitive messages are uniformly shaped before being sent; that is, all packets carrying the exclusive priority will be aggregated into this queue.
  • when the first identifier is an adjacent SID of SR, the first network node recognizes that the current SID is issued by itself and forwards the message to a pre-configured specific queue, in which all delay-sensitive messages are uniformly shaped before being sent; that is, in actual applications, all packets whose current SID is this adjacent SID will be aggregated into this queue.
  • the SID list is obtained from the first message; the SID list includes multiple SIDs corresponding to the traffic engineering path;
  • the first message is set in the specific queue.
  • the current SID refers to the SID corresponding to the destination address (DA) in the received first message; accordingly, the next-hop network node corresponding to the first message refers to the network node corresponding to the DA in the first message, that is, the network node that next receives the first message.
  • when the first identifier is a prefix SID of SR, the first network node recognizes that the SID is not issued by itself, searches the routing and forwarding table according to the SID, and finds that the outbound interface corresponds to a pre-configured specific queue, thereby forwarding the message to that specific queue, in which all delay-sensitive packets will be uniformly shaped and sent out. That is, in actual applications, all packets whose outbound interface in the routing table is the specific queue will be aggregated into this queue.
  • the prefix SID is obtained from the first message
  • the first packet is set in the specific queue.
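  • The following sketch summarizes how the three forms of the first identifier discussed above (exclusive priority, adjacent SID, prefix SID) can steer a packet into the pre-configured specific queue on the first network node; all table contents and helper names are assumptions used only for illustration.

```python
EXCLUSIVE_PRIORITY = 6
LOCAL_ADJ_SIDS = {"SID13": ("Iout3", "Queue3")}   # assumed End.X SIDs issued by this node
PREFIX_SID_FIB = {"SID9": ("Iout3", "Queue3")}    # assumed routing entries whose outbound
                                                  # interface is the specific queue

def lookup_fib(da):
    # Placeholder for the normal route lookup of this node.
    return "Iout3"

def select_queue(pkt):
    # Exclusive priority: aggregate every priority-6 packet into the specific queue.
    if pkt.get("priority") == EXCLUSIVE_PRIORITY:
        return lookup_fib(pkt["da"]), "specific_queue"
    # Adjacent SID: the current SID (the DA of the received packet) was issued by this
    # node and directly names the outbound interface and its specific queue.
    if pkt["da"] in LOCAL_ADJ_SIDS:
        return LOCAL_ADJ_SIDS[pkt["da"]]
    # Prefix SID: not issued by this node; the routing/forwarding table entry points
    # at an outbound interface that corresponds to the specific queue.
    if pkt["da"] in PREFIX_SID_FIB:
        return PREFIX_SID_FIB[pkt["da"]]
    return lookup_fib(pkt["da"]), "best_effort_queue"
```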
  • In other words, the output rate of the messages is limited according to the incoming packet rate of the destination virtual interface: each port, based on the rate of the received low-latency traffic messages (that is, the incoming rate of the virtual interface), sends these low-latency traffic messages while maintaining a certain buffer depth (which can also be understood as a certain message buffer size), rather than according to the previous mechanism of forwarding as soon as possible (i.e., the BE forwarding mechanism).
  • In an embodiment, step 704 may include: combining the incoming rate with the queue depth to determine the outgoing rate of packets in the specific queue; here, the queue depth can be understood as the number of packets in the queue.
  • the using the inbound rate in combination with the queue depth to determine the outbound rate of packets in the specific queue includes:
  • the out rate is a first rate; the first rate is less than the in rate, and the difference between the in rate and the first rate is less than the first value;
  • the outgoing rate is the third rate; the third rate is the preset rate or the last recorded out rate.
  • a statistical period for determining the message rate can be set, for example, 20 ⁇ s, counting the number of messages entering the specific queue within 20 ⁇ s, and determining the incoming rate based on this.
  • the first value can be set according to needs, as long as the incoming rate is slightly greater than the outgoing rate.
  • the outbound rate may be the fourth rate, and the fourth rate is the set outbound rate threshold (It can be set as required, and is less than the input rate).
  • the preset rate can be set as required.
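  • A possible sketch of this rate determination is shown below; the 20 μs statistical period appears above, but the queue-depth threshold, the first value, the preset rate and the exact conditions under which each of the three rates is chosen are assumptions made only for illustration.

```python
STATS_PERIOD_S = 20e-6      # statistical period for measuring the in rate (as above)

def measure_in_rate(bits_in_period, period_s=STATS_PERIOD_S):
    # In rate of the specific queue over one statistical period.
    return bits_in_period / period_s

def determine_out_rate(in_rate, queue_depth, last_out_rate,
                       first_value=50e3, depth_target=10, preset_rate=1e6):
    """Return the out rate of the specific queue (all rates in bit/s).

    The branch conditions below are assumed: the text only defines the first rate
    (slightly below the in rate, gap smaller than the first value), the second rate
    (equal to the in rate) and the third rate (a preset or last recorded out rate).
    """
    if in_rate == 0:
        # No low-latency packets measured in this period: use the third rate.
        return last_out_rate if last_out_rate else preset_rate
    if queue_depth < depth_target:
        # Keep a certain buffer depth: drain slightly slower than packets arrive,
        # with the gap kept below the first value (first rate).
        return max(in_rate - first_value, 0.0)
    # Otherwise send at the measured in rate (second rate).
    return in_rate
```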
  • the shaping method of the first network node may adopt CBS, or LRQ, TBE of ATS, etc., which is not limited in the embodiment of the present application.
  • the embodiment of the present application also provides a message transmission method. As shown in FIG. 8, the method includes:
  • Step 801 The second network node obtains the first message; and determines that the service corresponding to the first message is a delay-sensitive service;
  • Step 802 The second network node sets a first identifier for the first message; the first identifier represents the delay-sensitive demand of the first message;
  • Step 803 The second network node sets the first message with the first identifier in a specific queue for shaping, and then sends the first message with the first identifier;
  • Step 804 The first network node receives the first message, and obtains the first identifier from the received first message, and when the first identifier is obtained, sets the received first message in a specific queue;
  • the specific queue is at least used for buffering delay-sensitive messages to be sent;
  • Step 805 The first network node uses the incoming rate of the specific queue message to determine the outgoing rate of the specific queue message; and based on the determined outgoing rate, the specific queue is shaped, and then the received The first message.
  • the second network node is a network edge node; the first network node is a network forwarding node.
  • a first network node receives a first message; obtains a first identifier from the first message; and the first identifier represents the delay-sensitive requirement of the first message
  • the first message is set in a specific queue; the specific queue is at least used for buffering delay-sensitive messages to be sent; using the specific queue message Incoming rate, determining the outgoing rate of messages in the specific queue; based on the determined outgoing rate, shaping the specific queue, and sending out the first message;
  • the first network node is a network forwarding node; on the network forwarding node, each low-latency service flow is not identified individually, only the traffic with low-latency requirements is identified as a whole based on the characteristics of the packets, and the packets with low-latency requirements are set in a specific queue for shaping.
  • In this way, the network forwarding node can ensure that low-latency traffic is sent in an orderly manner, and the processing of the network forwarding node minimizes the packet loss and buffering delay caused by micro-bursts of packets in the network, which can meet the delay requirements of the business.
  • the transmission of low-latency services uses exclusive priority.
  • low-latency traffic and BE traffic use the same destination IP. At this time, low-latency traffic needs to be distinguished, and these low-latency traffic need exclusive priority.
  • the message transmission process of this application embodiment includes:
  • Step 1: The network edge node (such as the PE1 node) identifies the flow of packet Packet1 according to the related technology, shapes the queue where packet Packet1 is located, and confirms that the correct priority is assigned, i.e., the exclusive priority, such as Priority 6;
  • packet Packet1 belongs to a low-latency flow.
  • Shaping can be done by CBS or ATS.
  • Step 2: Packet Packet1 arrives at the inbound interface Iin1 of the network forwarding node P1, and the network forwarding node P1 looks up the outbound interface, Iout3, according to the DA of packet Packet1;
  • a specific queue corresponding to Priority 6 is configured on Iout3. This specific queue serves low-latency traffic, and this specific queue is associated with Priority 6.
  • the priority of packet Packet1 is Priority 6; therefore, the outbound interface is the specific queue pre-configured on Iout3 to serve low-latency traffic.
  • the network forwarding node P1 has four interfaces, and a board has only one interface.
  • the processing methods for multiple interfaces of a board are similar.
  • Step 3: Packet Packet1 arrives at the outbound interface Iout3 of the network forwarding node P1.
  • On Iout3, all Priority 6 packets are placed in the same queue (i.e., the specific queue); the packet may come from inbound interface Iin1, Iin2, or Iin4, that is, as long as packets carry Priority 6 they are all placed in the specific queue, and they are shaped (by CBS, ATS, etc.) using the overall incoming rate as the outgoing rate.
  • an appropriate period of packet rate statistics can be set, for example, statistics are performed once every 20 ⁇ s.
  • The speed limit is applied at the network entrance, and at this time the proportion of low-latency traffic in the entire network is not high; the traffic is mainly BE traffic.
  • On Iout3, the specific queue where all Priority 6 packets are located is scheduled together with the other queues (that is, the queues of BE traffic; for example, traffic of Priority 0 is BE traffic) according to the relevant mechanism and sent from Iout3. Since low-latency traffic has a higher priority, a message is sent out as soon as there is one in the specific queue.
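  • The sketch below mirrors this application example on Iout3: Priority 6 packets from any inbound interface share one specific queue and are paced using the aggregate in rate measured over the statistics window; the window length, the pacing loop and the send() placeholder are illustrative assumptions.

```python
import collections
import time

specific_queue = collections.deque()   # the single queue shared by all Priority 6 packets
bytes_in_window = 0

def send(pkt):
    pass                                # placeholder for actually transmitting on Iout3

def enqueue(pkt):
    # Packets may arrive from Iin1, Iin2 or Iin4; only the priority matters here.
    global bytes_in_window
    if pkt["priority"] == 6:
        specific_queue.append(pkt)
        bytes_in_window += pkt["length"]

def drain(window_s=20e-6):
    # Once per statistics window, take the measured in rate as the out rate and send
    # the queued packets one by one at that pace (a real device would use CBS/ATS).
    global bytes_in_window
    out_rate_bytes_per_s = bytes_in_window / window_s
    bytes_in_window = 0
    while specific_queue and out_rate_bytes_per_s > 0:
        pkt = specific_queue.popleft()
        send(pkt)
        time.sleep(pkt["length"] / out_rate_bytes_per_s)   # simple inter-packet pacing
```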
  • the low-latency service uses a dedicated adjacent SID.
  • the message transmission process of this application embodiment includes:
  • Step 1: The network edge node (such as the PE1 node) identifies the flow of packet Packet1 according to related technologies, shapes the queue where packet Packet1 is located, and confirms that the correct SID list is marked (these SIDs form a strict traffic engineering (TE) path and point to a specific queue resource on each node);
  • TE Traffic Engineering
  • packet Packet1 belongs to a low-latency flow.
  • Shaping can be done by CBS or ATS.
  • Step 2: Packet Packet1 arrives at the inbound interface Iin1 of the network forwarding node P1; the current SID is SID13, and the forwarding device finds, according to SID13, the specific queue Queue3 on the outbound interface Iout3;
  • the network forwarding node P1 has four interfaces, and a board has only one interface.
  • the processing methods for multiple interfaces of a board are similar.
  • Step 3: Packet Packet1 arrives at the outbound interface Iout3 of the network forwarding node P1, and the specific queue Queue3 on Iout3 is shaped (by CBS, ATS, etc.) using the overall queue ingress rate as the outbound rate.
  • the specific queue Queue3 where all SID13 messages are located is scheduled with other queues according to the existing mechanism and sent from Iout3.
  • these special adjacent SIDs can be notified to the network for network programming to meet the demands of low-latency services.
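  • A simplified rendering of this SR-TE case is sketched below: the current SID is the destination address of the packet, and SID13 is assumed (for illustration only) to be an End.X binding on P1 that points at Queue3 on Iout3; real SRv6 SRH processing involves more fields and checks than shown here.

```python
END_X_BINDINGS = {"SID13": ("Iout3", "Queue3")}   # assumed low-latency End.X SID on P1

def process_srv6(pkt):
    current_sid = pkt["da"]                       # the current SID is the packet's DA
    if current_sid not in END_X_BINDINGS:
        return None                               # not issued by this node: normal forwarding
    out_iface, queue = END_X_BINDINGS[current_sid]
    if pkt["segments_left"] > 0:
        # Advance to the next SID of the strict TE path carried in the SID list.
        pkt["segments_left"] -= 1
        pkt["da"] = pkt["sid_list"][pkt["segments_left"]]
    return out_iface, queue                       # forward via the specific queue Queue3
```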
  • the low-latency service uses a special prefix SID.
  • different SIDs can be used for low-latency traffic and BE traffic, so that low-latency traffic and BE traffic can be distinguished directly according to the SID.
  • the message transmission process of this application embodiment includes:
  • Step 1: At the network edge node (such as the PE1 node), identify the traffic of packet Packet1 according to the related technology, shape the queue where packet Packet1 is located, and confirm that the correct prefix SID is marked (in the SRv6 scenario, the location (Locator) corresponding to the SID carries a related identifier, for example a Flex Algo ID, so that on each node the outbound interface of its forwarding entry points to a specific queue resource; the node generates the related forwarding entry, and the Locator is the address part of the SID in SRv6, which is used to route to the node that publishes the SID);
  • packet Packet1 belongs to a low-latency flow.
  • the difference from Application Example 2 is that an adjacent SID generally needs to be specified per hop and forms a label stack (SR-TE), whereas here only one SID (SR-BE) is required, which is a global label (for example, corresponding to a PE node).
  • Step 2: Packet Packet1 arrives at the inbound interface Iin1 of the network forwarding node P1, which looks up the forwarding table according to SID9; the outbound interface is the specific queue Queue3 on Iout3;
  • the network forwarding node P1 has four interfaces, and a board has only one interface.
  • the processing methods for multiple interfaces of a board are similar.
  • another prefix SID received on the board where the inbound interface Iin1 of the device is located may also correspond to Queue3 as the queue of the outbound interface; such a prefix SID is similar to SID9 and represents low-latency traffic. The inbound interfaces Iin2 and Iin4 of the device may also receive packets carrying SID9 or other such prefix SIDs, for which the queue corresponding to the outbound interface is likewise Queue3; in all these cases, the packets corresponding to the low-latency traffic will be set in the specific queue Queue3.
  • Step 3: Packet Packet1 arrives at the outbound interface Iout3 of the network forwarding node P1.
  • For the specific queue Queue3, the overall queue ingress rate is used as the outbound rate for shaping (by CBS, ATS, etc.) before sending.
  • Queue3 and the other queues are scheduled according to the existing mechanism and sent from Iout3.
  • these special prefix SIDs can be advertised to the network for network programming to meet the demands of low-latency services.
  • each port of each node can be configured with a specific queue, which becomes the interface of these special prefix SIDs, and supports the forwarding of low-latency traffic.
  • the application embodiments construct a mechanism to ensure low-latency transmission in a larger IP layer-3 network, which specifically includes:
  • Network edge nodes perform traffic identification and speed limit
  • Each port of the network forwarding node monitors the speed of the received low-latency traffic, shapes and forwards low-latency packets according to the monitored speed.
  • a specific virtual interface is divided out of each physical outbound interface, corresponding to a specific queue provided for all low-latency traffic, and shaping is performed according to the aggregated low-latency traffic.
  • low-latency traffic can be guaranteed to be sent on demand, and the processing of the network forwarding node minimizes the micro-burst of messages, and maintains a suitable buffer depth for low-latency flows.
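  • The idea of dividing a specific virtual interface out of each physical outbound interface can be sketched as follows; the class layout, the eight ordinary priority queues and the naming are assumptions used only to illustrate the aggregation of all low-latency traffic into one queue per port.

```python
class PhysicalPort:
    """One physical outbound interface with its ordinary (BE) queues plus one
    specific virtual interface backed by the low-latency queue."""

    def __init__(self, name):
        self.name = name
        self.be_queues = [[] for _ in range(8)]   # ordinary priority queues for BE traffic
        self.low_latency_vif = []                 # the specific virtual interface / queue

    def enqueue(self, pkt, is_low_latency):
        if is_low_latency:
            self.low_latency_vif.append(pkt)      # all low-latency flows aggregated here
        else:
            self.be_queues[pkt.get("priority", 0)].append(pkt)
```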
  • the identification mechanism used in the above mechanism to identify low-latency traffic includes:
  • the first method uses exclusive IP/Multiprotocol Label Switching (MPLS) priority to represent low-latency traffic, which is suitable for IP/MPLS networks.
  • MPLS Multiprotocol Label Switching
  • the second method uses a specific SID to represent low-latency traffic and is suitable for SR networks.
  • the specific SID may specifically be the adjacent SID of the SR-TE or the node SID of the SR-BE (that is, the prefix SID).
  • a relatively simple mechanism (1. identification and overall marking of low-latency traffic; 2. dedicated shaping and forwarding of low-latency traffic) is adopted to reduce the formation of micro-bursts in IP forwarding (messages do not form bursts as a result of being forwarded as soon as possible), so as to ensure fast and orderly forwarding of low-latency traffic in the IP network (messages are forwarded in pacing mode as far as possible). In a larger network, when the main delay is optical fiber propagation, as long as the network ensures normal forwarding as far as possible (that is, forwards messages in an orderly manner so that they are not gathered together) and does not form micro-bursts that cause packet loss, it can provide services with lower delay, without introducing overly complicated flow recognition or too much state control.
  • the embodiment of the present application also provides a message transmission device, which is set on a first network node. As shown in FIG. 11, the device includes:
  • the receiving unit 111 is configured to receive the first message
  • the first acquiring unit 112 is configured to acquire a first identifier from a first message; the first identifier represents the delay-sensitive requirement of the first message;
  • the first processing unit 113 is configured to, when the first identifier is obtained, set the first message in a specific queue; the specific queue is at least used to buffer the delay-sensitive messages to be sent; use the incoming rate of messages in the specific queue to determine the outgoing rate of messages in the specific queue; and shape the specific queue based on the determined outgoing rate and send out the first message; wherein,
  • the first network node is a network forwarding node.
  • the first obtaining unit 112 is configured to obtain an SID list from the first message; the SID list includes multiple SIDs corresponding to the traffic engineering path;
  • the first processing unit 113 is configured to set the first packet in the specific queue when it is determined that the current SID indicates a specific queue on the next-hop network node corresponding to the first packet.
  • the first obtaining unit 112 is configured to:
  • the first processing unit 113 is configured to set the first packet in the specific queue when the found outbound interface corresponds to the specific queue.
  • the first processing unit 113 is configured to:
  • the ingress rate is combined with the queue depth to determine the outbound rate of packets in the specific queue.
  • the using the inbound rate in combination with the queue depth to determine the outbound rate of packets in the specific queue includes:
  • the first processing unit 113 determines that the out rate is the first rate; the first rate is less than the in rate, and the difference between the in rate and the first rate is less than the first value;
  • the first processing unit 113 determines that the out rate is a second rate; the second rate is equal to the in rate.
  • the first processing unit 113 determines that the outgoing rate is a third rate; the third rate is a preset rate or the last recorded out rate.
  • the receiving unit 111 can be implemented by a communication interface in a message transmission device; the first acquiring unit 112 and the first processing unit 113 can be implemented by a processor in the message transmission device.
  • the embodiment of the present application also provides a message transmission device, which is set on the second network node. As shown in FIG. 12, the device includes:
  • the second obtaining unit 121 is configured to obtain the first message; and determine that the service corresponding to the first message is a delay-sensitive service;
  • the second processing unit 122 is configured to set a first identifier for the first message, the first identifier representing the delay-sensitive requirement of the first message; and to set the first message with the first identifier in a specific queue for shaping, and then send the first message with the first identifier.
  • the second acquisition unit 121 may be implemented by a processor in a message transmission device in combination with a communication interface; the second processing unit 122 may be implemented by a processor in the message transmission device.
  • when the message transmission device provided in the above embodiment performs message transmission, only the division into the above-mentioned program modules is used as an example for illustration; the above-mentioned processing can be allocated to different program modules according to needs, that is, the internal structure of the device can be divided into different program modules to complete all or part of the processing described above.
  • the message transmission device provided in the foregoing embodiment and the message transmission method embodiment belong to the same concept, and the specific implementation process is detailed in the method embodiment, which will not be repeated here.
  • the embodiment of the present application also provides a network node.
  • the network node 130 includes:
  • the first communication interface 131 can exchange information with other network nodes;
  • the first processor 132 is connected to the first communication interface 131 to implement information interaction with other network nodes, and is configured to execute the method provided by one or more technical solutions on the first network node side when it is configured to run a computer program.
  • the computer program is stored in the first memory 133.
  • the first communication interface 131 is configured to receive the first message
  • the first processor 132 is configured to obtain a first identifier from the first message, the first identifier representing the delay-sensitive requirement of the first message; when the first identifier is obtained, set the first message in a specific queue, the specific queue being used at least to buffer the delay-sensitive messages to be sent; use the incoming rate of the specific queue messages to determine the outgoing rate of the specific queue messages; and, based on the determined outgoing rate, shape the specific queue and send the first message out through the first communication interface; wherein,
  • the network node is a network forwarding node.
  • the first processor 132 is configured to:
  • the SID list contains multiple SIDs corresponding to the traffic engineering path
  • the first message is set in the specific queue.
  • the first processor 132 is configured to:
  • the first packet is set in the specific queue.
  • the first processor 132 is configured to:
  • the ingress rate is combined with the queue depth to determine the outbound rate of packets in the specific queue.
  • the using the inbound rate in combination with the queue depth to determine the outbound rate of packets in the specific queue includes:
  • the first processor 132 determines that the out rate is a first rate; the first rate is less than the in rate, and the difference between the in rate and the first rate is less than the first value;
  • the first processor 132 determines that the out rate is a second rate; the second rate is equal to the in rate.
  • the first processor 132 determines that the out rate is a third rate; the third rate is a preset rate or the last recorded out rate.
  • bus system 134 is configured to implement connection and communication between these components.
  • bus system 134 also includes a power bus, a control bus, and a status signal bus.
  • various buses are marked as the bus system 134 in FIG. 13.
  • the first memory 133 in the embodiment of the present application is configured to store various types of data to support the operation of the network node 130. Examples of such data include: any computer program used to operate on the network node 130.
  • the methods disclosed in the foregoing embodiments of the present application may be applied to the first processor 132 or implemented by the first processor 132.
  • the first processor 132 may be an integrated circuit chip with signal processing capability. In the implementation process, the steps of the foregoing method may be completed by an integrated logic circuit of hardware in the first processor 132 or instructions in the form of software.
  • the aforementioned first processor 132 may be a general-purpose processor, a digital signal processor (DSP, Digital Signal Processor), or other programmable logic devices, discrete gates or transistor logic devices, discrete hardware components, and the like.
  • the first processor 132 may implement or execute the methods, steps, and logical block diagrams disclosed in the embodiments of the present application.
  • the general-purpose processor may be a microprocessor or any conventional processor or the like.
  • the software module may be located in a storage medium, and the storage medium is located in the first memory 133.
  • the first processor 132 reads the information in the first memory 133 and completes the steps of the foregoing method in combination with its hardware.
  • the network node 130 may be implemented by one or more Application Specific Integrated Circuits (ASIC), Digital Signal Processors (DSP), Programmable Logic Devices (PLD), Complex Programmable Logic Devices (CPLD), Field-Programmable Gate Arrays (FPGA), general-purpose processors, controllers, microcontrollers (MCU, Micro Controller Unit), microprocessors (Microprocessor), or other electronic components, and is used to perform the aforementioned method.
  • ASIC Application Specific Integrated Circuit
  • DSP Digital Signal Processor
  • PLD Programmable Logic Device
  • CPLD Complex Programmable Logic Device
  • FPGA Field-Programmable Gate Array
  • MCU Microcontroller
  • the embodiment of the present application also provides a network node.
  • the network node 140 includes:
  • the second communication interface 141 can exchange information with other network nodes
  • the second processor 142 is connected to the second communication interface 141 to implement information interaction with other network nodes, and is configured to execute the method provided by one or more technical solutions on the second network node side when it is configured to run a computer program.
  • the computer program is stored in the second storage 143.
  • the second communication interface 141 is configured to obtain the first message
  • the second processor 142 is configured to:
  • determine that the service corresponding to the first message is a delay-sensitive service; set a first identifier for the first message; and set the first message with the first identifier in a specific queue for shaping, and then send the first message with the first identifier through the second communication interface 141.
  • bus system 144 is configured to implement connection and communication between these components.
  • bus system 144 also includes a power bus, a control bus, and a status signal bus.
  • various buses are marked as the bus system 144 in FIG. 14.
  • the second memory 143 in the embodiment of the present application is configured to store various types of data to support the operation of the network node 140. Examples of such data include: any computer program used to operate on the network node 140.
  • the method disclosed in the foregoing embodiment of the present application may be applied to the second processor 142 or implemented by the second processor 142.
  • the second processor 142 may be an integrated circuit chip with signal processing capability. In the implementation process, the steps of the foregoing method may be completed by an integrated logic circuit of hardware in the second processor 142 or instructions in the form of software.
  • the aforementioned second processor 142 may be a general-purpose processor, a DSP, or other programmable logic devices, discrete gate or transistor logic devices, discrete hardware components, or the like.
  • the second processor 142 may implement or execute the methods, steps, and logical block diagrams disclosed in the embodiments of the present application.
  • the general-purpose processor may be a microprocessor or any conventional processor or the like.
  • the software module may be located in a storage medium, and the storage medium is located in the second memory 143.
  • the second processor 142 reads the information in the second memory 143 and completes the steps of the foregoing method in combination with its hardware.
  • the network node 140 may be implemented by one or more ASICs, DSPs, PLDs, CPLDs, FPGAs, general-purpose processors, controllers, MCUs, Microprocessors, or other electronic components for performing the aforementioned methods.
  • the memory (the first memory 133, the second memory 143) of the embodiment of the present application may be a volatile memory or a non-volatile memory, and may also include both volatile and non-volatile memory.
  • the non-volatile memory can be a Read Only Memory (ROM), a Programmable Read-Only Memory (PROM), an Erasable Programmable Read-Only Memory (EPROM), an Electrically Erasable Programmable Read-Only Memory (EEPROM), a Ferromagnetic Random Access Memory (FRAM), a flash memory, a magnetic surface memory, an optical disc, or a Compact Disc Read-Only Memory (CD-ROM); the magnetic surface memory can be magnetic disk storage or tape storage.
  • the volatile memory may be a Random Access Memory (RAM), which is used as an external cache. By way of illustration and not limitation, many forms of RAM are available, such as Static Random Access Memory (SRAM), Synchronous Static Random Access Memory (SSRAM), Dynamic Random Access Memory (DRAM), Synchronous Dynamic Random Access Memory (SDRAM), Double Data Rate Synchronous Dynamic Random Access Memory (DDRSDRAM), Enhanced Synchronous Dynamic Random Access Memory (ESDRAM), SyncLink Dynamic Random Access Memory (SLDRAM), and Direct Rambus Random Access Memory (DRRAM).
  • the memories described in the embodiments of the present application are intended to include, but are not limited to, these and any other suitable types of memories.
  • the embodiment of the present application also provides a message transmission system.
  • the system includes a plurality of first network nodes and second network nodes.
  • the embodiment of the present application also provides a storage medium, that is, a computer storage medium, specifically a computer-readable storage medium, such as the first memory 133 storing a computer program, where the computer program can be executed by the first processor 132 of the network node 130 to complete the steps described in the foregoing first network node-side method.
  • a second memory 143 storing a computer program is included.
  • the computer program can be executed by the second processor 142 of the network node 140 to complete the steps described in the second network node-side method.
  • the computer-readable storage medium may be a memory such as FRAM, ROM, PROM, EPROM, EEPROM, Flash Memory, magnetic surface memory, optical disk, or CD-ROM.

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Quality & Reliability (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

The present application discloses a packet transmission method and device, a network node, and a storage medium. The method includes: a first network node receiving a first packet; acquiring a first identifier from the first packet, the first identifier being used to indicate that the first packet has a delay-sensitive requirement; upon acquiring the first identifier, placing the first packet in a specific queue, the specific queue being at least used to buffer delay-sensitive packets to be sent; determining an outgoing packet rate of the specific queue by means of an incoming packet rate of the specific queue; and shaping the specific queue on the basis of the determined outgoing rate, and sending the first packet, the first network node being a forwarding node in a network.
PCT/CN2021/079756 2020-03-09 2021-03-09 Dispositif et procédé de transmission de paquet, nœud de réseau, et support de stockage WO2021180073A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202010157089.8A CN113382442B (zh) 2020-03-09 2020-03-09 报文传输方法、装置、网络节点及存储介质
CN202010157089.8 2020-03-09

Publications (1)

Publication Number Publication Date
WO2021180073A1 true WO2021180073A1 (fr) 2021-09-16

Family

ID=77568384

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/079756 WO2021180073A1 (fr) 2020-03-09 2021-03-09 Dispositif et procédé de transmission de paquet, nœud de réseau, et support de stockage

Country Status (2)

Country Link
CN (1) CN113382442B (fr)
WO (1) WO2021180073A1 (fr)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114866453A (zh) * 2022-05-18 2022-08-05 中电信数智科技有限公司 一种基于G-SRv6协议的报文转发方法及系统
TWI783827B (zh) * 2021-12-15 2022-11-11 瑞昱半導體股份有限公司 無線網路裝置
WO2023130743A1 (fr) * 2022-01-10 2023-07-13 中兴通讯股份有限公司 Procédé de calcul de trajet, nœud, support de stockage et produit-programme d'ordinateur
WO2023155802A1 (fr) * 2022-02-15 2023-08-24 大唐移动通信设备有限公司 Procédé de programmation de données, appareil, dispositif, et support de stockage
WO2024016327A1 (fr) * 2022-07-22 2024-01-25 新华三技术有限公司 Transmission de paquets

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115941484A (zh) * 2021-09-30 2023-04-07 中兴通讯股份有限公司 网络架构、网络通信方法、电子设备及存储介质
WO2023065283A1 (fr) * 2021-10-22 2023-04-27 Nokia Shanghai Bell Co., Ltd. Amélioration de ran tenant compte du comportement de cbs dans une tsc
CN116264567A (zh) * 2021-12-14 2023-06-16 中兴通讯股份有限公司 报文调度方法、网络设备及计算机可读存储介质
CN114257559B (zh) * 2021-12-20 2023-08-18 锐捷网络股份有限公司 一种数据报文的转发方法及装置
EP4336795A1 (fr) * 2021-12-29 2024-03-13 New H3C Technologies Co., Ltd. Procédé de transmission de message et dispositif réseau
CN114726805B (zh) * 2022-03-28 2023-11-03 新华三技术有限公司 一种报文处理方法及装置
CN117897936A (zh) * 2022-08-16 2024-04-16 新华三技术有限公司 一种报文转发方法及装置
CN115086238B (zh) * 2022-08-23 2022-11-22 中国人民解放军国防科技大学 一种tsn网络端口输出调度装置

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6760309B1 (en) * 2000-03-28 2004-07-06 3Com Corporation Method of dynamic prioritization of time sensitive packets over a packet based network
CN103716255A (zh) * 2012-09-29 2014-04-09 华为技术有限公司 报文处理的方法与装置
CN110290072A (zh) * 2018-03-19 2019-09-27 华为技术有限公司 流量控制方法、装置、网络设备及存储介质

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109981457B (zh) * 2017-12-27 2021-09-07 华为技术有限公司 一种报文处理的方法、网络节点和系统
CN114095422A (zh) * 2018-03-29 2022-02-25 华为技术有限公司 一种报文发送的方法、网络节点和系统

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6760309B1 (en) * 2000-03-28 2004-07-06 3Com Corporation Method of dynamic prioritization of time sensitive packets over a packet based network
CN103716255A (zh) * 2012-09-29 2014-04-09 华为技术有限公司 报文处理的方法与装置
CN110290072A (zh) * 2018-03-19 2019-09-27 华为技术有限公司 流量控制方法、装置、网络设备及存储介质

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
NOKIA, NOKIA SHANGHAI BELL: "Time Sensitive Networking", 3GPP DRAFT; R3-185958 TSN, 3RD GENERATION PARTNERSHIP PROJECT (3GPP), MOBILE COMPETENCE CENTRE ; 650, ROUTE DES LUCIOLES ; F-06921 SOPHIA-ANTIPOLIS CEDEX ; FRANCE, vol. RAN WG3, no. Chengdu, China; 20181008 - 20181012, 29 September 2018 (2018-09-29), Mobile Competence Centre ; 650, route des Lucioles ; F-06921 Sophia-Antipolis Cedex ; France, XP051529226 *

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TWI783827B (zh) * 2021-12-15 2022-11-11 瑞昱半導體股份有限公司 無線網路裝置
WO2023130743A1 (fr) * 2022-01-10 2023-07-13 中兴通讯股份有限公司 Procédé de calcul de trajet, nœud, support de stockage et produit-programme d'ordinateur
WO2023155802A1 (fr) * 2022-02-15 2023-08-24 大唐移动通信设备有限公司 Procédé de programmation de données, appareil, dispositif, et support de stockage
CN114866453A (zh) * 2022-05-18 2022-08-05 中电信数智科技有限公司 一种基于G-SRv6协议的报文转发方法及系统
CN114866453B (zh) * 2022-05-18 2024-01-19 中电信数智科技有限公司 一种基于G-SRv6协议的报文转发方法及系统
WO2024016327A1 (fr) * 2022-07-22 2024-01-25 新华三技术有限公司 Transmission de paquets

Also Published As

Publication number Publication date
CN113382442B (zh) 2023-01-13
CN113382442A (zh) 2021-09-10

Similar Documents

Publication Publication Date Title
WO2021180073A1 (fr) Dispositif et procédé de transmission de paquet, nœud de réseau, et support de stockage
CN107786465B (zh) 一种用于处理低延迟业务流的方法和装置
US11706149B2 (en) Packet sending method, network node, and system
US11968111B2 (en) Packet scheduling method, scheduler, network device, and network system
EP2684321B1 (fr) Système de blocage de données pour réseaux
CN112994961B (zh) 传输质量检测方法及装置、系统、存储介质
US20210083970A1 (en) Packet Processing Method and Apparatus
WO2018149177A1 (fr) Procédé et appareil de traitement de paquets
US20210006502A1 (en) Flow control method and apparatus
WO2021227947A1 (fr) Procédé et dispositif de commande de réseau
WO2015055058A1 (fr) Procédé de génération d'entrée de transfert, nœud de transfert, et contrôleur
US11038799B2 (en) Per-flow queue management in a deterministic network switch based on deterministically transmitting newest-received packet instead of queued packet
EP3188419B1 (fr) Procédé et circuit de mémorisation et de transmission de paquets et dispositif
CN111092858B (zh) 报文处理方法、装置、器件和系统
Park et al. Worst-case analysis of ethernet AVB in automotive system
EP4336795A1 (fr) Procédé de transmission de message et dispositif réseau
CN115460651A (zh) 数据传输方法及装置、可读存储介质、终端
WO2023185662A1 (fr) Procédé de service déterministe pour réaliser une sensibilisation aux ressources sous-jacentes de réseau, et dispositif électronique et support de stockage lisible par ordinateur
WO2024016327A1 (fr) Transmission de paquets
WO2023241063A1 (fr) Procédé, dispositif et système de traitement de paquet, et support de stockage
WO2024051367A1 (fr) Procédé de transmission de paquets, dispositif de réseau et support d'enregistrement lisible
Cavalieri Estimating KNXnet/IP routing congestion
CN117439956A (zh) 报文处理方法、设备和存储介质
CN117014384A (zh) 一种报文传输方法以及报文转发设备
CN118075811A (zh) 报文处理方法、装置、中继节点、核心网设备和存储介质

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21768061

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 21768061

Country of ref document: EP

Kind code of ref document: A1