WO2024051367A1 - Packet message transmission method, network device, and readable storage medium

Packet message transmission method, network device, and readable storage medium

Info

Publication number
WO2024051367A1
WO2024051367A1 (PCT/CN2023/108898)
Authority
WO
WIPO (PCT)
Prior art keywords
time slot
packet
frame
virtual time
virtual
Prior art date
Application number
PCT/CN2023/108898
Other languages
English (en)
French (fr)
Inventor
刘爱华
Original Assignee
中兴通讯股份有限公司
Priority date
Filing date
Publication date
Application filed by 中兴通讯股份有限公司 (ZTE Corporation)
Publication of WO2024051367A1

Classifications

    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04L - TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 47/00 - Traffic control in data switching networks
    • H04L 47/50 - Queue scheduling
    • H04L 47/52 - Queue scheduling by attributing bandwidth to queues
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04L - TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 67/00 - Network arrangements or protocols for supporting network services or applications
    • H04L 67/01 - Protocols
    • H04L 67/12 - Protocols specially adapted for proprietary or special-purpose networking environments, e.g. medical networks, sensor networks, networks in vehicles or remote metering networks

Definitions

  • This application belongs to the field of communication network technology, and specifically relates to a packet message transmission method, a network device and a readable storage medium.
  • Packet networks use packet messages (commonly called Packets) as the basic forwarding unit; the forwarding path and Quality of Service (QoS) are determined based on the information in the packet header.
  • The QoS system of traditional packet forwarding generally adopts the Differentiated Services (DiffServ) model, which provides only a limited degree of deterministic forwarding capability, especially with respect to forwarding delay and jitter performance.
  • The deterministic capability of packet forwarding depends on the traffic arrival characteristics of the carried service (Traffic Arrival Specification), the forwarding topology and constraints of the packet network (Topology & Constraints), and the real-time state of the packet network, so instantaneous congestion or even packet loss often occurs during forwarding. Therefore, packet forwarding in the related art is considered a best-effort forwarding mode that cannot provide deterministic delay and jitter.
  • Embodiments of the present application provide a packet message transmission method, a network device and a readable storage medium, which can solve the problem that packet message forwarding cannot provide deterministic delay and jitter.
  • In a first aspect, a packet message transmission method is provided, which is executed by a first network device.
  • The method includes: obtaining at least one packet message to be sent; and configuring the at least one packet message to a first packet virtual time slot frame for transmission.
  • In a second aspect, a packet message transmission method is provided, which is executed by a third network device.
  • The method includes: receiving at least one packet message sent by the first network device through a first packet virtual time slot frame; and, based on the arrival time of the at least one packet message, sending the received at least one packet message in the form of a packet service flow.
  • In a third aspect, a network device is provided, including a processor and a memory.
  • The memory stores programs or instructions that can be run on the processor.
  • When the programs or instructions are executed by the processor, the steps of the method described in the first aspect, or the steps of the method described in the second aspect, are implemented.
  • In a fourth aspect, a readable storage medium is provided.
  • Programs or instructions are stored on the readable storage medium.
  • When the programs or instructions are executed by a processor, the steps of the method described in the first aspect, or the steps of the method described in the second aspect, are implemented.
  • In a fifth aspect, a chip is provided, including a processor and a communication interface.
  • The communication interface is coupled to the processor.
  • The processor is used to run programs or instructions to implement the steps of the method described in the first aspect, or the steps of the method described in the second aspect.
  • In a sixth aspect, a computer program/program product is provided; the computer program/program product is stored in a storage medium, and the computer program/program product is executed by at least one processor to implement the method described in the first aspect.
  • Figure 1 shows a schematic flow chart of a packet message transmission method according to an embodiment of the present application
  • Figure 2 shows a schematic structural diagram of a PVSF in an embodiment of the present application
  • Figure 3 shows a schematic location diagram of a time slot overhead in the embodiment of the present application
  • Figure 4 shows a schematic diagram of time slot overhead information in an embodiment of the present application
  • Figure 5 shows a schematic diagram of the location of PVSF indication information in various packet messages in this embodiment of the present application
  • Figure 6 shows a schematic diagram of packet message caching in an embodiment of the present application
  • Figure 7 shows a schematic diagram of scheduling of a time slot queue in the embodiment of the present application.
  • Figure 8 shows a schematic diagram of scheduling of another time slot queue in the embodiment of the present application.
  • Figure 9 shows a schematic diagram of the mapping of multi-services and virtual time slot units in the embodiment of the present application.
  • Figure 10 shows a schematic flow chart of another packet message transmission method in an embodiment of the present application.
  • Figure 11 shows a schematic network diagram of packet message forwarding in an embodiment of the present application
  • Figure 12 shows a schematic structural diagram of a packet virtual time slot frame based on IPv6 packet messages in the embodiment of the present application
  • Figure 13 shows a schematic diagram of the location of the overhead information of a packet virtual time slot frame based on IPv6 packet messages in the embodiment of the present application
  • Figure 14 shows a schematic diagram of a time slot queue based on IPv6 packet messages in the embodiment of the present application
  • Figure 15 shows a schematic diagram of time slot queue scheduling based on IPv6 packet messages in the embodiment of the present application
  • Figure 16 shows a schematic diagram of multi-service scheduling based on IPv6 packet messages in this embodiment of the present application
  • Figure 17 shows a schematic structural diagram of a packet virtual time slot frame based on MPLS packet messages in the embodiment of the present application
  • Figure 18 shows a schematic diagram of the location of the overhead information of a packet virtual time slot frame based on MPLS packet messages in the embodiment of the present application
  • Figure 19 shows a schematic diagram of fixed-length fragmentation based on MPLS packet messages in this embodiment of the present application.
  • Figure 20 shows a schematic diagram of packet forwarding based on MPLS in an embodiment of the present application
  • Figure 21 shows a schematic structural diagram of a packet virtual time slot frame based on SRv6 packet messages in the embodiment of the present application
  • Figure 22 shows a schematic diagram of the location of the overhead information of a packet virtual time slot frame based on SRv6 packet messages in the embodiment of the present application
  • Figure 23 shows a schematic hardware structure diagram of a network-side device provided by an embodiment of the present application.
  • The terms "first", "second", etc. in the description and claims of this application are used to distinguish similar objects and are not used to describe a specific order or sequence. It should be understood that the terms so used are interchangeable under appropriate circumstances, so that the embodiments of the present application can be practiced in sequences other than those illustrated or described herein. Moreover, the objects distinguished by "first" and "second" are usually of one type, and the number of objects is not limited.
  • For example, the first object can be one object or multiple objects.
  • "And/or" in the description and claims indicates at least one of the connected objects, and the character "/" generally indicates that the related objects are in an "or" relationship.
  • LTE: Long Term Evolution
  • LTE-A: Long Term Evolution Advanced (LTE-Advanced)
  • CDMA: Code Division Multiple Access
  • TDMA: Time Division Multiple Access
  • FDMA: Frequency Division Multiple Access
  • OFDMA: Orthogonal Frequency Division Multiple Access
  • SC-FDMA: Single-carrier Frequency Division Multiple Access
  • NR: New Radio
  • Figure 1 shows a schematic flowchart of a packet message transmission method in an embodiment of the present application.
  • the method 100 can be executed by a first network device.
  • The method may be executed by software or hardware installed on the first network device.
  • The method may include the following steps.
  • S110: Obtain at least one packet message to be sent.
  • S120: Configure the at least one packet message to the first packet virtual time slot frame for transmission.
  • Configuring the at least one packet message to be sent in the first packet virtual time slot frame can also be expressed as carrying the at least one packet message in the first packet virtual time slot frame for transmission, or mapping the at least one packet message to the first packet virtual time slot frame for transmission, and so on.
  • The packet virtual time slot frame (Packet-based Virtual Slotted Frame, PVSF) is a frame structure with time division multiplexing characteristics.
  • the Packet-based Virtual Slotted Frame can be sent periodically in a certain manner.
  • In this way, deterministic forwarding capability under the packet architecture is achieved, with optimized deterministic forwarding delay and jitter performance.
  • The first packet virtual time slot frame may include at least one packet virtual container using a time division multiplexing structure; the packet virtual container includes at least one virtual time slot unit; the virtual time slot unit includes at least one minimum time slot unit; and one minimum time slot unit is used to carry one packet information unit of the at least one packet message.
  • The packet virtual time slot frame may be composed of packet messages (Packets), that is, the packet messages are carried in the packet virtual time slot frame, and it has time division multiplexing characteristics: one or more PVSF frames recur at a nominal fixed time interval.
  • Here "nominal" refers to the ideal situation; the actual situation is allowed to deviate from the ideal situation. For example, the nominal fixed time interval of the PVSF frame structure typically has a deviation of ±100 ppm on an Ethernet interface.
  • the PVSF frame includes a Packet-based Virtual Container (PVC) that can be flexibly defined.
  • the PVC has the characteristics of a time division multiplexing structure, that is, a PVSF is formed through a designated time division multiplexing structure to carry services with deterministic requirements.
  • the time-division multiplexing structure of the PVC container can be a hierarchical multiplexing structure that can be cyclically nested.
  • In the simplest scenario, a PVSF includes only one PVC.
  • In this case, the PVC and the PVSF are basically equivalent.
  • PVC can further include virtual slot units (Virtual Slotted Unit, VSU) that make up the PVC.
  • A VSU consists of one or more packet messages belonging to it (that is, a VSU can carry one or more packet messages); a VSU contains one or more minimum time slot units (Minimum Slotted Unit, MSU), and an MSU can be composed of the time slot overhead information, the packet header and the payload.
  • Similarly, in a simple scenario the PVC includes only one VSU, in which case the PVC and the VSU are basically equivalent; in the most extreme scenario, one VSU contains only one packet information unit.
  • In this way, the packet virtual time slot frame structure is formed through the PVSF, the PVC and the VSU.
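  • As a reading aid only, the nested containment just described (a PVSF carries PVCs, a PVC carries VSUs, a VSU carries MSUs, and an MSU carries one packet information unit) can be sketched as follows; the field names are illustrative and are not taken from the claims.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class MSU:                       # Minimum Slotted Unit: one packet information unit
    slot_overhead: bytes         # optional time slot overhead carried with the unit
    header: bytes                # packet header of the carried packet message
    payload: bytes               # packet payload

@dataclass
class VSU:                       # Virtual Slotted Unit: one or more MSUs
    vsu_id: int
    msus: List[MSU] = field(default_factory=list)

@dataclass
class PVC:                       # Packet-based Virtual Container: one or more VSUs
    pvc_id: int
    vsus: List[VSU] = field(default_factory=list)

@dataclass
class PVSF:                      # Packet-based Virtual Slotted Frame: one or more PVCs
    frame_id: int                # e.g. alternating 0/1 in a two-frame multiframe
    pvcs: List[PVC] = field(default_factory=list)
```

  • In the degenerate cases mentioned above, a PVSF holds a single PVC and a PVC holds a single VSU, so the outer layers collapse onto one another.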
  • The at least one packet message obtained by the first network device may be at least one packet message obtained by the first network device sensing a packet service accessing the first network device, may be at least one packet message obtained from a packet service flow sent by a second network device, or may be a packet message sent by another network device through a second packet virtual time slot frame.
  • In the case where the first network device obtains the at least one packet message by sensing the packet service accessing the first network device or from the packet service flow sent by the second network device, the first network device may serve as a packet entry node supporting PVSF and encapsulate the sensed or received packet messages of the packet service flow into a PVSF for sending. For example, the first network device identifies the packet service flow sent by the second network device through the network-side interface (NNI), or identifies the packet messages of the accessed service through the PE service-side interface (UNI).
  • Configuring the at least one packet message to be sent in the first packet virtual time slot frame in S120 includes: statically or dynamically configuring the at least one packet message to be sent in a target virtual time slot unit corresponding to the service-related information of the at least one packet message, where the target virtual time slot unit is a virtual time slot unit of the first packet virtual time slot frame.
  • The service-related information includes at least one of the following: service type, service identifier, service priority, bandwidth required by the service, delay, jitter, and packet loss rate.
  • For example, when the first network device obtains at least one packet message by sensing the packet service accessed by the first network device, then, in S120, the first network device can statically configure the at least one packet message, according to at least one of the service type, service identifier, service priority, required bandwidth, etc., to the corresponding target virtual time slot unit for sending.
  • For example, the mapping relationship between the packet message and the virtual time slot unit can be determined before S120.
  • the first network device may determine the mapping relationship between the packet message and the virtual time slot unit based on preset information.
  • The preset information may include at least one of the following: capability information of the first network device, requirements of the service carried by the first network device, and the network environment.
  • Alternatively, the first network device receives the packet service flow sent by the second network device in S110, obtains the at least one packet message from the packet service flow, and then determines, based on at least one of the delay, jitter and packet loss rate of the arriving packet messages, the target virtual time slot unit to which the at least one packet message is mapped. For example, based on the delay and jitter of the at least one packet message, it is determined how many packet messages need to be buffered before sending, and then the corresponding target virtual time slot unit is determined based on the number of packet messages that need to be buffered.
  • dynamic configuration may also be used to configure the at least one packet message to be sent in the first packet virtual time slot frame.
  • In this case, configuring the at least one packet message to be sent in the first packet virtual time slot frame may include: configuring the at least one packet message to be sent in a target virtual time slot unit of the first packet virtual time slot frame, where the target virtual time slot unit is the nearest available virtual time slot unit to be sent.
  • For example, suppose that the first packet virtual time slot frame is currently sending VSU1, and the subsequent units to be sent include VSU2 and VSU3; the maximum number of bytes allowed by VSU2 is 50M, the maximum number of bytes allowed by VSU3 is 50M, and the maximum total number of bytes is therefore 100M. If the number of bytes to be sent is 30M, the nearest available time slot unit is VSU2; if the number of bytes to be sent is 80M, the nearest available time slot unit is VSU3. Through this possible implementation, the delay of packet messages can be reduced.
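  • One way to read the example above is that the packet bytes greedily fill the upcoming VSUs in sending order and the "nearest available" unit is the last one needed; the sketch below follows that reading (the function name and capacities are illustrative only).

```python
def allocate_vsus(pending_bytes, vsus_to_send):
    """Greedily fill the upcoming VSUs in sending order and return the last VSU
    used; with VSU2/VSU3 at 50M each, 30M ends in VSU2 and 80M ends in VSU3."""
    last = None
    for vsu_id, capacity in vsus_to_send:          # ordered by sending time
        take = min(pending_bytes, capacity)
        if take > 0:
            last = vsu_id
            pending_bytes -= take
        if pending_bytes == 0:
            return last
    return None                                    # not enough room in this frame

print(allocate_vsus(30_000_000, [("VSU2", 50_000_000), ("VSU3", 50_000_000)]))  # VSU2
print(allocate_vsus(80_000_000, [("VSU2", 50_000_000), ("VSU3", 50_000_000)]))  # VSU3
```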
  • The first packet virtual time slot frame may have frame overhead information, which may include a frame structure indicator and a frame identifier.
  • The frame structure indicator may include a frame start indicator, a frame end indicator, and a frame continuation indicator.
  • The frame identifier is used to identify the first packet virtual time slot frame.
  • For example, the PVSF frame identifier (ID) can take the values 0 and 1.
  • configuring the at least one packet message to be sent in the first packet virtual slot frame may include the following steps.
  • Step 1: Carry the time slot overhead of the target virtual time slot unit in a predetermined position of the at least one packet message, where the time slot overhead includes the frame overhead information of the first packet virtual time slot frame, and the frame overhead information includes a frame structure indicator and a frame identifier.
  • Step 2: Configure the at least one packet message carrying the time slot overhead to be sent in the target virtual time slot unit.
  • The predetermined position may be a conveniently identifiable position such as the header of the packet message, the header of the payload area, or the tail of the payload area.
  • The frame overhead information may also include at least one of the following: service-related information of the at least one packet message, maintenance diagnostic information, management information, indication information of the packet virtual container to which the target virtual time slot unit belongs, and indication information of the target virtual time slot unit.
  • maintenance diagnostic information can provide overhead information such as path-associated OAM diagnosis, protection switching, and lossless adjustment, support diagnosis of bit errors, delay and other performance measurements of virtual packet slot frame forwarding, and transfer of APS protocol and lossless adjustment instructions.
  • the PVSF may have a frame start indicator, an optional frame end indicator, and optional frame overhead information.
  • PVC can have container time division multiplexing indication information and optional container overhead information.
  • The PVSF frame start indicator can be carried in an indication message actively inserted at the nominal time, or it can be carried along with the VSU overhead.
  • Overhead information such as the PVC container indication information can be carried in actively inserted overhead messages, or carried in the packet headers of the packet messages belonging to that PVC.
  • The VSU time slot indication information and other overhead information can be carried in actively inserted overhead packets, or carried along with the service packet messages of the VSU.
  • In a possible implementation, configuring the at least one packet message carrying the time slot overhead to be sent in the target virtual time slot unit may include: configuring a first message, formed by adding the time slot overhead to the service data of a packet message in the at least one packet message, to be sent in a minimum time slot unit of the target virtual time slot unit. That is to say, the MSU in which the VSU carries the service may be a message formed by adding the time slot overhead to the original service message.
  • In another possible implementation, configuring the at least one packet message carrying the time slot overhead to be sent in the target virtual time slot unit may include: configuring a second message, formed after converting the service data of the at least one packet message according to a predetermined length, to be sent in a minimum time slot unit of the target virtual time slot unit, where the predetermined length is not more than the size of the data that the minimum time slot unit can carry. That is to say, the MSU in which the VSU carries the service can also be a message formed after converting the original service message (such as fragmentation and reassembly into fixed-length packet information units, filling of idle information, or multi-service encapsulation and aggregation, etc.).
  • The time slot overhead includes, but is not limited to, the start and end signals of the PVSF/PVC/VSU/MSU, related information of the carried service (service identifier, type), related information of the container (container identifier, intra-container time slot identifier), maintenance diagnostic information (OAM) and other auxiliary information, as shown in Figure 4.
  • the above information can be reduced through coding design to reduce unnecessary indication information, or through a TDM-like multiframe overhead pattern to reduce the information carried in each packet message.
  • The time slot overhead (which can also be called PVSF indication information) can, for packet L2 layer Ethernet packet messages, be defined in the Medium Access Control (MAC) frame header, frame end or other extension areas; for Multi-Protocol Label Switching (MPLS) packet messages, it can be defined in one of the MPLS label stack, the extension area between the label stack and the payload, or the tail of the message.
  • For IP packet messages, the time slot overhead can be carried in one of the following locations: the IP header area of the packet message, the Hop-by-Hop extension header (HBH) area, the Destination Options Header (DOH) area, or the extensible Options area.
  • For segment routing packet messages, the time slot overhead can be carried in the routing list or the extension area of the segment routing; for packet L3 layer packet messages, the time slot overhead can be placed in the IPv4/IPv6 frame header, frame tail or other extension headers (including HBH, DOH, SRH, etc.) and extension areas; for packet L4 layer packet messages, the time slot overhead can be placed in the transport layer or application layer encapsulation or other extended information areas. In order to improve encapsulation efficiency, the above information can be implemented through compact encoding or compression encoding.
  • The method may further include: determining, based on the preset information, at least one of the following.
  • the PVSF frame structure is defined based on IPv6 packets and the PVSF frame structure is designed to be composed of 1 PVC. If PVSF multi-path aggregation is not considered, additional PVSF frame ID identification is not needed. In this case, the PVC identification can be combined with the PVSF identification.
  • Each PVC defines 10 VSUs. Each VSU supports the transmission of a packet message with a maximum length of 1000 Byte. The nominal VSU time slot size is 10us.
  • the new service ID range determined by the bearer is 16 bits, which can be flexibly allocated according to the bearer service conditions.
  • the mapping relationship between the packet message and the virtual time slot unit of the first group virtual time slot frame includes the corresponding relationship between the service-related information of the packet message and the virtual time slot unit. For example, regarding the correspondence between the service identifier of the packet message and the virtual time slot unit, packet service 1 corresponds to VSU1, that is, the packet message of packet service 1 is fixedly mapped to VSU1.
  • The sending method of the first packet virtual time slot frame includes: the number n of packet virtual time slot frames sent continuously, and the interval between two groups of n continuously sent packet virtual time slot frames. For example, every two PVSF frames form a multiframe cycle, and the interval between every two PVSF frames is 3 ms.
  • In the case where the first network device obtains the at least one packet message by sensing the packet service accessed by the first network device or by receiving the packet service flow sent by the second network device, configuring the at least one packet message to be sent in the first packet virtual time slot frame may include the following steps.
  • Step 1: Buffer the at least one packet message into a time slot queue, where the structure of the time slot queue corresponds to the structure of the first packet virtual time slot frame.
  • Step 2: When the number of packet messages buffered in the time slot queue reaches a first threshold, obtain the packet messages from the time slot queue and configure the obtained packet messages to be sent in the first packet virtual time slot frame.
  • That is, the first network device may buffer the obtained packet messages into the time slot queue until the number of buffered packet messages in the time slot queue reaches the first threshold, and then take the packet messages out of the time slot queue and configure them to be sent in the first packet virtual time slot frame.
  • The first threshold is determined based on a first delay and a first jitter, where the first delay is the maximum value of the delays of the at least one packet message, and the first jitter is the maximum value of the jitter of the at least one packet message.
  • For example, when the first network device obtains packet messages by receiving the packet service flow sent by the second network device, it can determine the number of packet messages that need to be buffered based on the maximum delay and maximum jitter of the multiple arriving packet messages.
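  • Purely as an illustration of how such a buffering threshold might be derived (the patent does not fix a formula; the nominal packet arrival rate used below is an assumption, not part of the embodiment):

```python
import math

def first_threshold(max_delay_s: float, max_jitter_s: float, packet_rate_pps: float) -> int:
    """Illustrative only: buffer enough packets to ride out the worst observed
    delay plus jitter at an assumed nominal packet arrival rate."""
    return max(1, math.ceil((max_delay_s + max_jitter_s) * packet_rate_pps))

# e.g. 100 us max delay + 50 us max jitter at 1,000,000 packets/s -> buffer 150 packets
print(first_threshold(100e-6, 50e-6, 1_000_000))
```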
  • In another scenario, the first network device serves as a forwarding node of packet messages and schedules the packet messages received through a second packet virtual time slot frame to be sent in the first packet virtual time slot frame.
  • In this scenario, obtaining at least one packet message to be sent includes the following steps.
  • Step 1: Receive the frame start indicator of the second packet virtual time slot frame, or the first packet message representing the frame header position of the second packet virtual time slot frame.
  • That is, the frame header position of the second packet virtual time slot frame can be indicated by a frame start indicator carried in a packet message, or by the first packet message representing the frame header position of the second packet virtual time slot frame.
  • Step 2: Starting from the frame start indicator or the first packet message, buffer the received packet messages into an idle time slot queue until a preset condition is met, where the structure of the time slot queue corresponds to the structure of the second packet virtual time slot frame, and the preset condition includes one of the following: a correct end-of-frame indicator is received; the time slot queue is full; a second packet message representing the frame header position of an expected new frame is received; the frame start indicator of an expected new frame is received.
  • Otherwise, the received packet message is discarded.
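  • A rough sketch of the enqueue rule just described, buffering from the frame start indication until one of the listed stop conditions is met; the packet attributes and queue methods used here (`is_frame_start`, `push`, `full`) are hypothetical names for illustration only.

```python
def enqueue_pvsf(packets, slot_queue):
    """Buffer the packets of one incoming PVSF into an idle time slot queue,
    starting at the frame start indication and stopping on the conditions above;
    packets seen before any frame start are discarded."""
    started = False
    for pkt in packets:
        if not started:
            if not pkt.is_frame_start:
                continue                      # discard: outside any recognised frame
            started = True
        elif pkt.is_frame_start:              # an expected new frame begins
            return "new_frame_started"
        slot_queue.push(pkt)
        if pkt.is_frame_end:
            return "end_of_frame"
        if slot_queue.full():
            return "queue_full"
    return "incomplete"
```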
  • In a possible implementation, the method may further include: estimating, according to the first arrival time of the frame header position of the second packet virtual time slot frame, the second arrival time of the frame header position of the next packet virtual time slot frame after the second packet virtual time slot frame, and detecting, within a preset range around the second arrival time, the packet message at the frame header position of the next packet virtual time slot frame.
  • The first arrival time may be the arrival time of the frame start indicator or of the first packet message.
  • In this way, the first network device detects the packet message at the frame header position of the next packet virtual time slot frame only within the preset range around the second arrival time, which avoids the first network device constantly checking whether the next packet virtual time slot frame has arrived and saves energy consumption of the first network device.
  • The PVSF is designed with a nominal arrival time, but considering the inherent frequency deviation and packet transmission and reception jitter in the packet network, a virtual frame locking mechanism is applied to the arrival time of the beginning of the PVSF frame to maintain the PVSF slotted structure. Therefore, in the above possible implementation, the next packet virtual time slot frame is detected within a preset range around the second arrival time.
  • In a possible implementation, the method may also include: when the arrival of the packet message at the frame header position of the next packet virtual time slot frame is not detected within the preset range around the second arrival time, determining that the next packet virtual time slot frame is abnormal.
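  • A minimal sketch of the arrival-window check implied above, assuming the nominal frame period and the tolerance of the preset range are known (all values and names are placeholders):

```python
def next_frame_window(first_arrival_s: float, nominal_period_s: float, tolerance_s: float):
    """Estimate when the next PVSF frame header should arrive and only watch
    for it inside [expected - tolerance, expected + tolerance]."""
    expected = first_arrival_s + nominal_period_s
    return expected - tolerance_s, expected + tolerance_s

def frame_header_in_window(header_arrivals, window) -> bool:
    lo, hi = window
    # False would be treated as the next frame being missing or abnormal.
    return any(lo <= t <= hi for t in header_arrivals)
```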
  • The time slot queue may include a packet virtual time slot frame queue; the time slot queue may further include at least one packet virtual container queue subordinate to the packet virtual time slot frame queue and at least one virtual time slot unit queue subordinate to the packet virtual container queue, where packet messages carried by different packet virtual containers are buffered into the corresponding packet virtual container queues, and packet messages carried by different virtual time slot units are buffered into the corresponding virtual time slot unit queues.
  • the size of the time slot queue may vary from the minimum queue depth to the maximum PVSF frame length (including necessary tolerable isolation gaps).
  • Buffering the received packet messages into an idle time slot queue may include: buffering, according to the frame overhead information carried in the received packet messages, the received packet messages to the positions corresponding to the frame overhead information in the time slot queue.
  • That is, the first network device can determine, according to the time slot overhead information of the at least one packet message, the packet virtual container queue and the virtual time slot unit queue into which each packet message is buffered.
  • For a node supporting PVSF, the time slot queue receives (enqueues) the packet messages belonging to a PVSF: starting from the frame start signal, the packet messages contained in the indicated PVSF are received into an idle time slot queue until the end-of-frame signal (if any) is received, the queue is full, or a new frame start signal arrives, as shown in Figure 6.
  • The slot queue (SQ) structure can use independent queues or a logical queue structure (shared physical queues) for the PVSF, PVC and VSU. If there are independent PVC queues subordinate to the PVSF queue, the packet messages of different PVCs enter the corresponding PVC queues according to their container IDs; similarly, if there are VSU queues subordinate to the PVC queues, the packet messages of different VSUs enter the corresponding VSU queues according to their time slot IDs. If a logical queue structure is adopted, the PVCs and VSUs share the PVSF queue resources and are distinguished by the identification information carried in the packet messages; different PVSFs, PVCs and VSUs determine their relative positions according to the frame structure identification information, and the packet messages within the same VSU can be queued in order according to a set policy.
  • Configuring the at least one packet message to be sent in the first packet virtual time slot frame may include: when the number of buffered packet messages in the time slot queue reaches a second threshold, scheduling the packet messages in the time slot queue to be sent in the first packet virtual time slot frame.
  • The second threshold is less than or equal to a preset value, where the preset value is the frame length of one first packet virtual time slot frame or the maximum total number of bytes allowed in one first packet virtual time slot frame.
  • The first network device can set an appropriate second threshold according to the performance of the packet network, for example in combination with the jitter tolerance performance. The second threshold can be set according to the time at which the packet virtual time slot frame is received, or according to an equivalent indicator (such as the corresponding number of buffered frames or bytes). The second threshold can usually be set according to the system forwarding overhead so as to minimize the forwarding delay caused by queue waiting.
  • The phase difference between the first frame starting position of the first packet virtual time slot frame and the second frame starting position of the second packet virtual time slot frame is not greater than a preset threshold.
  • the outbound nominal frame start position of the PVSF outbound node can be loosely phase-aligned with the inbound, and the phase difference between the outbound and inbound can be matched by adding buffers.
  • the forwarding delay and jitter have deterministic upper and lower bounds.
  • The relative nominal frequencies of the incoming and outgoing PVSF frame structures meet the preset requirements (for example, within the Ethernet range of ±100 ppm).
  • The relative nominal phase (the position of the PVSF frame header) does not strictly require synchronization.
  • Even though the nominal phases of the incoming and outgoing directions are not strictly synchronized, the deviation will not exceed 1/2 of the transmission time T_f of the packet virtual time slot frame.
  • In a possible implementation, scheduling the packet messages in the time slot queue to be sent in the first packet virtual time slot frame may include: scheduling, according to the first packet virtual container and/or the first virtual time slot unit to which a packet message in the time slot queue belongs, the packet message to a second packet virtual container and/or a second virtual time slot unit of the first packet virtual time slot frame, where the first packet virtual container and/or the first virtual time slot unit have an exchange relationship with the second packet virtual container and/or the second virtual time slot unit.
  • That is, scheduling between the second packet virtual time slot frame and the first packet virtual time slot frame can be performed according to a statically configured time slot exchange table and the PVSF/PVC/VSU frame structure information; for example, slotted scheduling from the inbound PVC/VSU to the outbound PVC/VSU is implemented according to the PVC/VSU exchange relationship between the inbound PVSF and the outbound PVSF configured on the control plane.
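  • The statically configured time slot exchange table can be pictured as a plain mapping from inbound (PVC, VSU) to outbound (PVC, VSU); the sketch below only illustrates the idea, and the entries, packet attributes and container names are hypothetical rather than the configuration format used on the control plane.

```python
# Control-plane configured exchange table: inbound (pvc_id, vsu_id) -> outbound (pvc_id, vsu_id).
SLOT_EXCHANGE_TABLE = {
    (0, 0): (0, 2),
    (0, 1): (1, 0),
}

def schedule_to_outbound(pkt, outbound_vsus):
    """Look up the inbound PVC/VSU carried in the packet's slot overhead and
    append the packet to the corresponding outbound VSU of the first PVSF."""
    out_key = SLOT_EXCHANGE_TABLE[(pkt.pvc_id, pkt.vsu_id)]
    outbound_vsus[out_key].append(pkt)        # outbound_vsus: dict keyed by (pvc_id, vsu_id)
```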
  • In another possible implementation, scheduling the packet messages in the time slot queue to be sent in the first packet virtual time slot frame includes: scheduling different target packet messages to the first packet virtual time slot frame for transmission according to their relative positions in the second packet virtual time slot frame, where the relative positions of the different target packet messages in the first packet virtual time slot frame are the same as their relative positions in the second packet virtual time slot frame, and the different target packet messages have the same destination address. That is to say, in this possible implementation, the relative order of the packet messages is determined based on the PVSF/PVC/VSU frame structure information, while the scheduling still performs packet forwarding based on the packet headers.
  • Existing mechanisms, including MAC, MPLS, IP, SR, etc., can be used for packet forwarding; because the packet messages have already been sorted according to the PVSF frame structure information when entering the queue, they can also be sent according to the expected PVSF frame structure during outbound scheduling, as shown in Figure 7.
  • In a possible implementation, the second packet virtual time slot frame and the first packet virtual time slot frame are aligned to the same synchronization reference, or the second packet virtual time slot frame maintains a relatively fixed deviation from the first packet virtual time slot frame.
  • That is, slotted scheduling does not strictly require synchronization of the nominal phases of the incoming PVSF and the outgoing PVSF, but maintains a relatively stable mapping relationship between the incoming frame structure and the outgoing frame structure. The mapping relationship can be maintained through clock synchronization, alignment to the same synchronization reference, or a virtual clock with a relatively fixed deviation; the deviation of the mapping relationship is tolerated by the above-mentioned second threshold. If the phases are synchronized, zero queuing delay scheduling can be achieved.
  • In a possible implementation, the incoming and outgoing directions maintain the same PVSF frame structure and the same number of packet messages or total bytes contained in the frame structure; otherwise, corresponding queue and scheduling rate adaptation processing can be performed.
  • Multiple PVSFs may be aggregated at the first network device. Therefore, in this possible implementation, scheduling the packet messages in the time slot queues to be sent in the first packet virtual time slot frame may include: when the number of buffered packet messages in the multiple time slot queues reaches the second threshold, scheduling the packet messages in the multiple time slot queues to the first packet virtual time slot frame using an interleaved rotation scheduling method, where the interleaved rotation scheduling method sequentially schedules the first cache unit buffered in each time slot queue, then sequentially schedules the next cache unit buffered in each time slot queue, and so on, until all cache units buffered in each time slot queue have been scheduled; one cache unit buffers the packet messages carried by one virtual time slot unit.
  • the PVSF/PVC/VSU interleaving mechanism based on the PVSF frame structure can be used to achieve smooth, low-latency and low-jitter aggregation, as shown in Figure 8.
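  • The interleaved rotation over multiple inbound slot queues can be sketched as a breadth-first round-robin, one cache unit (one VSU's packets) at a time; this is an illustration of the rotation granularity described above, not a prescribed implementation.

```python
from itertools import zip_longest

def interleaved_schedule(slot_queues):
    """Yield cache units breadth-first: the first unit of every queue, then the
    second unit of every queue, and so on, to smooth multi-PVSF aggregation."""
    for round_units in zip_longest(*slot_queues):
        for unit in round_units:
            if unit is not None:
                yield unit

# Example: two inbound PVSF queues aggregated onto one outbound PVSF:
# list(interleaved_schedule([["a0", "a1"], ["b0", "b1"]])) -> ["a0", "b0", "a1", "b1"]
```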
  • In addition, the forwarding delay and jitter performance of different service packets within a VSU can be further improved through Packet Generalized Processor Sharing (PGPS), as shown in Figure 9.
  • The PVSF/PVC/VSU slotted structure can also support simultaneous traffic aggregation.
  • Packet messages enter and leave the time slot queue as PVSF messages. Since the object operated on by the slot queue (SQ) is the PVSF packet message, the outbound sending rate being greater than the inbound receiving rate (multiple PVSFs received and then aggregated for transmission) or equal to it (one PVSF received inbound and one PVSF transmitted outbound) ensures that the time slot queue is not congested. The forwarding of a PVSF from inbound to outbound is completed by the slot scheduling function.
  • In an optional implementation, the size of each virtual time slot unit of the first packet virtual time slot frame may not be exactly the same as that of the second packet virtual time slot frame, while the first packet virtual time slot frame and the second packet virtual time slot frame have the same frame structure, and the number of packet messages or total number of bytes allowed in the first packet virtual time slot frame is the same as that allowed in the second packet virtual time slot frame.
  • For example, if the egress rate of the time slot queue increases compared with the ingress rate, the virtual time slot unit size of the first packet virtual time slot frame can be reduced accordingly; if the egress rate of the time slot queue decreases compared with the ingress rate, the virtual time slot unit size of the first packet virtual time slot frame can be correspondingly enlarged, while the frame structure of the first packet virtual time slot frame remains the same as that of the second packet virtual time slot frame. Through this optional implementation, the forwarding delay of packet messages can be reduced.
  • In this way, deterministic forwarding capability under the packet architecture is achieved, with optimized deterministic forwarding delay and jitter performance.
  • An embodiment of the present application also provides another packet message transmission method, which is executed by a third network device.
  • Figure 10 shows a schematic flowchart of another packet message transmission method provided by an embodiment of the present application. As shown in Figure 10, the method 1000 mainly includes the following steps.
  • S1010 Receive at least one packet message sent by the first network device through the first packet virtual time slot frame.
  • The first network device may configure at least one packet message to be sent on the first packet virtual time slot frame in the manner described in the above method 100. For details, please refer to the relevant description of the above method 100, which will not be repeated here.
  • S1020 Based on the arrival time of the at least one packet message, send the received at least one packet message in the form of a packet service flow.
  • When sending the at least one packet message in the form of a packet service flow, the third network device can also send the packet messages in groups. For example, the third network device can obtain the destination address of each packet message by parsing its message header, and send packet messages with the same destination address through the same packet service flow.
  • the third network device may also cache arriving packet messages until the cached packet messages reach a predetermined threshold, and then send the packet messages in the form of a packet service flow.
  • PVSF border node 1 and PVSF border node 2 can be implemented by the above-mentioned first network device.
  • the PVSF border node performs boundary encapsulation and conversion on the packet message with PVSF and then sends it.
  • The packet message then reaches a forwarding node in the ordinary packet network (for example, the above-mentioned third network device), and the third network device forwards it as an ordinary packet message, that is, forwards the packet service flow.
  • PVSF border node 2 buffers the arriving packet messages, rebuilds the PVSF frame queue, and forwards the packet messages through PVSF frames.
  • For an unrecognized extension header, IPv6 forwards the packet based only on the IPv6 destination address (DA), so traversing the ordinary packet network at most introduces a larger delay or jitter into the defined PVSF frame structure. For example, if the maximum delay or jitter of the ordinary packet network is 200 us, the buffering capability of the PVSF border node for "rebuilding the PVSF frame queue" needs to be increased to 200 us. Factors such as possible congestion and packet loss when crossing the ordinary packet network are not considered in this embodiment and can be addressed through reliability enhancement technologies.
  • a certain "border encapsulation conversion" function needs to be performed on the sending side of the border node crossing the ordinary packet network, similar to other border nodes of packet equipment.
  • The encapsulation conversion function of the PVSF border node can simply be handled by encapsulating another layer outside the PVSF-extended message header in order to traverse the ordinary packet network; after the traversal is completed, the added outer-layer header is stripped off and the subsequent PVSF forwarding function continues.
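  • A minimal sketch of that border encapsulation/decapsulation idea (the outer header content and its length are hypothetical; the patent only requires that an extra outer layer be added and later stripped):

```python
def border_encapsulate(pvsf_packet: bytes, outer_header: bytes) -> bytes:
    """PVSF border node 1: wrap the PVSF-extended packet so that intermediate
    ordinary nodes forward it on the outer header alone."""
    return outer_header + pvsf_packet

def border_decapsulate(packet: bytes, outer_header_len: int) -> bytes:
    """PVSF border node 2: strip the outer header and resume PVSF forwarding."""
    return packet[outer_header_len:]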
  • Step 1: The PVSF entry node (i.e., the sensing node of the packet service or the PVSF border node), for example a PE node, first defines the PVSF frame structure based on IPv6 packet messages, with every 2 PVSF frames forming a multiframe cycle; that is, the PVSF frame ID values under the IP sub-interface (which can be identified by IP address) are 0 and 1.
  • the designed PVSF frame structure consists of 1 PVC. If PVSF multi-channel aggregation is not considered, there is no need for additional PVSF frame ID identification. In this case, the PVC identification can be combined with the PVSF identification.
  • Each PVC defines 10 VSUs, and each VSU supports the transmission of a packet message with a maximum length of 1000 bytes; the nominal VSU time slot size is 10 us.
  • The service ID range determined for the carried services is 16 bits, which can be flexibly allocated according to the carried service conditions.
  • the frame structure is shown in Figure 12.
  • Step 2 The PVSF entry node designs the PVSF overhead information (Overhead) based on the IPv6 HBH extension header, including the PVSF start signal indicator 2 bits (00 indicates the start of the PVSF frame, 10 and 01 indicate the frame continuation, and 11 indicates the end of the frame); PVSF/PVC ID is represented by 1 bit, 0 and 1 alternately cycle; VSU ID is represented by 4 bits, the available range is 0-9, and the rest are invalid values. Combined with PVSF/PVC ID, there can be 20 distinguishable VSUs; 16 bits are designed to represent the service ID. Different services can be flexibly accessed to different VSU time slots; finally, there can be a CRC8 check for PVSF overhead information, and its overhead design is shown in Figure 13.
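  • As an illustration of the Figure-13 style overhead (2-bit start indicator, 1-bit PVSF/PVC ID, 4-bit VSU ID, 16-bit service ID, CRC-8), the sketch below packs those fields into bytes. The bit ordering, the single padding bit, and the CRC-8 polynomial (0x07) are assumptions for the example; the embodiment does not specify them.

```python
def crc8(data: bytes, poly: int = 0x07) -> int:
    """CRC-8 over the overhead bytes; polynomial 0x07 is an assumption."""
    crc = 0
    for b in data:
        crc ^= b
        for _ in range(8):
            crc = ((crc << 1) ^ poly) & 0xFF if crc & 0x80 else (crc << 1) & 0xFF
    return crc

def encode_hbh_overhead(start_ind: int, pvsf_pvc_id: int, vsu_id: int, service_id: int) -> bytes:
    """Pack: 2-bit indicator (00 start, 01/10 continue, 11 end), 1-bit PVSF/PVC ID,
    4-bit VSU ID (0-9), 16-bit service ID, followed by a CRC-8 byte."""
    assert 0 <= vsu_id <= 9 and 0 <= service_id < (1 << 16)
    word = ((start_ind & 0b11) << 21) | ((pvsf_pvc_id & 0b1) << 20) \
        | ((vsu_id & 0b1111) << 16) | service_id
    body = word.to_bytes(3, "big")               # 23 field bits + 1 padding bit
    return body + bytes([crc8(body)])

# Example: frame-start packet, PVSF/PVC ID 0, VSU 3, service 0x0001.
overhead = encode_hbh_overhead(0b00, 0, 3, 0x0001)
```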
  • Step 3: The accessed service is mapped to the VSU time slots, for example, one VSU opportunity in each PVSF is guaranteed to be provided to service 1; alternatively, when the traffic of service 1 arrives, the most recently available VSU is dynamically allocated to carry the service. For simplicity, this embodiment assumes that a static fixed mapping scheme is used. If the PVSF is idle, the PE node can actively insert a positioning message carrying the PVSF frame start information to maintain a relatively stable phase relationship for the PVSF. Similarly, when the PE node has multi-service access, it provides multi-service access according to the PVSF time slot allocation and mapping algorithm.
  • Step 4: The PVSF forwarding node (that is, a forwarding node that supports the PVSF function) designs the PVSF time slot queue and time slot scheduling functions based on the current packet queue and scheduling mechanisms.
  • For example, two PVSF time slot queues can be designed for each IP sub-interface, and the queues can provide indexes of different VSUs to support VSU-based packet scheduling.
  • When a node that supports the PVSF function receives the first packet carrying the PVSF frame start indication, it puts the packet and the subsequent packets belonging to the same PVSF into an idle time slot queue, as shown in Figure 14.
  • Step 5: The node that supports the PVSF function performs slot scheduling on the messages in the PVSF slot queue.
  • The scheduling can use a statically configured time slot exchange table (that is, the incoming slot and the outgoing slot are statically exchanged according to the VSU ID); route forwarding scheduling based on the IPv6 destination address may also be used. For simplicity, this embodiment takes statically configured time slot exchange table scheduling as an example.
  • The result of the scheduling is that the PVSF frame structure is still maintained in the outbound direction. Due to the static configuration of the time slot exchange table, the outbound direction also needs to buffer at most one PVSF frame length, as shown in Figure 15.
  • Step 6: For possible multi-channel PVSF frame aggregation scenarios, the egress provides multi-channel PVSF aggregation bandwidth based on the supported aggregation capabilities. In order to distinguish the different aggregated PVSF frame signals, a PVSF ID needs to be added in the multi-channel PVSF frame aggregation scenario. By scheduling the arriving VSU packets in the inbound multi-channel PVSF queues in turn according to the outbound PVSF frame start signal, the effect of interleaved transmission of VSU packets during the aggregation of multi-channel PVSF frames is achieved, as shown in Figure 16.
  • Step 1: The PVSF entry node (that is, the sensing node of the packet service or the PVSF border node) defines the PVSF frame structure based on MPLS packet messages. Every 4 PVSF frames form a multiframe cycle, that is, the PVSF frame ID values under the MPLS sub-interface are 0-3.
  • The designed PVSF frame structure is composed of 4 PVCs, and the PVC numbers in each PVSF frame are 0-3.
  • Each PVC defines 4 VSUs, and the VSU numbers are 0-3.
  • Each VSU supports the transmission of a packet message with a fixed 128-byte payload, and the total PVSF packet bandwidth is allocated as a fixed 10 Gbps.
  • Deterministic services can request the resources of 1 or more VSU units.
  • The fixed size of the packet unit in the VSU shown in this embodiment reflects emulated circuit-switching characteristics.
  • The service ID range determined for the carried services is 16 bits, which can be flexibly allocated according to the carried service conditions.
  • the frame structure is shown in Figure 17.
  • Step 2: The PVSF ingress node designs the PVSF overhead information based on the MPLS label (Label), including a 2-bit PVSF start signal indicator (00 indicates the start of the PVSF frame, 10 and 01 indicate frame continuation, and 11 indicates the end of the frame); the PVSF ID is represented by 2 bits, with values 0-3; the PVC ID is represented by 2 bits, with values 0-3; and the VSU ID is represented by 2 bits, with an available range of 0-3. Combined with the PVSF/PVC ID, there can be 64 distinguishable VSU units. For compatibility with the MPLS label S indication information and the TTL field, 15 bits are designed to represent the service ID, and different services can be flexibly mapped to different VSU time slots. The overhead design is shown in Figure 18.
  • Step 3: The accessed service is mapped to VSU units, that is, one VSU opportunity is guaranteed to be provided to service 1 in each PVSF; alternatively, when the traffic of service 1 arrives, the most recently available VSU is dynamically allocated to carry the service. For simplicity, this embodiment assumes a static fixed mapping scheme. If the PVSF is idle, the PE node can actively insert a positioning message carrying the PVSF frame start information to maintain a relatively stable phase relationship for the PVSF. Similarly, when the PE node has multi-service access, it provides multi-service access according to the PVSF time slot allocation and mapping algorithm.
  • Step 4: Because a fixed payload size (that is, a fixed VSU packet unit) is designed, the PE node adds fragmentation and reassembly processing for mapping variable-length MAC/IP services into fixed-length VSU packet units. The fragmentation and reassembly scheme can reuse the existing fixed-length cell fragmentation and reassembly process; the difference is that the PVSF frame overhead information is still maintained on each fragment after fixed-length cell fragmentation, as shown in Figure 19.
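  • A sketch of mapping a variable-length MAC/IP packet into fixed 128-byte VSU packet units while keeping the PVSF overhead on every fragment; zero padding of the final cell and the helper names are assumptions made for the illustration.

```python
CELL_PAYLOAD = 128   # fixed VSU packet-unit payload in this embodiment

def fragment_to_cells(service_packet: bytes, pvsf_overhead: bytes):
    """Split a variable-length service packet into fixed-length cells; each cell
    keeps the PVSF frame overhead so the slotted forwarding information survives."""
    cells = []
    for off in range(0, len(service_packet), CELL_PAYLOAD):
        chunk = service_packet[off:off + CELL_PAYLOAD]
        chunk = chunk.ljust(CELL_PAYLOAD, b"\x00")     # pad the final cell (assumption)
        cells.append(pvsf_overhead + chunk)
    return cells

def reassemble(cells, original_len: int, overhead_len: int) -> bytes:
    """Strip the per-cell overhead, concatenate payloads, and drop the padding."""
    payload = b"".join(c[overhead_len:] for c in cells)
    return payload[:original_len]
```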
  • Step 5: The PVSF forwarding node (that is, a forwarding node that supports the PVSF function) designs the PVSF time slot queue and time slot scheduling functions based on the current packet queue and scheduling mechanisms.
  • For example, two PVSF time slot queues can be designed for each MPLS sub-interface, and the queues can provide indexes of different VSUs to support VSU-based packet scheduling.
  • When a node that supports the PVSF function receives the first packet carrying the PVSF frame start indication, it puts the packet and the subsequent packets belonging to the same PVSF into an idle time slot queue.
  • Step 6: The node that supports the PVSF function performs slot scheduling on the messages in the PVSF slot queue.
  • The scheduling can use a statically configured time slot exchange table (that is, the incoming slot and the outgoing slot are statically exchanged according to the VSU ID); route forwarding scheduling based on the IPv6 destination address may also be used. For simplicity, this embodiment takes statically configured MPLS label switching scheduling as an example.
  • The result of the scheduling is that the PVSF frame structure is still maintained in the outbound direction. Due to the static configuration of the time slot exchange table, the outbound direction also needs to buffer at most one PVSF frame length, as shown in Figure 20.
  • Step 1: The PVSF entry node (that is, the sensing node of the packet service or the PVSF border node) defines the PVSF frame structure based on SRv6 packet messages. Taking the simplest scenario as an example, the PVSF cycles as a single frame, that is, the PVSF frame ID value under the SRv6 sub-interface is fixed to 0 or can be omitted and not explicitly carried.
  • The designed PVSF frame structure consists of 1 PVC. If PVSF multi-channel aggregation is not considered, no additional PVSF frame ID is needed; in this case, the PVC identifier is also fixed at 0 or can be omitted and not explicitly carried. Each PVC defines 2 VSUs.
  • Each VSU supports the transmission of a packet message with a maximum length of 2000 Byte.
  • the nominal VSU time slot size is 20us.
  • The service ID range determined for the carried services is 16 bits, which can be flexibly allocated according to the carried service conditions.
  • the frame structure is shown in Figure 21.
  • Step 2: The PVSF entry node designs the PVSF overhead information based on the SRv6 SID, including a 2-bit PVSF start signal indicator (00 indicates the start of the PVSF frame, 10 and 01 indicate frame continuation, and 11 indicates the end of the frame); the PVSF/PVC IDs are omitted in this embodiment; the VSU ID is represented by 1 bit with an available range of 0-1, so this embodiment has only two distinguishable VSUs, reflecting the characteristics of large bandwidth. The overhead design is shown in Figure 22.
  • Step 5: In order to take advantage of SRv6's ability to specify the forwarding path through the SRH at the source node, the PVSF forwarding node can carry the end-to-end slotted scheduling information in the SID overhead information of the SRH, as shown in the overhead format in Figure 22.
  • For example, a new SRv6 programmable Function, called the PVSF Function, can be defined by extension; the role of this Function is to perform slotted scheduling and forwarding.
  • The control plane forms a PVSF slotted scheduling and forwarding table locally on the node, that is, the scheduling and forwarding relationship between the incoming VSU slot and the outgoing VSU slot. According to this table, slot forwarding to the outbound PVSF VSU is completed; at the same time, the SID of the next hop that needs to be processed by an SRv6 PVSF node is obtained from the SRH and updated as the new DA, completing the SRv6 programmable forwarding processing.
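  • A rough sketch of that extended PVSF Function behavior: look up the local slotted forwarding table, place the packet in the outbound VSU, then advance the SRH to the next SID and rewrite the destination address. The SRH handling is heavily simplified and the table format and packet attributes are hypothetical.

```python
# Locally formed by the control plane: inbound (pvc, vsu) -> outbound (pvc, vsu).
PVSF_SLOT_FORWARDING = {(0, 0): (0, 1), (0, 1): (0, 0)}

def pvsf_end_function(pkt, outbound_vsus):
    """Sketch of the extended SRv6 end behavior: slotted scheduling plus SID advance."""
    out_key = PVSF_SLOT_FORWARDING[(pkt.in_pvc, pkt.in_vsu)]
    outbound_vsus[out_key].append(pkt)        # place the packet in the outbound VSU

    if pkt.segments_left > 0:                 # advance to the next SID and rewrite the DA
        pkt.segments_left -= 1
        pkt.ipv6_dst = pkt.segment_list[pkt.segments_left]
```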
An embodiment of the present application further provides a network device, including a processor and a communication interface. The processor is configured to implement the above packet message transmission method 100 or packet message transmission method 1000, and the communication interface is used to communicate with external devices. This network device embodiment corresponds to the above network device method embodiment; each implementation process and implementation manner of the above method embodiment is applicable to this network-side device embodiment and can achieve the same technical effect.

Specifically, an embodiment of the present application further provides a network-side device. As shown in Figure 23, the network-side device 2300 includes a processor 2301, a network interface 2302 and a memory 2303, where the network interface 2302 is, for example, a common public radio interface (CPRI).

Specifically, the network-side device 2300 in this embodiment of the present invention further includes instructions or programs stored in the memory 2303 and executable on the processor 2301. The processor 2301 calls the instructions or programs in the memory 2303 to execute the above packet message transmission method and achieves the same technical effect, which will not be described again here to avoid repetition.
An embodiment of the present application further provides a readable storage medium. A program or instructions are stored on the readable storage medium, and when the program or instructions are executed by a processor, each process of the above packet message transmission method embodiments is implemented with the same technical effect, which will not be repeated here to avoid repetition. The processor is the processor in the terminal described in the above embodiments. The readable storage medium includes a computer-readable storage medium, such as a computer read-only memory (ROM), a random access memory (RAM), a magnetic disk or an optical disc.
An embodiment of the present application further provides a chip. The chip includes a processor and a communication interface, the communication interface is coupled to the processor, and the processor is configured to run programs or instructions to implement each process of the above packet message transmission method embodiments with the same technical effect, which will not be repeated here to avoid repetition. It should be understood that the chip mentioned in the embodiments of the present application may also be called a system-on-chip, a chip system or a system-on-a-chip, etc.
An embodiment of the present application further provides a computer program/program product. The computer program/program product is stored in a storage medium and is executed by at least one processor to implement each process of the above packet message transmission method embodiments with the same technical effect, which will not be repeated here to avoid repetition.
From the description of the above implementations, those skilled in the art can clearly understand that the methods of the above embodiments can be implemented by means of software plus a necessary general-purpose hardware platform; of course, they can also be implemented by hardware, but in many cases the former is the better implementation. Based on this understanding, the technical solution of the present application, in essence or in the part contributing to the prior art, can be embodied in the form of a computer software product. The computer software product is stored in a storage medium (such as a ROM/RAM, a magnetic disk or an optical disc) and includes several instructions to cause a terminal (which may be a mobile phone, a computer, a server, an air conditioner, a network device, etc.) to execute the methods described in the various embodiments of the present application.

Abstract

本申请公开了一种分组报文传输方法、网络设备及可读存储介质,属于通信网络领域,本申请实施例的分组报文传输方法,由第一网络设备执行,所述方法包括:获取待发送的至少一个分组报文;将所述至少一个分组报文配置到第一分组虚拟时隙帧发送。

Description

分组报文传输方法、网络设备及可读存储介质
相关申请的交叉引用
本申请要求在2022年09月05日提交的中国专利申请第202211079174.2号的优先权,该中国专利申请的全部内容通过引用包含于此。
技术领域
本申请属于通信网络技术领域,具体涉及一种分组报文传输方法、网络设备及可读存储介质。
背景技术
分组网络以分组报文(通常称为Packet)为基本转发单元,基于报文头部的信息决定转发路径和转发服务质量(Quality of Service,Qos),传统分组转发的Qos体系一般采用差分服务(DiffServ)模式,提供有限程度的确定性转发能力,尤其对于转发的时延和抖动性能,分组转发的确定性能力取决于承载业务的流量到达特征(Traffic Arrival Specification)、分组转发网络转发拓扑和限制条件(Topology & Constrains)以及分组网络即时状态,转发过程中往往存在瞬时拥塞甚至丢包。由此,相关技术中的分组转发被认为是尽力而为(Best-effort)转发模式,无法提供确定性时延和抖动转发。
发明内容
本申请实施例提供一种分组报文传输方法、网络设备及可读存储介质,能够解决无法提供确定性时延和抖动的分组报文转发的问题。
第一方面,提供了一种分组报文传输方法,由第一网络设备执行,所述方法包括:获取待发送的至少一个分组报文;将所述至少一个分组报文配置到第一分组虚拟时隙帧发送。
第二方面,提供了一种分组报文传输方法,由第三网络设备执行,所述 方法包括:接收第一网络设备通过第一分组虚拟时隙帧发送的至少一个分组报文;基于所述至少一个分组报文的到达时间,以分组业务流的方式发送接收到的所述至少一个分组报文。
第三方面,提供了一种网络设备,该网络设备包括处理器和存储器,所述存储器存储可在所述处理器上运行的程序或指令,所述程序或指令被所述处理器执行时实现如第一方面所述的方法的步骤,或者实现如第二方面所述的方法的步骤。
第四面,提供了一种可读存储介质,所述可读存储介质上存储程序或指令,所述程序或指令被处理器执行时实现如第一方面所述的方法的步骤,或者实现如第二方面所述的方法的步骤。
第五方面,提供了一种芯片,所述芯片包括处理器和通信接口,所述通信接口和所述处理器耦合,所述处理器用于运行程序或指令,实现如第一方面所述的方法的步骤,或实现如第二方面所述的方法的步骤。
第六方面,提供了一种计算机程序/程序产品,所述计算机程序/程序产品被存储在存储介质中,所述计算机程序/程序产品被至少一个处理器执行以实现如第一方面所述的方法的步骤,或实现如第二方面所述的方法的步骤。
附图说明
图1示出本申请实施例一种分组报文传输方法的流程示意图;
图2示出本申请实施例中一种PVSF结构示意图;
图3示出本申请实施例中一种时隙开销的位置示意图;
图4示出本申请实施例中一种时隙开销信息的示意图;
图5示出本申请实施例中PVSF指示信息位于多种分组报文中的位置示意图;
图6示出本申请实施例中一种分组报文缓存示意图;
图7示出本申请实施例中一种时隙队列的调度示意图;
图8示出本申请实施例中另一种时隙队列的调度示意图;
图9示出本申请实施例中多业务与虚拟时隙单元的映射示意图;
图10示出本申请实施例中另一种分组报文传输方法的流程示意图;
图11示出本申请实施例中一种分组报文转发的网络示意图;
图12示出本申请实施例中一种基于IPv6分组报文的分组虚拟时隙帧的结构示意图;
图13示出本申请实施例中一种基于IPv6分组报文的分组虚拟时隙帧的开销信息位置示意图;
图14示出本申请实施例中一种基于IPv6分组报文的时隙队列示意图;
图15示出本申请实施例中一种基于IPv6分组报文的时隙队列调度示意图;
图16示出本申请实施例中一种基于IPv6分组报文的多业务调度示意图;
图17示出本申请实施例中一种基于MPLS分组报文的分组虚拟时隙帧的结构示意图;
图18示出本申请实施例中一种基于MPLS分组报文的分组虚拟时隙帧的开销信息位置示意图;
图19示出本申请实施例中一种基于MPLS分组报文的定长分片示意图;
图20示出本申请实施例中一种基于MPLS分组报文转发示意图;
图21示出本申请实施例中一种基于SRv6分组报文的分组虚拟时隙帧的结构示意图;
图22示出本申请实施例中一种基于SRv6分组报文的分组虚拟时隙帧的开销信息位置示意图;
图23示出本申请实施例提供的一种网络侧设备的硬件结构示意图。
具体实施方式
下面将结合本申请实施例中的附图,对本申请实施例中的技术方案进行 清楚描述,显然,所描述的实施例是本申请一部分实施例,而不是全部的实施例。基于本申请中的实施例,本领域普通技术人员所获得的所有其他实施例,都属于本申请保护的范围。
本申请的说明书和权利要求书中的术语“第一”、“第二”等是用于区别类似的对象,而不用于描述特定的顺序或先后次序。应该理解这样使用的术语在适当情况下可以互换,以便本申请的实施例能够以除了在这里图示或描述的那些以外的顺序实施,且“第一”、“第二”所区别的对象通常为一类,并不限定对象的个数,例如第一对象可以是一个,也可以是多个。此外,说明书以及权利要求中“和/或”表示所连接对象的至少其中之一,字符“/”一般表示前后关联对象是一种“或”的关系。
值得指出的是,本申请实施例所描述的技术不限于长期演进型(Long Term Evolution,LTE)/LTE的演进(LTE-Advanced,LTE-A)系统,还可用于其他无线通信系统,诸如码分多址(Code Division Multiple Access,CDMA)、时分多址(Time Division Multiple Access,TDMA)、频分多址(Frequency Division Multiple Access,FDMA)、正交频分多址(Orthogonal Frequency Division Multiple Access,OFDMA)、单载波频分多址(Single-carrier Frequency Division Multiple Access,SC-FDMA)和其他系统。本申请实施例中的术语“系统”和“网络”常被可互换地使用,所描述的技术既可用于以上提及的系统和无线电技术,也可用于其他系统和无线电技术。以下描述出于示例目的描述了新空口(New Radio,NR)系统,并且在以下大部分描述中使用NR术语,但是这些技术也可应用于NR系统应用以外的应用,如第6代(6th Generation,6G)通信系统。
下面结合附图,通过一些实施例及其应用场景对本申请实施例提供的分组报文传输方案进行详细地说明。
图1示出本申请实施例中的分组报文传输方法的一种流程示意图,该方法100可以由第一网络设备执行。换言之,所述方法可以由安装在第一网络 设备上的软件或硬件来执行。如图1所示,该方法可以包括以下步骤。
S110,获取待发送的至少一个分组报文。
S112,将所述至少一个分组报文配置到第一分组虚拟时隙帧发送。
在本申请实施例中,将所述至少一个分组报文配置到第一分组虚拟时隙帧发送,也可以表述为将所述至少一个分组报文承载到第一分组虚拟时隙帧发送,或者,将所述至少一个分组报文映射到第一分组虚拟时隙帧发送等。
在本申请实施例中,分组虚拟时隙帧(Packet-based Virtual Slotted Frame,PVSF)为具有时分复用特征的帧结构,该分组虚拟时隙帧可以按照一定的方式周期性的发送。
在本申请实施例中,通过在分组报文流中定义虚拟的具有时分复用特征的帧结构,通过该分组虚拟时隙帧传输分组报文,实现分组架构下确定性转发能力,保障优化的确定性转发时延和抖动性能。
在本申请实施例的一个可能的实现方式中,所述第一分组虚拟时隙帧可以包括至少一个采用时分复用结构的分组虚拟容器,所述分组虚拟容器包括至少一个虚拟时隙单元,所述虚拟时隙单元包括至少一个最小时隙单元,一个所述最小时隙单元用于承载所述至少一个分组报文的一个分组信息单元。
在本申请实施例中,分组虚拟时隙帧可以由分组报文(Packet)(即分组报文承载在分组虚拟时隙帧中)组成具有时分复用特征,即一个或多个PVSF帧基于名义固定时间间隔重复出现(“名义”的含义是指理想的情况,实际情况会相比理想情况存在允许内的偏差,比如PVSF帧结构的名义固定时间间隔在以太网接口实际典型存在+-100PPM的偏差)。PVSF帧包括可以灵活定义的分组虚拟容器(Packet-based Virtual Container,PVC),PVC具有时分复用结构的特性,即通过指定的时分复用结构形成PVSF用于承载具有确定性需求的业务。PVC容器的时分复用结构可以是可循环嵌套的层次化复用结构,最简单的场景PVSF只包括一个PVC,此时PVC和PVSF基本可等同。PVC进一步可以包含组成PVC的虚拟时隙单元(Virtual Slotted Unit,VSU),VSU 由归属它的一个或多个分组报文组成(即VSU可以承载一个或多个分组报文);一个VSU包含一个或多个最小时隙单元(Minimum Slotted Unit,MSU),MSU可以由包括时隙开销信息的分组报文头和净荷构成。最简单的场景中,PVC只包括一个VSU,此时PVC和VSU基本可等同;一个VSU最极端的场景只包含一个分组信息单元。如图2所示,通过PVSF、PVC和VSU共同构成分组虚拟时隙帧结构。
在本申请实施例中,第一网络设备获取的至少一个分组报文可以是第一网络设备感知接入第一网络设备的分组业务获取的至少一个分组报文,也可以是从第二网络设备发送的分组业务流中获取的至少一个分组报文,还可以是其它网络设备通过第二分组虚拟时隙帧发送的分组报文,下面分别针对以上各种实现方式进行说明。
(一)第一网络设备感知接入第一网络设备的分组业务获取的至少一个分组报文或从第二网络设备发送的分组业务流中获取的至少一个分组报文
在该种实现方式中,第一网络设备可以作为支持PVSF的报文入口节点,感知或接收到的分组业务流的分组报文封装到PVSF中发送。例如,第一网络设备按照网络侧接口NNI进行识别第二网络设备发送的分组业务流,或者,第一网络设备通过业务接入PE业务侧接口UNI进行识别分组报文。
基于上述分组虚拟时隙帧结构,在一个可能的实现方式中,S120中将所述至少一个分组报文配置到第一分组虚拟时隙帧发送,包括:将所述至少一个分组报文静态或动态配置到与所述至少一个分组报文的业务相关信息对应的目标虚拟时隙单元发送,其中,所述目标虚拟时隙单元为所述第一分组虚拟时隙帧的一个虚拟时隙单元,所述业务相关信息包括以下至少之一:业务类型、业务标识、业务优先级、业务所需带宽、时延、抖动、丢包率。
例如,在S110中,第一网络设备通过感知接入第一网络设备的分组业务,获取到至少一个分组报文时,然后,在S120中,可以根据该分组报文的业务类型、业务标识、业务优先级、业务所需带宽等中的至少之一,将到少一个 分组报文映射到与该业务类型对应的虚拟时隙单元上。例如,业务1固定映射到VSU ID=1。又例如,如果分组报文的业务所需带宽为50M,则可以将该分组报文映射到与该带宽对应大小的虚拟时隙单元,以避免将分组报文分片映射到不同的虚拟时隙单元,而增加开销信息。
其中,分组报文与虚拟时隙单元的映射关系,可以在S112之前确定。例如,第一网络设备可以根据预设信息,确定分组报文与虚拟时隙单元的映射关系,该预设信息可以包括以下至少之一:所述第一网络设备的能力信息、承载的业务的需求、网络环境。
又例如,第一网络设备作为的支持PVSF网络的边界节点,在S110中,接收第二网络设备发送的分组业务流,从分组业务流中获取所述至少一个分组报文,然后,基于到达的分组报文的时延、抖动、丢包率中的至少之一,确定将至少一个分组报文映射到目标虚拟时隙单元。例如,根据至少一个分组报文的时延和抖动,确定需要缓存的多少分组报文才进行发送,再根据需要缓存的分组报文的数量,确定对应的目标时隙单元。
在另一个可能的实现方式中,在S112中,也可以采用动态配置的方式,将所述至少一个分组报文配置到第一分组虚拟时隙帧发送。在该可能的实现方式中,将所述至少一个分组报文配置到第一分组虚拟时隙帧发送可以包括:将所述至少一个分组报文配置到所述第一分组虚拟时隙帧的目标虚拟时隙单元发送,其中,所述目标虚拟时隙单元为最近的可用的待发送虚拟时隙单元。例如,在S110中获取到至少一个分组报文时,当前第一分组虚拟时隙帧正在发送VSU1,后面待发送的包括VSU2和VSU3,其中,VSU2允许的最大字节总数为50M,VSU3允许的最大字节总数为100M,如果待发送的分组报文为30M,则最近可用的时隙单元为VSU2,如果待发送的分组报文为80M,则最近可用的时隙单元为VSU3。通过该可能的实现方式,可以减少分组报文的时延。
为了使得接收端可以正确接收分组报文,在一个可能的实现方式中,第 一分组虚拟时隙帧可以具有帧开销信息,该帧开销信息可以包括帧结构指示符和帧标识符,例如,帧结构指示符可以包括帧起始指示符、帧结束指示符、帧延续指示符等中的一个或多个,帧标识符用于标识所述第一分组虚拟时隙帧。例如,在每2个PVSF帧组成复帧循环的情况下,PVSF帧标识符(ID)取值可以为0和1。
在一个或多个可能的实现方式中,在S120中,将所述至少一个分组报文配置到第一分组虚拟时隙帧发送可以包括以下步骤。
步骤1,将所述目标虚拟时隙单元的时隙开销携带在所述至少一个分组报文的预定位置,其中,所述时隙开销包括所述第一分组虚拟时隙帧的帧开销信息,所述帧开销信息包括:帧结构指示符和帧标识符。
步骤2,将携带有所述时隙开销的至少一个分组报文配置在所述目标虚拟时隙单元中发送。
在上述可能的实现方式中,如图3所示,预定位置可以是分组报文的头部、净荷区头部或净荷区尾部等方便识别的位置。
可选的,所述帧开销信息还可以包括以下至少之一:所述至少一个分组报文的业务相关信息、维护诊断信息、管理信息、所述目标虚拟时隙单元所属的分组虚拟容器的指示信息、所述目标虚拟时隙单元的指示信息。
其中,维护诊断信息可以提供随路OAM诊断、保护倒换和无损调整等开销信息,支持诊断虚拟分组时隙帧转发的误码、时延等性能测量,APS协议传递和无损调整指令的传递。
在本实施例的可选实现方式中,PVSF可以具有帧起始指示符和可选的帧结束指示符以及可选的帧开销信息。PVC可以具有容器时分复用指示信息以及可选的容器开销信息。PVSF帧起始指示符可以名义时间主动插入指示报文,也可以同时携带在VSU开销中。PVC容器指示信息等开销信息可以携带在主动插入的开销报文,也可以随归属PVC的分组报文头携带。VSU时隙指示信息和其他开销信息可以携带在主动插入的带开销的报文,也可以 随业务VSU报文携带。
在一个可能的实现方式中,将携带有所述时隙开销的至少一个分组报文配置在所述目标虚拟时隙单元中发送可以包括:在所述至少一个分组报文中的一个分组报文的业务数据中增加所述时隙开销后形成的第一报文,将所述第一报文配置到所述目标虚拟时隙单元的一个最小时隙单元中发送。也就是说,VSU承载业务的MSU中可以是原始承载的业务增加时隙开销后的形成的VSU报文。
在另一个可能的实现方式中,将携带有所述时隙开销的至少一个分组报文配置在所述目标虚拟时隙单元中发送可以包括:将所述至少一个分组报文的业务数据按照预定长度进行转换后形成的第二报文,将所述第二报文配置到所述目标虚拟时隙单元的一个最小时隙单元中发送,所述预定长度为不超过一个最小时隙单元所能承载的数据大小。也就是说,VSU承载业务的MSU中也可以是原始承载的业务的报文进行转换后形成的报文(比如进行固定长度分组信息单元的分片和重组、空闲信息的填充或多业务封装聚合等)。
在本申请实施例中,时隙开销包括但不限于PVSF/PVC/VSU/MSU的起始信号、结束信号、承载业务的相关信息(业务标识,类型)、容器的相关信息(容器标识,容器内时隙标识)、维护诊断信息(OAM)和其他辅助信息等,如图4所示。为了提升承载效率,上述信息可以通过编码设计减少不必要的指示信息,或通过类似TDM的复帧开销图案减少每个分组报文的携带信息。
例如,如图5所示,时隙开销(也可以称为PVSF指示信息)对于分组L2层以太网分组报文可定义在媒体接入控制层(Medium Access Control,MAC)帧头、帧尾或其他扩展区域,对于分组多协议标签交换(MPLS)分组报文可以定义在MPLS标签栈、标签栈与净荷之间的扩展区域、报文尾部中的之一的位置。对于网络层IP的分组报文,所述时隙开销可以携带在以下之一的位置:所述分组报文的IP头部区域、逐跳扩展头(HBH)和目的扩展 头(DOH)区域、可扩展选型(Opitons)区域。对于段路由(Segment Routing)分组报文,所述时隙开销可以携带路由列表或段路由的扩展区域;对于分组L3层以太网分组报文,时隙开销可以在IPv4/IPv6帧头、帧尾或其他扩展头(包括HBH、DOH、SRH等)以及扩展区域;对于分组L4层以太网分组报文,时隙开销可以放在传输层或应用层封装或其他扩展信息区域。为了提供封装效率,上述信息可以通过紧凑编码或压缩编码的方式具体实现。
在一个或多个可能的实现方式中,在将所述至少一个分组报文配置到与所述至少一个分组报文的属性信息对应的目标虚拟时隙单元发送之前,所述方法还可以包括:根据预设信息,确定以下至少之一。
(1)所述第一分组虚拟时隙帧的结构。
例如,基于IPv6分组报文定义PVSF帧结构,设计PVSF帧结构由1个PVC构成,如果不考虑PVSF多路汇聚,可以不用额外的PVSF帧ID标识,此时PVC标识可以和PVSF标识合一,每个PVC定义10个VSU,每个VSU支持一个最长1000Byte的分组报文发送,VSU名义时隙大小为10us。承载的确定新业务ID范围取值16bit,可以根据承载业务情况进行灵活分配。
(2)分组报文与所述第一分组虚拟时隙帧的虚拟时隙单元的映射关系,所述映射关系包括分组报文的业务相关信息与虚拟时隙单元的对应关系。例如,分组报文的业务标识与虚拟时隙单元的对应关系,分组业务1对应VSU1,即分组业务1的分组报文固定映射到VSU1上。
(3)所述第一分组虚拟时隙帧的发送方式,所述第一分组虚拟时隙帧的发送方式包括:连续发送的分组虚拟时隙帧的数量n、两组连续发送的n个分组虚拟时隙帧之间的间隔。例如,每2个PVSF帧组成复帧循环,每2个PVSF帧之间的间隔为3ms。
在上述一个或多个可能的实现方式中,第一网络设备感知接入所述第一网络设备的分组业务或接收第二网络设备发送的分组业务流获取所述至少一个分组报文的情况下,可选的,在S120中,将所述至少一个分组报文配置到 第一分组虚拟时隙帧发送可以包括以下步骤。
步骤1,将所述至少一个分组报文缓存到时隙队列,其中,所述时隙队列的结构与所述第一分组虚拟时隙帧的结构对应。
步骤2,在所述时隙队列缓存的分组报文的数量达到第一门限的情况下,获取所述时隙队列中的分组报文,将获取的所述分组报文配置在所述第一分组虚拟时隙帧中发送。
在上述可选的实现方式中,第一网络设备可以将获取的至少一个分组报文缓存到时隙队列中,直至时隙队列中缓存的分组报文的数量达到第一门限,再从时隙队列中获取分组报文,并配置在第一分组虚拟时隙帧中发送。
可选地,所述第一门限根据第一时延和第一抖动确定,其中,所述第一时延为所述至少一个分组报文的时延中的最大值,所述第一抖动为所述至少一个分组报文的抖动中的最大值。例如,第一网络设备通过接收第二网络设备发送的分组业务流获取分组报文的情况下,可以根据到达的多个分组报文的最大时延和最大抖动,确定需要缓存的分组报文的数量。
(二)接收通过第二分组虚拟时隙帧发送的分组报文
在该实现方式中,第一网络设备作为分组报文的转发节点,将通过第二分组虚拟时隙帧发送的分组报文调度到第一分组虚拟时隙帧上发送。
在一个可能的实现方式中,S110中,所述获取待发送的至少一个分组报文,包括以下步骤。
步骤1,接收到第二分组虚拟时隙帧的帧起始指示符或代表第二分组虚拟时隙帧的帧头位置的第一分组报文。
在具体应用中,可以通过在分组报文中携带帧起始指示符指示第二分组虚拟时隙帧的帧头位置,也可以通过代表第二分组虚拟时隙帧的帧头位置的第一分组报文指示第二分组虚拟时隙帧的帧头位置。
步骤2,从所述帧起始指示符或所述第一分组报文开始,将接收到的分组报文缓存到空闲的时隙队列,直至满足预设条件,其中,所述时隙队列的 结构与所述第二分组虚拟时隙帧的结构相应,所述预设条件包括以下之一:接收到正确的帧结束指示符;所述时隙队列为满;接收到预期代表新的帧的帧头位置的第二分组报文;接收到预期新的帧的帧起始指示符。
在上述可能的实现方式中,可选的,在预定时间内未满足所述预设条件的情况下,丢弃接收到的分组报文。
在本申请实施例中，由于分组报文是通过第二分组虚拟时隙帧发送的，而下一个分组虚拟时隙帧的到达是可以预期的，因此，在一个可能的实现方式中，在接收到第二分组虚拟时隙帧的帧起始指示符或代表第二分组虚拟时隙帧的帧头位置的第一分组报文之后，所述方法还可以包括：根据所述第二分组虚拟时隙帧的帧头位置的第一到达时刻，预估所述第二分组虚拟时隙帧后的下一个分组虚拟时隙帧的帧头位置的第二到达时刻，并在所述第二到达时刻的预设范围内，检测所述下一个分组虚拟时隙帧的帧头位置的分组报文。其中，第一到达时刻可以为帧起始指示符或所述第一分组报文的到达时刻。通过该实现方式，第一网络设备可以在第二到达时刻的预设范围检测下一个分组虚拟时隙帧的帧头位置的分组报文，从而可以避免第一网络设备需要一直检测下一个分组虚拟时隙帧是否到达，节约第一网络设备的能耗。
PVSF设计有名义的到达时刻,但考虑到分组网络存在的固有频率偏差或分组收发抖动,PVSF帧起始的到达时刻存在虚拟帧的定帧机制维持PVSF时隙化结构,因此,上述可能的实现方式中在第二到达时刻的预设范围内检测下一个分组虚拟时隙帧。
在上述可能的实现方式中,可选的,在预估所述第二分组虚拟时隙帧后的下一个分组虚拟时隙帧的帧头位置的第二到达时刻之后,所述方法还可以包括:在所述第二到达时刻的预设范围之内未检测到下一个分组虚拟时隙帧的帧头位置的分组报文到达的情况下,或者,在所述第二到达时刻的预设范围之外,检测到所述下一个分组虚拟时隙帧的帧头位置的分组报文到达的情况下,确定所述下一个分组虚拟时隙帧出现异常。在确定下一个分组虚拟时 隙帧出现异常之后,重新检测到新的分组虚拟时隙帧的帧头位置的分组报文,则可以重新开始上述检测过程。
在本申请实施例中,可选的,所述时隙队列可以包括:分组虚拟时隙帧队列;所述时隙队列还包括以下之一:从属于所述分组虚拟时隙帧队列的至少一个分组虚拟容器队列、从属于所述分组虚拟容器队列的至少一个虚拟时隙单元队列,其中,不同的分组虚拟容器承载的分组报文缓存到对应的分组虚拟容器队列,不同的虚拟时隙单元承载的分组报文缓存到对应的虚拟时隙单元队列。
在本申请实施例中,根据具体实现的不同,时隙队列的大小可以从最小队列深度到最大PVSF帧长之间变化(包括必要的容忍隔离间隙)。
在一个可能的实现方式中,所述将接收到的分组报文缓存到空闲的时隙队列可以包括:根据接收到的分组报文携带的帧开销信息,将接收到的所述分组报文缓存到所述时隙队列中与所述帧开销信息对应位置。通过该可选的实现方式,第一网络设备可以根据至少一个分组报文的时隙开销信息,确定各个分组报文缓存的分组虚拟容器队列及虚拟时隙单元队列。
例如,在本申请实施例中,时隙队列(SQ)入向支持PVSF归属的分组报文接收(入队),当支持PVSF功能的节点入口识别到PVSF归属的分组报文到达,可以从帧起始信号指示开始将该信号指示包含的PVSF分组报文接收到一个空闲的时隙队列,直到接收到帧结束信号(如果有),或者队列为满,或者新的帧起始信号到达,如图6所示。
其中,SQ队列结构,可以有PVSF、PVC和VSU分别独立的队列或逻辑队列结构(共享物理队列)。如果有独立的PVC队列,PVC队列从属于PVSF,则不同的PVC分组报文根据其容器标识进入PVC队列;同样,如果存在VSU队列,则不同VSU的分组报文根据其时隙标识进入VSU队列,VSU队列从属于PVC。如果采用逻辑队列结构,则PVC和VSU共享PVSF队列资源,通过分组报文携带的标识信息进行区分。不同的PVSF、PVC和VSU根据其 帧结构的标识信息确定相对位置,同一个VSU内的分组报文可以根据设定的策略确定入队顺序。
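下面给出一个仅作示意的实现草图（字段名、队列深度均为示例性假设，并非本申请的规范实现，超时丢弃等处理从略），用于说明上述从帧起始指示开始将分组报文接收进空闲时隙队列、直到帧结束、队列为满或新的帧起始到达为止的入队过程。

```python
# 仅作示意（非规范实现）：按帧起始/结束指示把属于同一 PVSF 帧的分组报文
# 接收进一个空闲时隙队列（SQ）。队列深度、字段名均为示例性假设。

FRAME_START, FRAME_CONT_A, FRAME_CONT_B, FRAME_END = 0b00, 0b10, 0b01, 0b11
MAX_SQ_PACKETS = 20   # 示例：一个队列最多缓存一个 PVSF 帧包含的分组数

def enqueue_pvsf_frame(packet_stream, free_queue: list) -> list:
    """从帧起始指示开始入队，直到帧结束、队列为满或新的帧起始到达。"""
    started = False
    for pkt in packet_stream:            # pkt 假定为 (指示符, VSU ID, 净荷)
        indicator = pkt[0]
        if not started:
            if indicator == FRAME_START: # 识别到 PVSF 帧头
                started = True
                free_queue.append(pkt)
            continue
        if indicator == FRAME_START:     # 新的帧起始到达，本帧接收结束
            break
        free_queue.append(pkt)
        if indicator == FRAME_END or len(free_queue) >= MAX_SQ_PACKETS:
            break                        # 收到帧结束指示或队列为满
    return free_queue
```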
在一个可能的实现方式中,在通过第二分组虚拟时隙帧接收到至少一个分组报文之后,在S120中,将所述至少一个分组报文配置到第一分组虚拟时隙帧发送可以包括:在所述时隙队列中缓存的分组报文的数量达到第二门限的情况下,调度所述时隙队列中的分组报文至所述第一分组虚拟时隙帧发送。
可选的,所述第二门限小于或等于预设值,所述预设值为一个所述第一分组虚拟时隙帧的帧长或一个所述第一分组虚拟时隙帧允许的最大字节总数。
在本申请实施例中,第一网络设备可根据分组网络的性能设置合适的第二门限,例如,可以结合抖动容忍性能进行设置,第二门限可以按照分组虚拟时隙帧接收的时间或等价指标(比如对应的缓存帧数或字节数)设置。第二门限通常可以根据系统转发开销设置,尽量小减少队列排队等待引起的转发时延。
可选的,所述第一分组虚拟时隙帧的第一帧起始位置与所述第二分组虚拟时隙帧的第二帧起始位置之间的相位差不大于预设阈值。也就是说,PVSF出队节点的出向名义帧起始位置可以与入向不严格相位对齐,可以通过增加缓存的方式匹配出向和入向的相位差异。
在本申请实施例中,由于入向和出向均根据分组虚拟时隙帧结构进行转发,转发时延和抖动具有确定性上下界。入向和出向PVSF帧结构的相对名义频率符合预设的要求(比如在以太网+-100ppm范围内),相对名义相位(PVSF帧头的位置)不严格要求同步,入向和出向的名义相位偏差会超过1/2的分组虚拟时隙帧的发送时间Tf
在一个可能的实现方式中,所述调度所述时隙队列中分组报文至所述第一分组虚拟时隙帧发送可以包括:根据所述时隙队列中的分组报文所属的第一分组虚拟容器和/或第一分组时隙单元,将所述分组报文调度到所述第一分组虚拟时隙帧的第二分组虚拟容器和/或第二分组时隙单元,其中,所述第一 分组虚拟容器和/或第一组分时隙单元与所述第二分组虚拟容器和/或第二分组时隙单元具有交换关系。在该可能的实现方式中,第一分组虚拟时隙帧与第二分组虚拟时隙帧可以按照静态配置的时隙交换表进行调度,根据PVSF/PVC/VSU帧结构信息进行调度,例如,根据控制面配置的入向PVSF和出向PVSF的PVC/VSU交换关系,实现入向PVC/VSU到出现PVC/VSU的时隙化调度。
在另一个可能实现方式中,所述调度所述时隙队列中分组报文至所述第一分组虚拟时隙帧发送,包括:按照不同目标分组报文在所述第二分组虚拟时隙帧中的相对位置,将所述时隙队列中的目标分组报文调度至所述第一分组虚拟时隙帧发送,其中,不同目标分组报文之间在所述第一分组虚拟时隙帧中的相对位置与在所述第二分组虚拟时隙帧中的相对位置相同,不同所述目标分组报文的目的地址相同。也就是说,在该可能的实现方式中,根据PVSF/PVC/VSU帧结构信息确定性分组报文的相对顺序,调度仍然根据分组报文头进行分组转发,分组转发可以沿用现有的机制,包括MAC、MPLS、IP、SR等,由于入队时分组报文已经按照PVSF帧结构信息进行排序,因此,出向调度时也能按照预期的PVSF帧结构发送,如图7所示。
可选的,所述第二分组虚拟时隙帧和所述第一分组虚拟时隙帧与相同的同步基准对齐,或者所述第二分组虚拟时隙帧与所述第一分组虚拟时隙帧之间存在稳定的相位偏差。也就是说,在本申请实施例中,时隙化调度不严格要求入向PVSF和出现PVSF名义相位的严格同步,但可以维持入向帧结构和出向帧结构之间相对稳定的映射关系,维持映射关系的方法可以通过时钟同步、对齐相同的同步基准、维持相对固定偏差的虚拟时钟等技术实现。映射关系的偏差由上述第二门限提供容忍,如果相位同步的情况下可实现零排队时延调度。
可选的,如果所述第二分组虚拟时隙帧发送速率与第一分组虚拟时隙帧不同,在保持相同PVSF帧结构及帧结构包含的分组包数或字节总数相同的 情况下,可以进行相应的队列和调度速率适配处理。
在一个可能实现方式中,可能有多路PVSF汇聚到第一网络设备,因此,在该可能的实现方式中所述调度所述时隙队列中分组报文至所述第一分组虚拟时隙帧发送可以包括:在多个所述时隙队列中缓存的分组报文的数量达到第二门限的情况下,采用间插轮转的调度方式,将多个所述时隙队列中的分组报文调度至所述第一分组虚拟时隙帧发送,其中,所述间插轮转的调度方式是在依次调度各个所述时隙队列缓存的第一个缓存单元,再依次调度各个所述时隙队列缓存的下一个缓存单元,直至调度完各个所述时隙队列中缓存的缓存单元,一个缓存单元缓存一个虚拟时隙单元承载的分组报文。
例如,如果存在多路PVSF汇聚时,可以采用基于PVSF帧结构的PVSF/PVC/VSU间插机制实现平滑和低时延低抖动的汇聚,如图8所示。其中,VSU内不同业务报文也可以通过类似分组通用处理器共享调度(Packet Generalized Processor Sharing,PGPS)进一步提升转发的时延和抖动性能,如图9所示。另外,PVSF/PVC/VSU也可以支持同时隙结构流量聚合的方式实现汇聚。
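作为说明，下面给出多路PVSF汇聚时按VSU缓存单元进行间插轮转调度的一个简单草图（函数名与数据结构均为示例性假设，并非本申请的规范实现）：先依次取各个时隙队列的第一个缓存单元，再依次取下一个缓存单元，直至取完。

```python
# 仅作示意：多路 PVSF 汇聚时按 VSU 缓存单元进行间插轮转调度，
# 先依次取各队列的第一个缓存单元，再依次取下一个，直至取完。

from itertools import zip_longest

def interleave_round_robin(slot_queues):
    """slot_queues: 每路 PVSF 一个列表，列表元素为一个 VSU 缓存单元。"""
    scheduled = []
    for round_units in zip_longest(*slot_queues):   # 第 n 轮取每个队列的第 n 个单元
        scheduled.extend(u for u in round_units if u is not None)
    return scheduled

# 示例：两路 PVSF，各自缓存了若干 VSU 缓存单元
q1 = ["A-VSU0", "A-VSU1"]
q2 = ["B-VSU0", "B-VSU1", "B-VSU2"]
assert interleave_round_robin([q1, q2]) == ["A-VSU0", "B-VSU0",
                                            "A-VSU1", "B-VSU1", "B-VSU2"]
```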
当有多业务同时接入到PVSF时,也可以配置不同的业务接入映射到PVSF帧结构的策略,包括固定虚拟时隙映射,即不同的业务固定接入到指定的PVSF的PVC的VSU;或随机接入VSU的周期接入方式;也包括类似PGPS的分组方式接入VSU等。
在本申请实施例中,时隙队列(SQ)按照PVSF报文流入队和出队,由于SQ操作对象为PVSF的分组报文,因此,出向发送的速率大于(入向多个PVSF接收然后汇聚为多个PVSF发送)或等于(入向1个PVSF接收然后出向1个PVSF发送)入向接收速率确保时隙队列无拥塞。PVSF从入向到出向的转发由时隙化调度功能完成。
可选的，在调度所述时隙队列中的分组报文至第一分组虚拟时隙帧发送的出口速率与缓存分组报文至所述时隙队列的入口速率不同的情况下，所述第一分组虚拟时隙帧的各个虚拟时隙单元的大小与所述第二分组虚拟时隙帧的大小不完全相同，且所述第一分组虚拟时隙帧的帧结构与所述第二分组虚拟时隙帧的帧结构相同，所述第一分组虚拟时隙帧允许的分组报文数或字节总数与所述第二分组虚拟时隙帧允许的分组报文数或字节总数相同。例如，如果时隙队列的出口速率相比入口速率提升，则第一分组虚拟时隙帧的虚拟时隙单元大小可以相应进行缩小，如果时隙队列的出口速率相比入口速率下降，则第一分组虚拟时隙帧的虚拟时隙单元大小可以相应进行增大，但第一分组虚拟时隙帧的帧结构与第二分组虚拟时隙帧的帧结构相同。通过该可选的实现方式，可以减少分组报文的转发时延。
在本申请实施例中,通过定义虚拟的具有时分复用特征的帧结构,设计支持该虚拟帧结构的时隙队列,引入时隙化调度方法,实现分组架构下确定性转发能力,保障优化的确定性转发时延和抖动性能。
在本申请实施例中,由于通过分组虚拟时隙帧发送分组报文,也是按照时间的先后顺序发送的,因此,不支持PVSF功能的节点,可以按照普通分组报文转发可以支持PVSF转发机制穿越普通分组节点。
因此,本申请实施例还提供了另一种分组报文传输方法,该方法由第三网络设备执行,图10示出本申请实施例提供的另一种分组报文传输方法的流程示意图,如图10所示,该方法1000主要包括以下步骤。
S1010,接收第一网络设备通过第一分组虚拟时隙帧发送的至少一个分组报文。
其中,第一网络设备可以按照上述方法100中所描述的方式将至少一个分组报文配置到第一分组虚拟时隙帧上发送,具体可以参见上述方法100中的相关描述,在此不再赘述。
S1020,基于所述至少一个分组报文的到达时间,以分组业务流的方式发送接收到的所述至少一个分组报文。
第三网络设备在以分组业务流的方式发送所述至少一个分组报文时,还 可以将分组报文进行分组发送。例如,第三网络设备可以通过解析每个分组报文的报文头,获取每个分组报文的目的地址,将相同的目的地址的分组报文通过同一个分组业务流发送。
为了减少抖动,第三网络设备还可以将到达的分组报文缓存,直至缓存的分组报文达到预定门限,再以分组业务流的方式发送分组报文。
例如,在图11中,PVSF边界节点1和PVSF边界节点2可以采用上述第一网络设备,PVSF边界节点以PVSF对分组报文进行边界封装转换后发送,分组报文到达普通分组网络中的转发节点(例如,上述第三网络设备),第三网络设备按照普通分组报文转发,即以分组业务流转发,PVSF边界节点2接收分组业务流后,缓存到达的分组报文,重建PVSF帧队列,通过PVSF帧转发分组报文。
由于本申请实施例中采用分组转发报文头扩展的方式,对于常见分组网络不能识别的扩展内容仍然能支持基本的分组转发,比如IPv6对于不识别的扩展头只根据IPv6的DA进行转发,穿越普通分组网络只是在定义的PVSF帧结构内引入了大的时延或抖动,此时需要在穿越普通分组网络的边界PVSF节点增加“重建PVSF帧队列”的功能,该功能相比PVSF帧入口处理只是增大了缓存等待的时间用于消除穿越普通分组网络带来的额外时延或抖动。简单的缓存机制需要增加额外的缓存大小用于覆盖分组网络最差的时延或抖动,例如分组网络最大的时延或抖动为200us,则PVSF边界节点“重建PVSF帧队列”最大将增加到200us的缓存能力。对于穿越普通分组网络可能存在的拥塞丢包等因素不在本实施例考虑范围内,可以通过可靠性增强的技术解决。
另外,对于MPLS等分组转发技术,需要感知到穿越普通分组网络带来的差异时,需要在穿越普通分组网络的边界节点发送侧进行一定的“边界封装转换”功能,类似分组设备其他的边界节点转发功能,PVSF边界节点的封装转换功能简单的处理可以在PVSF扩展报文头的外部再封装一层穿越普 通分组网络的报文头,穿越完成后剥离该增加的外层头即可继续后续的PVSF转发功能。
下面通过具体类型的分组报文对本申请实施例提供的技术方案进行说明。
(一)基于IPv6变长分组报文实现确定性转发
针对IPv6变长分组报文,可以通过以下步骤实现确定性转发。
步骤1,对于PVSF入口节点(即分组业务的感知节点或PVSF边界节点),例如,PE节点,可以先基于IPv6分组报文定义PVSF帧结构,每2个PVSF帧组成复帧循环,即该IP子接口(可以沿用IP地址标识)下的PVSF帧ID取值为0和1。设计PVSF帧结构由1个PVC构成,如果不考虑PVSF多路汇聚,可以不用额外的PVSF帧ID标识,此时PVC标识可以和PVSF标识合一,每个PVC定义10个VSU,每个VSU支持一个最长1000Byte的分组报文发送,VSU名义时隙大小为10us。承载的确定新业务ID范围取值16bit,可以根据承载业务情况进行灵活分配,其帧结构示意如图12所示。
步骤2:PVSF入口节点基于IPv6 HBH扩展头设计PVSF开销信息(Overhead),包括PVSF起始信号指示符2bit(00表示PVSF帧开始,10和01表示帧延续,11表示帧结束);PVSF/PVC ID用1bit表示,0和1交替循环;VSU ID用4bit表示,可用范围为0-9,其余为无效值,结合PVSF/PVC ID,可以有20个可区分的VSU;设计16bit表示业务ID,不同的业务可以灵活接入到不同的VSU时隙;最后可以对PVSF开销信息有一个CRC8的校验,其开销设计如图13所示。
步骤3:PVSF入口节点根据PVSF帧结构设计,确定分组报文映射的虚拟时隙单元。假定分配给PVSF所在IP子接口的保证带宽为10Gbps,PVSF总共可支持20个VSU单位,每个VSU的名义带宽为10Gbps/(2*10)=500Mbps。确定性业务可以请求1-20个VSU时隙单位,业务与时隙的映射关系可以是静态固定的,比如业务1固定映射到VSU ID=1;也可以是动态的映射,比如业务1请求1个VSU单位,即每PVSF中保证1个VSU机会 提供给业务1,当业务1的流量到达,动态分配最近可用的VSU承载该业务,为了简便,本实施例假定采用静态固定映射方案。如果PVSF空闲,则PE节点可主动插入一个携带PVSF帧起始信息的定位报文用于维持PVSF相对稳定的相位关系。同样,PE节点多业务接入时,根据PVSF时隙分配和映射算法提供多业务的接入。
步骤4:PVSF转发节点(即支持PVSF功能的转发节点)基于当前分组队列和调度机制,设计FVSF时隙队列和时隙化调度功能。简化设计可以每IP子接口设计2个PVSF时隙队列,每个队列最大长度为缓存1个PVSF帧,即最大20*1000Byte=160kbit,队列中能提供不同VSU的索引支持基于VSU的报文调度。当支持PVSF功能的节点接收到携带PVSF帧起始指示的首包,将该包以及后续属于相同PVSF的包放入空闲的时隙队列,如图14所示。
步骤5:支持PVSF功能的节点对于PVSF时隙队列的报文进行时隙化调度,调度可以采用静态配置的时隙交换表进行调度(即入向时隙和出现时隙根据VSU ID静态交换);也可以采用基于IPv6目的地址的路由转发调度;为了简化,本实施例以静态配置的时隙交换表调度为例。调度的结果是在出向仍然维持PVSF帧结构发送,由于静态配置时隙交换表,出向也需要最大缓存一个PVSF帧长,如图15所示。
步骤6:对于可能存在的多路PVSF帧汇聚的场景,出向根据支持的汇聚能力提供多路PVSF汇聚带宽能力,为了区分多路汇聚不同的PVSF帧信号,需要在支持多路PVSF帧汇聚的场景下增加PVSF ID区分不同汇聚的PVSF信号。通过对入向的多路PVSF队列的到达的VSU分组报文按照出向PVSF帧起始信号进行轮流调度,达到多路PVSF帧汇聚时按照VSU分组报文进行间插发送的效果,如图16所示。
(二)基于MPLS及定长分组报文实现确定性转发
针对于MPLS及定长的分组报文,可以通过以下步骤实现确定性转发。
步骤1:PVSF入口节点(即分组业务的感知节点或PVSF边界节点)基 于MPLS分组报文定义PVSF帧结构,每4个PVSF帧组成复帧循环,即该MPLS子接口下的PVSF帧ID取值为0-3。设计PVSF帧结构由4个PVC构成,每个PVSF帧内PVC编号为0-1,每个PVC定义4个VSU,VSU编号为0-3;每个VSU支持1个固定128Byte净荷的分组报文发送,PVSF总分组带宽分配为固定的10Gbps,因此,通过上述PVSF帧结构将保证每个VSU代表10Gbps/(4*4*4)=约156Mbps名义固定带宽,确定性业务可以请求1到多个VSU单位的资源,本实施例展示的固定VSU内分组单元的大小体现出模拟的电路交换特性。承载的确定新业务ID范围取值16bit,可以根据承载业务情况进行灵活分配,其帧结构示意如图17所示。
步骤2:PVSF入口节点基于MPLS标签(Label)设计PVSF开销信息,包括PVSF起始信号指示符2bit(00表示PVSF帧开始,10和01表示帧延续,11表示帧结束);PVSF ID用2bit表示,取值0-3;PVC ID用2bit表示,取值0-3;VSU ID用2bit表示,可用范围为0-3,结合PVSF/PVC ID,可以有64个可区分的VSU单位;为了兼容MPLS Label S指示信息TTL字段,设计15bit表示业务ID,不同的业务可以灵活接入到不同的VSU时隙,其开销设计如图18所示。
步骤3:PVSF入口节点根据PVSF帧结构设计,将MPLS分组报文映射到PVSF帧的虚拟时隙单元。假定分配给PVSF所在IP子接口的保证带宽为10Gbps。PVSF总共可支持64个VSU单位,每个VSU的名义带宽为10Gbps/(4*4*4)=156Mbps,考虑典型MPLS封装开销,固定128Byte净荷会增加约32Byte MPLS封装头,封装效率为80%,每VSU有效接入净荷约125Mbps。确定性业务可以请求1-64个VSU时隙单位,业务与时隙的映射关系可以是静态固定的,比如业务1固定映射到VSU ID=1;也可以是动态的映射,比如业务1请求1个VSU单位,即每PVSF中保证1个VSU机会提供给业务1,当业务1的流量到达,动态分配最近可用的VSU承载该业务,为了简便,本实施例假定采用静态固定映射方案。如果PVSF空闲,则PE 节点可主动插入一个携带PVSF帧起始信息的定位报文用于维持PVSF相对稳定的相位关系。同样,PE节点多业务接入时,根据PVSF时隙分配和映射算法提供多业务的接入。
步骤4:由于采用固定净荷大小即固定VSU分组单元的设计,PE节点增加变长MAC/IP业务映射到固定长度VSU分组单元的分片和重组处理,分片和重组方案可以沿用定长信元分片和重组处理过程。不同的处理在于定长信元分片后仍然保持PVSF帧开销信息,如图19所示。
步骤5:PVSF转发节点(即支持PVSF功能的转发节点)基于当前分组队列和调度机制,设计PVSF时隙队列和时隙化调度功能。简化设计可以每MPLS子接口设计2个PVSF时隙队列,每个队列最大长度为缓存1个PVSF帧,即最大64*160Byte=80kbit,队列中能提供不同VSU的索引支持基于VSU的报文调度。当支持PVSF功能的节点接收到携带PVSF帧起始指示的首包,将该包以及后续属于相同PVSF的包放入空闲的时隙队列。
步骤6:支持PVSF功能的节点对于PVSF时隙队列的报文进行时隙化调度,调度可以采用静态配置的时隙交换表进行调度(即入向时隙和出现时隙根据VSU ID静态交换);也可以采用基于IPv6目的地址的路由转发调度;为了简化,本实施例以静态配置的MPLS标签交换调度为例。调度的结果是在出向仍然维持PVSF帧结构发送,由于静态配置时隙交换表,出向也需要最大缓存一个PVSF帧长,如图20所示。
(三)基于SRv6分组报文实现确定性转发。
针对于SRv6分组报文,可以通过以下步骤实现确定性转发。
步骤1:PVSF入口节点(即分组业务的感知节点或PVSF边界节点)基于SRv6分组报文定义PVSF帧结构,以最简单的场景为例,每1个PVSF单帧循环,即该SRv6子接口下的PVSF帧ID取值固定为0或可以省去不用显式携带。设计PVSF帧结构由1个PVC构成,如果不考虑PVSF多路汇聚,可以不用额外的PVSF帧ID标识,此时PVC标识也固定为0或同样可以省 去不用显式携带,每个PVC定义2个VSU,每个VSU支持一个最长2000Byte的分组报文发送,VSU名义时隙大小为20us。承载的确定新业务ID范围取值16bit,可以根据承载业务情况进行灵活分配,其帧结构示意如图21所示。
步骤2:PVSF入口节点基于SRv6SID设计PVSF开销信息,包括PVSF起始信号指示符2bit(00表示PVSF帧开始,10和01表示帧延续,11表示帧结束);PVSF/PVC ID在本实施例中均省略;VSU ID用1bit表示,可用范围为0-1,因此本实施例只有2个可区分的VSU,体现出大带宽的特点;其开销设计如图22所示。
步骤3:PVSF入口节点根据PVSF帧结构设计,将分组报文映射到虚拟时隙单元。假定分配给PVSF所在SRv6子接口的保证带宽为10Gbps,PVSF总共可支持2个VSU单位,每个VSU的名义带宽为10Gbps/2=5Gbps。为了利用大带宽的特点,设计VSU名义时隙长度为20us用于控制转发的时延和抖动上下界,每VSU可以承载名义25000Byte,尤其适合高突发但是仍然要求时延和抖动有界的确定性业务。
步骤4:PVSF转发节点(即支持PVSF功能的转发节点)基于当前分组队列和调度机制,设计FVSF时隙队列和时隙化调度功能。简化设计可以每IP子接口设计2个PVSF时隙队列,每个队列最大长度为缓存1个PVSF帧,即最大2*25000Byte=400kbit。
步骤5:为了利用SRv6在源节点通过SRH指定转发路径的优势,PVSF转发节点将端到端的时隙化调度信息可以携带在SRH的SID开销信息中进行传递,如图22的开销格式。假定支持PVSF功能的SRv6节点采用SRv6可编程转发模式,则可以扩展一种新的SRv6可编程Function,为了简单区分,成为PVSF Function,该Function的功能为执行时隙化调度转发。首先控制面在该节点本地形成PVSF时隙化调度转发表,即入向的VSU时隙和出现VSU时隙的调度转发关系,然后根据当前接收到的DA中携带的PVSF VSU时隙信息,查找到出向PVSF VSU信息完成时隙转发;同时从SRH中获取下一跳 SRv6 PVSF节点需要处理的SID更新为新的DA,完成SRv6可编程转发处理。
本申请实施例还提供一种网络设备,包括处理器和通信接口,处理器用于实现上述分组报文传输方法100或分组报文传输方法1000,通信接口用于外部设备进行通信。该网络设备实施例与上述网络设备方法实施例对应,上述方法实施例的各个实施过程和实现方式均可适用于该网络侧设备实施例中,且能达到相同的技术效果。
具体地,本申请实施例还提供了一种网络侧设备。如图23所示,该网络侧设备2300包括:处理器2301、网络接口2302和存储器2303。其中,网络接口2302例如为通用公共无线接口(common public radio interface,CPRI)。
具体地,本发明实施例的网络侧设备2300还包括:存储在存储器2303上并可在处理器2301上运行的指令或程序,处理器2301调用存储器2303中的指令或程序执行上述的分组报文传输方法,并达到相同的技术效果,为避免重复,故不在此赘述。
本申请实施例还提供一种可读存储介质,所述可读存储介质上存储有程序或指令,该程序或指令被处理器执行时实现上述分组报文传输方法实施例的各个过程,且能达到相同的技术效果,为避免重复,这里不再赘述。
其中,所述处理器为上述实施例中所述的终端中的处理器。所述可读存储介质,包括计算机可读存储介质,如计算机只读存储器ROM、随机存取存储器RAM、磁碟或者光盘等。
本申请实施例另提供了一种芯片,所述芯片包括处理器和通信接口,所述通信接口和所述处理器耦合,所述处理器用于运行程序或指令,实现上述分组报文传输方法实施例的各个过程,且能达到相同的技术效果,为避免重复,这里不再赘述。
应理解,本申请实施例提到的芯片还可以称为系统级芯片,系统芯片,芯片系统或片上系统芯片等。
本申请实施例另提供了一种计算机程序/程序产品,所述计算机程序/程序产品被存储在存储介质中,所述计算机程序/程序产品被至少一个处理器执行以实现上述分组报文传输方法实施例的各个过程,且能达到相同的技术效果,为避免重复,这里不再赘述。
需要说明的是,在本文中,术语“包括”、“包含”或者其任何其他变体意在涵盖非排他性的包含,从而使得包括一系列要素的过程、方法、物品或者装置不仅包括那些要素,而且还包括没有明确列出的其他要素,或者是还包括为这种过程、方法、物品或者装置所固有的要素。在没有更多限制的情况下,由语句“包括一个……”限定的要素,并不排除在包括该要素的过程、方法、物品或者装置中还存在另外的相同要素。此外,需要指出的是,本申请实施方式中的方法和装置的范围不限按示出或讨论的顺序来执行功能,还可包括根据所涉及的功能按基本同时的方式或按相反的顺序来执行功能,例如,可以按不同于所描述的次序来执行所描述的方法,并且还可以添加、省去、或组合各种步骤。另外,参照某些示例所描述的特征可在其他示例中被组合。
通过以上的实施方式的描述,本领域的技术人员可以清楚地了解到上述实施例方法可借助软件加必需的通用硬件平台的方式来实现,当然也可以通过硬件,但很多情况下前者是更佳的实施方式。基于这样的理解,本申请的技术方案本质上或者说对现有技术做出贡献的部分可以以计算机软件产品的形式体现出来,该计算机软件产品存储在一个存储介质(如ROM/RAM、磁碟、光盘)中,包括若干指令用以使得一台终端(可以是手机,计算机,服务器,空调器,或者网络设备等)执行本申请各个实施例所述的方法。
上面结合附图对本申请的实施例进行了描述,但是本申请并不局限于上述的具体实施方式,上述的具体实施方式仅仅是示意性的,而不是限制性的,本领域的普通技术人员在本申请的启示下,在不脱离本申请宗旨和权利要求所保护的范围情况下,还可做出很多形式,均属于本申请的保护之内。

Claims (26)

  1. 一种分组报文传输方法,由第一网络设备执行,所述方法包括:
    获取待发送的至少一个分组报文;
    将所述至少一个分组报文配置到第一分组虚拟时隙帧发送。
  2. 根据权利要求1所述的方法,其中,所述第一分组虚拟时隙帧包括至少一个采用时分复用结构的分组虚拟容器,所述分组虚拟容器包括至少一个虚拟时隙单元,所述虚拟时隙单元包括至少一个最小时隙单元,一个所述最小时隙单元用于承载所述至少一个分组报文的一个分组信息单元。
  3. 根据权利要求2所述的方法,其中,将所述至少一个分组报文配置到第一分组虚拟时隙帧发送,包括:
    将所述至少一个分组报文静态或动态配置到与所述至少一个分组报文的业务相关信息对应的目标虚拟时隙单元发送,其中,所述目标虚拟时隙单元为所述第一分组虚拟时隙帧的一个虚拟时隙单元,所述业务相关信息包括以下至少之一:业务类型、业务标识、业务优先级、业务所需带宽、时延、抖动、丢包率。
  4. 根据权利要求2所述的方法,其中,将所述至少一个分组报文配置到第一分组虚拟时隙帧发送,包括:
    将所述至少一个分组报文配置到所述第一分组虚拟时隙帧的目标虚拟时隙单元发送,其中,所述目标虚拟时隙单元为最近的可用的待发送虚拟时隙单元。
  5. 根据权利要求3或4所述的方法,其中,将所述至少一个分组报文配置到第一分组虚拟时隙帧发送,包括:
    将所述目标虚拟时隙单元的时隙开销携带在所述至少一个分组报文的预定位置,其中,所述时隙开销包括所述第一分组虚拟时隙帧的帧开销信息,所述帧开销信息包括:帧结构指示符和帧标识符;
    将携带有所述时隙开销的至少一个分组报文配置在所述目标虚拟时隙单 元中发送。
  6. 根据权利要求5所述的方法,其中,所述帧开销信息还包括以下至少之一:所述至少一个分组报文的业务相关信息、维护诊断信息、管理信息、所述目标虚拟时隙单元所属的分组虚拟容器的指示信息、所述目标虚拟时隙单元的指示信息。
  7. 根据权利要求5所述的方法,其中,将携带有所述时隙开销的至少一个分组报文配置在所述目标虚拟时隙单元中发送,包括:
    在所述至少一个分组报文中的一个分组报文的业务数据中增加所述时隙开销后形成的第一报文,将所述第一报文配置到所述目标虚拟时隙单元的一个最小时隙单元中发送;或者,
    将所述至少一个分组报文的业务数据按照预定长度进行转换后形成的第二报文,将所述第二报文配置到所述目标虚拟时隙单元的一个最小时隙单元中发送,所述预定长度为不超过一个最小时隙单元所能承载的数据大小。
  8. 根据权利要求3所述的方法,其中,在将所述至少一个分组报文配置到与所述至少一个分组报文的属性信息对应的目标虚拟时隙单元发送之前,所述方法还包括:
    根据预设信息,确定以下至少之一:
    所述第一分组虚拟时隙帧的结构;
    分组报文与所述第一分组虚拟时隙帧的虚拟时隙单元的映射关系,所述映射关系包括分组报文的业务相关信息与虚拟时隙单元的对应关系;
    所述第一分组虚拟时隙帧的发送方式,所述第一分组虚拟时隙帧的发送方式包括:连续发送的分组虚拟时隙帧的数量n、两组连续发送的n个分组虚拟时隙帧之间的间隔;
    其中,所述预设信息包括以下至少之一:所述第一网络设备的能力信息、承载的业务的需求、网络环境。
  9. 根据权利要求3或4所述的方法,其中,所述获取待发送的至少一个 分组报文,包括:
    感知接入所述第一网络设备的分组业务的所述至少一个分组报文;或者,
    接收第二网络设备发送的分组业务流,从所述分组业务流中获取所述至少一个分组报文。
  10. 根据权利要求9所述的方法,其中,将所述至少一个分组报文配置到第一分组虚拟时隙帧发送,包括:
    将所述至少一个分组报文缓存到时隙队列,其中,所述时隙队列的结构与所述第一分组虚拟时隙帧的结构对应;
    在所述时隙队列缓存的分组报文的数量达到第一门限的情况下,获取所述时隙队列中的分组报文,将获取的所述分组报文配置在所述第一分组虚拟时隙帧中发送。
  11. 根据权利要求10所述的方法,其中,所述第一门限根据第一时延和第一抖动确定,其中,所述第一时延为所述至少一个分组报文的时延中的最大值,所述第一抖动为所述至少一个分组报文的抖动中的最大值。
  12. 根据权利要求2所述的方法,其中,所述获取待发送的至少一个分组报文,包括:
    接收到第二分组虚拟时隙帧的帧起始指示符或代表第二分组虚拟时隙帧的帧头位置的第一分组报文;
    从所述帧起始指示符或所述第一分组报文开始,将接收到的分组报文缓存到空闲的时隙队列,直至满足预设条件,其中,所述预设条件包括以下之一:接收到正确的帧结束指示符;所述时隙队列为满;接收到预期代表新的帧的帧头位置的第二分组报文;接收到预期新的帧的帧起始指示符;所述时隙队列的结构与所述第二分组虚拟时隙帧的结构相应。
  13. 根据权利要求12所述的方法,其中,在接收到第二分组虚拟时隙帧的帧起始指示符或代表第二分组虚拟时隙帧的帧头位置的第一分组报文之后,所述方法还包括:
    根据所述第二分组虚拟时隙帧的帧头位置的第一到达时刻,预估所述第二分组虚拟时隙帧后的下一个分组虚拟时隙帧的帧头位置的第二到达时刻,并在所述第二到达时刻的预设范围内,检测所述下一个分组虚拟时隙帧的帧头位置的分组报文。
  14. 根据权利要求13所述的方法,其中,在预估所述第二分组虚拟时隙帧后的下一个分组虚拟时隙帧的帧头位置的第二到达时刻之后,所述方法还包括:
    在所述第二到达时刻的预设范围之内未检测到下一个分组虚拟时隙帧的帧头位置的分组报文到达的情况下,或者,在所述第二到达时刻的预设范围之外,检测到所述下一个分组虚拟时隙帧的帧头位置的分组报文到达的情况下,确定所述下一个分组虚拟时隙帧出现异常。
  15. 根据权利要求13所述的方法,其中,所述时隙队列包括:分组虚拟时隙帧队列;所述时隙队列还包括以下之一:从属于所述分组虚拟时隙帧队列的至少一个分组虚拟容器队列、从属于所述分组虚拟容器队列的至少一个虚拟时隙单元队列,其中,不同的分组虚拟容器承载的分组报文缓存到对应的分组虚拟容器队列,不同的虚拟时隙单元承载的分组报文缓存到对应的虚拟时隙单元队列。
  16. 根据权利要求12所述的方法,其中,所述将接收到的分组报文缓存到空闲的时隙队列,包括:
    根据接收到的分组报文携带的帧开销信息,将接收到的所述分组报文缓存到所述时隙队列中与所述帧开销信息对应位置。
  17. 根据权利要求12所述的方法,其中,将所述至少一个分组报文配置到第一分组虚拟时隙帧发送,包括:
    在所述时隙队列中缓存的分组报文的数量达到第二门限的情况下,调度所述时隙队列中的分组报文至所述第一分组虚拟时隙帧发送。
  18. 根据权利要求17所述的方法,其中,所述第二门限小于或等于预设 值,所述预设值为一个所述第一分组虚拟时隙帧的帧长或一个所述第一分组虚拟时隙帧允许的最大字节总数。
  19. 根据权利要求17所述的方法,其中,所述第一分组虚拟时隙帧的第一帧起始位置与所述第二分组虚拟时隙帧的第二帧起始位置之间的相位差不大于预设阈值。
  20. 根据权利要求17所述的方法,其中,所述调度所述时隙队列中分组报文至所述第一分组虚拟时隙帧发送,包括:
    根据所述时隙队列中的分组报文所属的第一分组虚拟容器和/或第一分组时隙单元,将所述分组报文调度到所述第一分组虚拟时隙帧的第二分组虚拟容器和/或第二分组时隙单元,其中,所述第一分组虚拟容器和/或第一组分时隙单元与所述第二分组虚拟容器和/或第二分组时隙单元具有交换关系;或者,
    按照不同目标分组报文在所述第二分组虚拟时隙帧中的相对位置,将所述时隙队列中的目标分组报文调度至所述第一分组虚拟时隙帧发送,其中,不同目标分组报文之间在所述第一分组虚拟时隙帧中的相对位置与在所述第二分组虚拟时隙帧中的相对位置相同,不同所述目标分组报文的目的地址相同。
  21. 根据权利要求17所述的方法,其中,所述第二分组虚拟时隙帧和所述第一分组虚拟时隙帧与相同的同步基准对齐,或者所述第二分组虚拟时隙帧与所述第一分组虚拟时隙帧之间存在稳定的相位偏差。
  22. 根据权利要求17所述的方法,其中,所述调度所述时隙队列中分组报文至所述第一分组虚拟时隙帧发送,包括:
    在多个所述时隙队列中缓存的分组报文的数量达到第二门限的情况下,采用间插轮转的调度方式,将多个所述时隙队列中的分组报文调度至所述第一分组虚拟时隙帧发送,其中,所述间插轮转的调度方式是在依次调度各个所述时隙队列缓存的第一个缓存单元,再依次调度各个所述时隙队列缓存的 下一个缓存单元,直至调度完各个所述时隙队列中缓存的缓存单元,一个缓存单元缓存一个虚拟时隙单元承载的分组报文。
  23. 根据权利要求17所述的方法,其中,在调度所述时隙队列中的分组报文至第一分组虚拟时隙帧发送的出口速率与缓存分组报文至所述时隙队列的入口速率不同的情况下,所述第一分组虚拟时隙帧的各个虚拟时隙单元的大小与所述第二分组虚拟时隙帧的大小不完全相同,且所述第一分组虚拟时隙帧的帧结构与所述第二分组虚拟时隙帧的帧结构相同,所述第一分组虚拟时隙帧允许的分组报文数或字节总数与所述第二分组虚拟时隙帧允许的分组报文数或字节总数相同。
  24. 一种分组报文传输方法,由第三网络设备执行,所述方法包括:
    接收第一网络设备通过第一分组虚拟时隙帧发送的至少一个分组报文;
    基于所述至少一个分组报文的到达时间,以分组业务流的方式发送接收到的所述至少一个分组报文。
  25. 一种网络设备,包括处理器和存储器,所述存储器存储可在所述处理器上运行的程序或指令,所述程序或指令被所述处理器执行时实现如权利要求1至24任一项所述的分组报文传输方法的步骤。
  26. 一种可读存储介质,所述可读存储介质上存储程序或指令,所述程序或指令被处理器执行时实现如权利要求1至24任一项所述的分组报文传输方法的步骤。
PCT/CN2023/108898 2022-09-05 2023-07-24 分组报文传输方法、网络设备及可读存储介质 WO2024051367A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202211079174.2 2022-09-05
CN202211079174.2A CN117692402A (zh) 2022-09-05 2022-09-05 分组报文传输方法、网络设备及可读存储介质

Publications (1)

Publication Number Publication Date
WO2024051367A1 true WO2024051367A1 (zh) 2024-03-14

Family

ID=90128817

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2023/108898 WO2024051367A1 (zh) 2022-09-05 2023-07-24 分组报文传输方法、网络设备及可读存储介质

Country Status (2)

Country Link
CN (1) CN117692402A (zh)
WO (1) WO2024051367A1 (zh)


Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101253741A (zh) * 2005-07-22 2008-08-27 恩尼格玛半导体有限公司 交换设备中的高效报文交换
US20140355594A1 (en) * 2013-05-31 2014-12-04 Hughes Network Systems, Llc Virtual data framing for data packet transmission over channels of a communications network
US20190230678A1 (en) * 2017-11-22 2019-07-25 Pusan National University Industry-University Cooperation Foundation Scheduling apparatus and method using virtual slot frame in industrial wireless network
CN111919492A (zh) * 2018-04-04 2020-11-10 Abb瑞士股份有限公司 工业无线网络中的信道访问

Also Published As

Publication number Publication date
CN117692402A (zh) 2024-03-12

Similar Documents

Publication Publication Date Title
US10554542B2 (en) Label distribution method and device
WO2021180073A1 (zh) 报文传输方法、装置、网络节点及存储介质
US20190116126A1 (en) Network Congestion Control Method, Device, and System
US6907042B1 (en) Packet processing device
EP1585261B1 (en) Apparatus and method for processing labeled flows in a communications access network
WO2021098807A1 (zh) 一种更新映射关系的方法以及装置
US20210211937A1 (en) Data Transmission Method, Apparatus, and System
WO2021036962A1 (zh) 一种业务报文传输的方法及设备
WO2023283902A1 (zh) 一种报文传输方法及装置
JP2003518817A (ja) パケットスケジューリングを用いるネットワーク交換方法
US20210359931A1 (en) Packet Scheduling Method, Scheduler, Network Device, and Network System
EP4152703A1 (en) Network control method and device
WO2012155867A1 (zh) 一种报文发送方法及接入控制器
CN107770085B (zh) 一种网络负载均衡方法、设备及系统
WO2011140945A1 (zh) 一种业务数据传输方法及装置
CN101848168B (zh) 基于目的mac地址的流量控制方法、系统及设备
WO2009152711A1 (zh) 实现优先级互通的方法及装置
WO2012159461A1 (zh) 一种二层路径最大传输单元发现方法和节点
US8135005B2 (en) Communication control system, communication control method, routing controller and router suitably used for the same
WO2021208529A1 (zh) 端口资源预留方法、电子设备及存储介质
WO2024051367A1 (zh) 分组报文传输方法、网络设备及可读存储介质
WO2022117047A1 (zh) 通信方法及装置
KR20170041036A (ko) 효율적 데이터 전송을 위한 네트워크 시스템 및 데이터 전송 방법
JP2004241835A (ja) 品質保証型データストリームを転送するための受付判定方法、閉域ip網、そのプログラム
US20140376443A1 (en) Method and apparatus for providing voice service in communication system