WO2023151483A1 - Data frame processing method and device

Data frame processing method and device

Info

Publication number
WO2023151483A1
Authority
WO
WIPO (PCT)
Prior art keywords: pbs, shared, group, data, frame
Application number
PCT/CN2023/073972
Other languages
English (en)
French (fr)
Inventor
刘翔
苏伟
郑述乾
Original Assignee
华为技术有限公司
Priority claimed from CN202210429110.4A (published as CN116633482A)
Application filed by 华为技术有限公司
Publication of WO2023151483A1

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04Q SELECTING
    • H04Q 11/00 Selecting arrangements for multiplex systems

Definitions

  • the present application relates to the field of optical transport networks, and in particular to a data frame processing method and device.
  • as a core technology of backbone bearer networks, the optical transport network (OTN) includes optical bearer containers of various rates.
  • the optical data unit 0 (optical data unit 0, ODU0) frame is the lowest-rate bearer container in the current OTN technology; its rate is about 1.25 gigabits per second (Gbps), and it is used to carry 1 Gbps Ethernet service data.
  • the optical bearer containers of the current OTN adopt time-division multiplexing. Specifically, a high-rate bearer container is divided into multiple fixed-size payload blocks (payload blocks, PBs) to carry multiple services.
  • the present application provides a data frame processing method and device, which can reduce processing delay or improve transmission efficiency by allowing multiple services to share one PB.
  • the first aspect of the present application provides a data frame processing method.
  • the data frame processing method can be applied to a data frame processing device.
  • the data frame processing device may be an OTN device or a metro transport network (metro transport network, MTN) device or the like.
  • the OTN equipment will be used as an example for description later.
  • the processing method of the data frame includes the following steps: the OTN equipment acquires multiple service data.
  • the OTN equipment maps multiple service data into multiple service frames respectively.
  • the service frame can be an OSU frame or other data frames similar to the OSU frame structure.
  • the OTN device maps multiple service frames to M groups of payload blocks (payload block, PB) in the payload area of N data frames. Wherein, each group of PBs may also be referred to as a P frame.
  • the data frame may be an OTN frame, a flexible Ethernet (flexible ethernet, FlexE) frame or an MTN frame.
  • M and N are integers greater than 0.
  • Each group of PBs includes R × C PBs. R and C are integers greater than 1.
  • Each PB is S1 bytes in size. The size of each of the payload areas occupied by the PBs in the N data frames is S2 bytes.
  • M × R × C × S1 = N × S2.
  • Each group of PBs includes C1 groups of shared PBs.
  • C1 is a positive integer less than or equal to C.
  • Each set of shared PBs includes R1 shared PBs.
  • R1 is a positive integer less than or equal to R.
  • Each of the R1 shared PBs is used to carry data of multiple services. The number of the multiple services is greater than 1 and less than or equal to R. The OTN equipment then sends the N data frames.
  • by allowing multiple services to share one PB, the data needed to fill one PB can be collected in advance, thereby reducing processing delay or improving transmission efficiency.
  • the total size of the M P frames is the same as that of the N payload areas, that is, N × S2 bytes. Therefore, when the OTN device performs periodic data processing with N data frames as the target period, the PB division in the 1st data frame and the (N+1)th data frame may be the same.
  • the PB at the same position in the first data frame and the N+1th data frame carries the data of the same group of services.
  • the OTN device can verify the accuracy of the data frame according to this feature. Therefore, the present application can improve the reliability of processing data frames.
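  • The following is a minimal Python sketch, not part of the patent disclosure, that illustrates the structural constraints stated above; the class and field names are illustrative assumptions, and the example values are taken from one of the embodiments described later (FIG. 5).

```python
from dataclasses import dataclass

@dataclass
class PFrameLayout:
    """Illustrative parameters of the M groups of PBs carried in N data frames."""
    N: int    # number of data frames in the target period
    M: int    # number of groups of PBs (P frames)
    R: int    # rows of PBs per group
    C: int    # columns of PBs per group
    S1: int   # size of one PB, in bytes
    S2: int   # size of one payload area, in bytes
    C1: int   # groups of shared PBs per group of PBs
    R1: int   # shared PBs per group of shared PBs

    def validate(self) -> None:
        # The M groups of PBs must exactly fill the N payload areas.
        assert self.M * self.R * self.C * self.S1 == self.N * self.S2
        assert 1 <= self.C1 <= self.C
        assert 1 <= self.R1 <= self.R

# Example values for the embodiment where S2 = 4 x 3808 bytes.
layout = PFrameLayout(N=180, M=119, R=12, C=10, S1=192, S2=4 * 3808, C1=4, R1=12)
layout.validate()
```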
  • each group of PBs further includes C2 groups of exclusive PBs.
  • the sum of C1 and C2 equals C.
  • Each group of exclusive PBs includes R exclusive PBs. The R exclusive PBs carry data belonging to the same service.
  • the M groups of PBs include a P1 group of PBs and a P2 group of PBs.
  • P1 and P2 are integers greater than 0.
  • the sum of P1 and P2 is equal to M.
  • Each group of PBs in the P1 groups of PBs includes C3 groups of overhead PBs, C4 groups of exclusive PBs and C1 groups of shared PBs.
  • Each of the C4 groups of exclusive PBs includes R exclusive PBs.
  • Each group of exclusive PBs is used to carry data belonging to the same service.
  • Each of the C3 groups of overhead PBs includes R overhead PBs.
  • the R overhead PBs are used to carry information related to the multiple service data.
  • C3 and C4 are integers greater than 0.
  • Each group of PBs in the P2 groups of PBs includes C2 groups of exclusive PBs in addition to the C1 groups of shared PBs.
  • the sum of C1 and C2 equals C.
  • Each group of exclusive PBs includes R exclusive PBs.
  • T = (P1 × C3) / (M × C).
  • the value range of T is between 0.001 and 0.1.
  • T can be equal to 0.001 or 0.1.
  • the transmission rate of each piece of the multiple service data is less than 11 megabits per second (million bits per second, Mbps). And/or, the transmission rate of each of the R exclusive PBs is greater than 100 Mbps.
  • K is the least common multiple of K1 and S2, where K1 = R × C × S1 is the size of one group of PBs in bytes.
  • when K is the least common multiple of K1 and S2, the values of N and M are the smallest.
  • accordingly, the target period is the minimum.
  • the shorter the target period, the faster the OTN equipment can adjust, add or delete the transmitted services. Therefore, the present application can improve the dynamic adjustment capability of processing data frames.
  • the N data frames include a shared identifier.
  • the shared identifier is used to mark the C1 groups of shared PBs.
  • the receiver can receive N data frames.
  • the receiving end can process the exclusive PB and the shared PB in different ways according to the shared identifier. For example, for an exclusive PB, the receiving end may directly map the exclusive PB into an optical service unit (optical service unit, OSU) frame.
  • for a shared PB, the receiving end processes it together with the remaining R1-1 shared PBs of the same group.
  • the receiving end combines the data in the R1 shared PBs.
  • the receiving end maps the combined data into multiple OSU frames of multiple services. Therefore, by adding the shared identifier, the reliability of the transmitted data frame can be improved.
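  • As a non-normative illustration of this receive-side behavior, the following Python sketch (function name, arguments and data layout are assumptions) distinguishes exclusive and shared PBs by the shared identifier, collects the R1 shared PBs of a group, and reassembles one piece of per-service data from each sub-PB position.

```python
def demap_group(pbs, shared_ids, R1, sub_pb_size):
    """pbs: list of PB byte strings in receive order.
    shared_ids: set of PB indexes marked as shared by the shared identifier."""
    osu_frames = []          # exclusive PBs map directly to OSU frames
    shared_buffer = []       # shared PBs are collected until R1 are available
    for idx, pb in enumerate(pbs):
        if idx not in shared_ids:
            osu_frames.append(pb)            # one exclusive PB -> one OSU frame
        else:
            shared_buffer.append(pb)
            if len(shared_buffer) == R1:
                # Combine the same sub-PB position of the R1 shared PBs into
                # one piece of per-service data (one OSU frame per service).
                n_sub = len(shared_buffer[0]) // sub_pb_size
                for s in range(n_sub):
                    combined = b"".join(
                        pb[s * sub_pb_size:(s + 1) * sub_pb_size]
                        for pb in shared_buffer
                    )
                    osu_frames.append(combined)
                shared_buffer.clear()
    return osu_frames

# Example: 2 exclusive PBs followed by 12 shared PBs of 192 bytes (R1 = 12).
pbs = [bytes(192)] * 14
frames = demap_group(pbs, shared_ids=set(range(2, 14)), R1=12, sub_pb_size=16)
assert len(frames) == 2 + 12   # 2 exclusive OSU frames + 12 shared-service frames
```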
  • the N data frames are OTN frames.
  • S2 is 4 × 3808, and R1 is equal to R.
  • R is 12.
  • C is 10.
  • the size S1 of each PB is 192.
  • M is 119.
  • N is 180.
  • R is an integer multiple of 17.
  • C is 7.
  • R is 17.
  • the size S1 of each PB is 192.
  • M is 2.
  • N is 3.
  • R is 34.
  • the size S1 of each PB is 192.
  • M is 1.
  • N is 3.
  • R is 68.
  • the size S1 of each PB is 192.
  • M is 1.
  • N is 6.
  • the N data frames are OTN frames.
  • the size of each payload area is 4 × 3808 bytes.
  • S2 is 15168 bytes.
  • Each payload area also includes a 64-byte overhead field. Sharing a PB among multiple services increases the difficulty of PB management; therefore, adding the overhead field can improve the reliability of processing data frames.
  • R1 is equal to the R.
  • R is 12.
  • C is 10.
  • the size S1 of each PB is 192.
  • M is 79.
  • N is 120.
  • the overhead field includes identifiers of multiple services. Multiple data of multiple services are carried in the shared PB. Therefore, the OTN device can associate multiple data through identifiers of multiple services, thereby improving the reliability of processing data frames.
  • S = S1 / R.
  • S is equal to 8.
  • the size of one sub-PB is 64 bits.
  • the size of a sub-PB is equal to the size of a data block in 64b/66b encoding in Ethernet service transmission. Therefore, the P frame provided in this application adapts to the transmission of Ethernet services, thereby reducing service delay and processing complexity.
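  • A short worked check of this relation, assuming the values given just below (S1 = 192, R = 24); this snippet is only an illustration of the arithmetic.

```python
S1 = 192          # PB size in bytes
R = 24            # shared PBs per group
S = S1 // R       # sub-PB size in bytes
assert S == 8 and S * 8 == 64   # 8 bytes = 64 bits, matching a 64b/66b data block
```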
  • S2 is 4 × 3808.
  • S1 equals 192.
  • R1 is equal to R.
  • R is 24.
  • C is 12.
  • S2 is 4 × 3808.
  • S1 equals 192.
  • R1 is equal to R.
  • R is 24.
  • C is 10.
  • S2 is 4 × 3808.
  • S1 equals 240.
  • R1 is equal to R.
  • R is 30.
  • C is 12.
  • the difference between R1 and R is X.
  • Each set of shared PBs also includes X overhead PBs.
  • X is an integer greater than 0.
  • each of the R1 shared PBs further includes a management field.
  • the management field is used for operation, administration and maintenance (OAM). Allowing multiple services to share a shared PB increases the difficulty of managing the shared PB; therefore, adding a management field in the shared PB can improve the reliability of processing data frames.
  • the size of the management field is the same as the size of each piece of data of the multiple services.
  • this size of the management field makes it convenient for the OTN equipment to manage the shared PB.
  • the second aspect of the present application provides a data frame processing device.
  • the data frame processing device includes a processor and a transceiver.
  • the processor is configured to execute the method described in the foregoing first aspect or any one of the implementation manners of the first aspect to obtain N data frames.
  • the transceiver is used to send N data frames.
  • the third aspect of the present application provides a data frame processing method.
  • the data frame processing method can be applied to a data frame processing device.
  • the data frame processing device may be an OTN device or other devices.
  • the OTN equipment will be used as an example for description later.
  • the data frame processing method includes the following steps: the OTN equipment obtains N data frames.
  • the payload area of N data frames includes M groups of PBs. M and N are integers greater than 0.
  • Each group of PBs includes R × C PBs. R and C are integers greater than 1.
  • Each PB is S1 bytes in size.
  • the size of each of the payload areas occupied by the PBs in the N OPU frames is S2 bytes.
  • M × R × C × S1 = N × S2.
  • Each group of PBs includes C1 groups of shared PBs.
  • C1 is a positive integer less than or equal to C.
  • Each set of shared PBs includes R1 shared PBs.
  • R1 is a positive integer less than or equal to R.
  • Each of the R1 shared PBs is used to carry data of multiple services. The number of the multiple services is greater than 1 and less than or equal to R.
  • the OTN equipment maps M groups of PBs in N data frames to multiple service frames.
  • the fourth aspect of the present application provides a data frame processing device.
  • the data frame processing device includes a processor and a transceiver.
  • the transceiver is used to receive N data frames.
  • the processor is configured to execute the method described in the aforementioned third aspect to obtain multiple service frames.
  • the fifth aspect of the present application provides a computer storage medium, which is characterized in that instructions are stored in the computer storage medium, and when the instructions are executed on a computer, the computer executes the method according to the first aspect or any one of the implementation manners of the first aspect.
  • the sixth aspect of the present application provides a computer program product, which is characterized in that, when the computer program product is executed on a computer, the computer executes the method described in the first aspect or any implementation manner of the first aspect; or the computer is made to execute the method described in the third aspect.
  • FIG. 1 is a schematic structural diagram of the OTN provided by the present application.
  • FIG. 2 is a schematic structural diagram of the OTN equipment provided by the present application.
  • FIG. 3 is a schematic diagram of mapping an OSU frame into an OTN frame provided by the present application.
  • FIG. 4 is a first schematic flowchart of a data frame processing method provided in an embodiment of the present application.
  • FIG. 5 is a first structural schematic diagram of a P frame provided by an embodiment of the present application.
  • FIG. 6 is a schematic diagram of the first structure of the exclusive PB and the shared PB provided by the embodiment of the present application;
  • FIG. 7 is a first schematic structural diagram of mapping an OSU frame to a sub-PB provided by an embodiment of the present application.
  • FIG. 8 is a second schematic structural diagram of a P frame provided by an embodiment of the present application.
  • FIG. 9 is a second structural schematic diagram of an exclusive PB and a shared PB provided by the embodiment of the present application.
  • FIG. 10 is a second schematic structural diagram of mapping an OSU frame into a sub-PB provided by an embodiment of the present application.
  • FIG. 11 is a third schematic structural diagram of a P frame provided by an embodiment of the present application.
  • FIG. 12 is a schematic structural diagram of the payload area provided by the embodiment of the present application.
  • FIG. 13 is a fourth schematic structural diagram of a P frame provided by an embodiment of the present application.
  • FIG. 14 is a schematic structural diagram of a P frame including an overhead PB column provided by an embodiment of the present application.
  • FIG. 15 is a schematic structural diagram of a shared PB provided by an embodiment of the present application.
  • FIG. 16 is a second schematic flowchart of a data frame processing method provided by an embodiment of the present application.
  • FIG. 17 is a schematic structural diagram of a data frame processing device provided by an embodiment of the present application.
  • a plurality refers to two or more.
  • “And/or” describes the association relationship of associated objects, and there may be three kinds of relationships.
  • A and/or B may mean: A exists alone, both A and B exist, or B exists alone.
  • terms such as “first”, “second”, “exclusive” and “shared” are only used for the purpose of distinguishing the description, and cannot be understood as indicating or implying relative importance, nor can they be understood as indicating or implying a sequence. Distinguishing terms such as “exclusive” and “shared” may also be replaced by “first” or “second”.
  • mapping an optical service unit (OSU) frame into an OTN frame refers to encapsulating an OSU frame or an OSU signal into an OTN frame.
  • FIG. 1 is a schematic structural diagram of the OTN provided by the present application.
  • the OTN 100 is composed of eight OTN devices 101, that is, OTN devices A-H.
  • 102 indicates an optical fiber for connecting two devices.
  • 103 indicates a customer service interface for receiving or sending customer service data.
  • OTN 100 is used to transmit service data for client equipment 1-3.
  • the customer equipment is connected to the OTN equipment through the customer service interface.
  • client devices 1-3 are connected to OTN devices A, H and F respectively.
  • an OTN device may have different functions.
  • OTN equipment is divided into optical layer equipment, electrical layer equipment, and optoelectronic hybrid equipment.
  • Optical layer equipment refers to equipment capable of processing optical layer signals, such as: optical amplifier (optical amplifier, OA), optical add-drop multiplexer (optical add-drop multiplexer, OADM).
  • the OA can also be called an optical line amplifier (OLA), which is mainly used to amplify the optical signal to support the transmission of a longer distance under the premise of ensuring the specific performance of the optical signal.
  • the OADM is used to spatially switch the optical signal so that it can be output from different output ports (sometimes also called directions).
  • Electrical-layer devices refer to devices that can process electrical-layer signals, for example, devices that can process OTN signals.
  • Optoelectronic hybrid equipment refers to equipment capable of processing both optical layer signals and electrical layer signals. It should be noted that, according to specific integration requirements, an OTN device can integrate multiple different functions. The technical solutions provided by this application are applicable to OTN devices with different forms and integration levels including electrical layer functions.
  • the data frame structure used by the optical transmission device in the embodiment of the present application may be an OTN frame.
  • OTN frames are used to carry various service data and provide rich management and monitoring functions.
  • the OTN frame can be an optical data unit frame (optical data unit k, ODUk), ODUCn, ODUflex, or an optical channel transmission unit k (optical transport unit k, OTUk), OTUCn, or a flexible OTN (FlexO) frame, etc.
  • the difference between the ODU frame and the OTU frame is that the OTU frame includes the ODU frame and the OTU overhead.
  • Cn represents a variable rate, specifically a rate that is a positive integer multiple of 100 Gbps.
  • the ODU frame refers to any one of ODUk, ODUCn or ODUflex
  • the OTU frame refers to any one of OTUk, OTUCn or FlexO. It should also be pointed out that with the development of optical transport network technology, new types of OTN frames may be defined, which are also applicable to this application. In addition, the method disclosed in this application can also be applied to other OTN frames such as FlexE frames.
  • FIG. 2 is a schematic structural diagram of an OTN device provided by the present application.
  • the OTN device 200 may be any of the OTN devices A-H in FIG. 1 .
  • the OTN device 200 includes a tributary board 201 , a cross-connect board 202 , a line board 203 , an optical layer processing board (not shown in the figure), and a system control and communication board 204 .
  • the tributary board 201, the cross-connect board 202 and the line board 203 are used for processing electrical layer signals.
  • the tributary board 201 is used to realize the receiving and sending of various customer services, such as SDH service, packet service, Ethernet service and/or fronthaul service and so on.
  • the tributary board 201 may be divided into a client-side optical transceiver module and a signal processor.
  • the client-side optical transceiver module may also be called an optical transceiver, and is used for receiving and/or sending service data.
  • the signal processor is used to realize the mapping and de-mapping processing of business data to data frames.
  • the cross-connect board 202 is used to realize the exchange of data frames, and complete the exchange of one or more types of data frames.
  • the line board 203 mainly implements the processing of data frames on the line side. Specifically, the line board 203 can be divided into a line-side optical module and a signal processor. Wherein, the line-side optical module may be called an optical transceiver, and is used for receiving and/or sending data frames.
  • the signal processor is used to implement multiplexing and demultiplexing, or mapping and demapping processing of data frames on the line side.
  • the system control and communication board 204 is used to implement system control. Specifically, information may be collected from different boards, or control instructions may be sent to corresponding boards.
  • FIG. 2 is only an example of the OTN equipment provided in this application. According to specific requirements, the type and number of boards included in the OTN equipment may be different. For example, an OTN device serving as a core node does not have a tributary board 201 . For another example, an OTN device serving as an edge node has multiple tributary boards 201 or no optical cross-connect board 202 . For another example, an OTN device that only supports electrical layer functions may not have an optical layer processing board.
  • the present application uses the OTN as an example to describe the method provided in the present application.
  • the service frame can be an OSU frame.
  • the data frame can be an OTN frame or an OPU frame.
  • the process of mapping the OSU frame to the OTN frame by the OTN device is described below as an example.
  • FIG. 3 is a schematic diagram of mapping an OSU frame to an OTN frame provided in the present application.
  • an OTN frame 302 is a schematic representation of an optical transport network frame.
  • the OTN frame 302 has a structure of 4 rows and multiple columns.
  • the OTN frame 302 includes an overhead area, a payload area and a forward error correction (forward error correction, FEC) area.
  • the payload area is divided into multiple payload blocks (payload blocks, PB).
  • Each PB occupies a position of a fixed length (also referred to as a size) in the payload area, for example, 192 or 128 bytes.
  • OTN frame 302 is only one example. Other deformed OTN frames are also suitable for this application.
  • an OTN frame that does not contain an FEC area.
  • the frame structure has a different number of rows and columns than the OTN frame 302 .
  • a PB may also be called a time slot, a time slot block, or a time slice. This application is not bound by its name.
  • the payload area of the OTN frame may not be divided into an integer number of PBs. At this time, a part of some PBs is in the payload area of one OTN frame, and another part of the PBs is in the payload area of another OTN frame.
  • OSU frame 301 includes an overhead area and a payload area.
  • the overhead area of the OSU frame 301 is used to carry overhead information.
  • the overhead information may include a service identifier (service identifier, SID), a trail trace indicator (trail trace identifier, TTI) or a bit-interleaved parity (bit-interleaved parity, BIP), etc.
  • the payload area of the OSU frame 301 is used to carry service data.
  • the rate of an OSU frame is defined as an integer multiple of the base rate. Wherein, the base rate may be 2.6 Mbps, 5.2 Mbps or 10.4 Mbps, or multiples of the aforementioned values. It should be understood that the OSU frame structure shown in FIG. 3 is just an example. In other specific implementations, the OSU frame may also be a data structure including overhead subframes and payload subframes. In this regard, this application does not make a limitation.
  • the OSU frame is mapped to the payload area of the OTN frame. Specifically, OSU frames are mapped into PBs of OTN frames. In one possible implementation, one OSU frame is mapped into one PB. In another possible implementation, one OSU frame is mapped into multiple PBs. In this regard, this application does not make a limitation. To simplify the description, the following embodiments take the mapping of one OSU frame into one PB as an example. It should be understood that the subsequent embodiments are also applicable to the case where one OSU frame is mapped to multiple PBs. The modification of the technical solution for the latter also belongs to the protection scope of the present application.
  • a certain number of PBs in an OTN frame is defined as a transmission period.
  • PBs are allocated to OSU frames with the transmission cycle as the basic unit. For example, assuming that OSU frames and PBs have the same size and rate, 10 OSU frames carrying service data of the same service may occupy PBs numbered 0-9 in a transmission cycle including 20 PBs.
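  • A minimal sketch of this cycle-based allocation; the numbers follow the example above, and the function name and data structure are assumptions made only for illustration.

```python
def allocate_pbs(cycle_len, requests):
    """Assign consecutive PB numbers within one transmission cycle.
    requests: list of (service_name, number_of_pbs_per_cycle)."""
    allocation, next_pb = {}, 0
    for service, count in requests:
        allocation[service] = list(range(next_pb, next_pb + count))
        next_pb += count
        assert next_pb <= cycle_len, "transmission cycle exhausted"
    return allocation

# 10 OSU frames of one service occupy PBs 0-9 in a 20-PB transmission cycle.
print(allocate_pbs(20, [("service A", 10)]))  # {'service A': [0, 1, ..., 9]}
```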
  • an OSU frame carrying the same service data is called an OSU signal.
  • An OSU signal is a bit stream carrying a service data, and the frame format of the bit stream is the frame format of an OSU frame.
  • An OSU signal can include one or more OSU frames.
  • FIG. 4 is a first schematic flowchart of a data frame processing method provided by an embodiment of the present application. As shown in Fig. 4, the method for processing the data frame includes the following steps.
  • an OTN device acquires multiple service data.
  • Service data refers to services that can be carried by the optical transport network. For example, it may be an Ethernet service, a packet service, a wireless backhaul service, and the like.
  • the OTN device maps multiple service data to multiple service frames respectively.
  • the service frame can be an OSU frame or other data frames with a frame structure similar to the OSU frame.
  • description will be made by taking the service frame as an OSU frame as an example.
  • the OTN device maps multiple service frames to M groups of payload blocks PB in the payload area of N data frames.
  • the data frame can be an OTN frame, a FlexE frame, or an MTN frame, etc.
  • description will be made by taking the data frame as an OTN frame as an example.
  • the OTN equipment maps multiple OSU frames to the payload areas of N OTN frames.
  • Each OTN frame includes a payload area.
  • N OTN frames include N payload areas.
  • the OTN device will send these data frames to complete the transmission of service data.
  • N payload areas include M groups of PBs.
  • M and N are integers greater than 0.
  • Each group of PBs includes R × C PBs.
  • R × C may refer to R rows and C columns.
  • Each group of PBs can be used as a transmission cycle of the OTN equipment. At this time, each group of PBs may also be called a P frame.
  • R and C are integers greater than 1.
  • R and C can be in various combinations. For example, Table 1 shows examples of several combinations of R and C provided in the embodiments of the present application. It should be understood that in practical applications, those skilled in the art can combine R and C according to requirements.
  • each group of PBs the size of each PB is S1 bytes.
  • S1 can be 128, 192 or 240 etc.
  • the size of each payload area in N OTN frames occupied by PB is S2 bytes.
  • S2 is 4 × 3808.
  • Each group of PBs includes C1 groups of shared PBs.
  • C1 is a positive integer less than or equal to C.
  • Each set of shared PBs includes R1 shared PBs.
  • R1 is a positive integer less than or equal to R.
  • Each of the R1 shared PBs includes multiple pieces of data of multiple services. The number of the multiple pieces of data is greater than 1 and less than or equal to R.
  • the processing delay can be reduced or the transmission efficiency can be improved.
  • M × R × C × S1 = N × S2, that is, the total size of the M P frames is the same as the total size of the N payload areas (N × S2 bytes). Therefore, when the OTN device performs periodic data frame processing with N OTN frames as the target period, the PB division in the 1st data frame and the (N+1)th data frame may be the same.
  • the PB at the same position in the first data frame and the N+1th data frame carries the data of the same service. OTN equipment can verify the accuracy of data frames based on this feature. Therefore, the present application can improve the reliability of processing data frames.
  • R, C, S2, and S1 can have different value combinations.
  • Several value combinations provided in the embodiments of the present application are described below.
  • FIG. 5 is a schematic diagram of the first structure of a P frame provided by the embodiment of the present application.
  • a P frame 501 includes 12 × 10 PBs. Each PB is 192 bytes in size.
  • M may be an integer multiple of 119.
  • N can be an integer multiple of 180. For example, when M is equal to 119, N is equal to 180. When M is 238, N is 360.
  • the transmission period is 12 × 10 PBs.
  • the target cycle is the product of the transfer cycle and M.
  • the smaller M is, the smaller the target period is.
  • the shorter the target period, the faster the OTN equipment can adjust, add or delete the transmitted services. Therefore, the solutions disclosed in the embodiments of the present application can improve the dynamic adjustment capability of processing data frames.
  • in addition, with a shorter target period, the OTN equipment can detect an abnormality of the OTN frame in a shorter time.
  • the abnormality of the OTN frame includes: the PB division in the 1st OTN frame and the (N+1)th OTN frame is different. And/or, PBs at the same position in the 1st OTN frame and the (N+1)th OTN frame carry data of different services. Therefore, in order to improve the dynamic adjustment capability and reliability of processing data frames, K may be the least common multiple of K1 and S2, where K1 = R × C × S1 is the size of one P frame in bytes. At this time, M is equal to 119, and N is equal to 180.
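  • A small Python sketch of this choice of target period; the variable names are illustrative, and K1 = R × C × S1 (the size of one P frame in bytes) is inferred from the values above.

```python
from math import gcd

R, C, S1 = 12, 10, 192
S2 = 4 * 3808                  # payload area size in bytes
K1 = R * C * S1                # size of one P frame in bytes
K = K1 * S2 // gcd(K1, S2)     # least common multiple of K1 and S2
M = K // K1                    # smallest number of P frames in the target period
N = K // S2                    # smallest number of OTN frames in the target period
print(M, N)                    # 119 180
```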
  • the target cycle and the transfer cycle are only for different descriptions.
  • the target period may also be called a first cycle or a large transmission cycle, etc.
  • the unit of the target cycle can be P frame, PB, byte, or OTN frame number, etc.
  • the target period may be M groups of PBs. Each group of PBs can also be called a P frame. Therefore, the target period may be M P frames.
  • each group of PBs includes R × C PBs. Therefore, the target period may be M × R × C PBs.
  • each PB may include 192 bytes. Therefore, the target period may be M × R × C × 192 bytes.
  • the target period may be N OTN frames.
  • the P frame includes C1 groups of shared PBs.
  • C1 is equal to 4.
  • Each group of shared PBs may correspond to a column of shared PBs in FIG. 5 .
  • Each set of shared PBs includes R1 shared PBs.
  • R1 is equal to 12.
  • the P frame also includes C2 groups of exclusive PBs. The value of C2 is 6.
  • Each group of exclusive PBs may correspond to a column of exclusive PBs in FIG. 5 .
  • multiple exclusive PBs in a group of exclusive PBs transmit data of the same service.
  • Each exclusive PB only transmits the data of one service. A shared PB transmits multiple pieces of data of multiple services.
  • the exclusive PB and the shared PB are only for different descriptions.
  • the exclusive PB may also be called a second PB, or an independent PB, and so on.
  • the shared PB may also be called the first PB, or the set PB, and so on.
  • FIG. 6 is a schematic diagram of the first structure of the exclusive PB and the shared PB provided by the embodiment of the present application.
  • the exclusive PB 601 can be obtained by mapping an OSU frame of a service.
  • Exclusive PB and OSU frames are the same size.
  • S1 is equal to 192
  • the size of the exclusive PB and OSU frame is 192 bytes.
  • a shared PB includes multiple pieces of data of multiple services. Therefore, the shared PB is divided into multiple sub-PBs.
  • shared PB 602 is divided into 12 sub-PBs.
  • the sequence numbers of the 12 sub-PBs are shown in Figure 6.
  • the 12 sub-PBs include sub-PBs 1-12.
  • the size of the shared PB 602 is 192 bytes.
  • Each sub-PB has a size of 16 bytes.
  • the OTN equipment can allocate 12 sub-PBs to 12 different services.
  • the 12 sub-PBs correspond to the 12 services one by one.
  • the OTN equipment can also allocate 12 sub-PBs to less than 12 services.
  • the OTN equipment allocates 12 sub-PBs to 10 services.
  • One of the 10 services is allocated 3 sub-PBs.
  • the other 9 services are assigned 1 sub-PB respectively.
  • it will be described by taking the OTN equipment allocating 12 sub-PBs to 12 different services as an example. It should be understood that in practical applications, some sub-PBs may not be allocated.
  • the OTN equipment allocates 12 sub-PBs to 11 services. Each service is assigned a sub-PB. One sub-PB remains unallocated.
  • in a shared PB, one service is allocated only one sub-PB.
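  • The following sketch shows two possible allocations of the 12 sub-PBs consistent with the description above; the mapping itself is only an illustration, not a layout defined by the patent.

```python
# Allocate the 12 sub-PBs of a shared PB to services.
# Case 1: 12 services, one sub-PB each.
allocation_12 = {sub_pb: f"service {sub_pb}" for sub_pb in range(1, 13)}

# Case 2: 10 services, where service 1 gets 3 sub-PBs and the other 9 get 1 each.
allocation_10 = {1: "service 1", 2: "service 1", 3: "service 1"}
allocation_10.update({sub_pb: f"service {sub_pb - 2}" for sub_pb in range(4, 13)})
assert len(set(allocation_10.values())) == 10
```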
  • the size of 1 sub-PB is 16 bytes.
  • the size of an OSU frame is 192 bytes. Therefore, OTN equipment needs a set of shared PBs to transmit an OSU frame.
  • a set of shared PBs includes R1 shared PBs. In FIG. 5, R1 is equal to 12.
  • the OTN device splits an OSU frame into 12 pieces of data. There is a one-to-one correspondence between the 12 pieces of data and the R1 sub-PBs in the R1 shared PBs.
  • Each sub-PB transfers 16 bytes of data. For example, see FIG. 7.
  • FIG. 7 is a first schematic structural diagram of mapping an OSU frame to a sub-PB provided in the embodiment of the present application.
  • the size of one OSU frame is 192 bytes.
  • the OTN device splits an OSU frame into 12 pieces of data.
  • the 12 pieces of data are in one-to-one correspondence with the 12 sub-PBs in the R1 shared PB.
  • each shared PB in the R1 shared PBs includes a sub-PB 1 .
  • the OTN equipment can also split the OSU frame of another service into 12 pieces of data.
  • the 12 pieces of data are in one-to-one correspondence with the 12 sub-PBs in the R1 shared PB.
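  • A minimal sketch of the splitting described for FIG. 7; the group of shared PBs is modeled as 12 byte arrays of 192 bytes, and the slot numbering and function name are assumptions made for illustration.

```python
def map_osu_to_sub_pbs(osu_frame: bytes, shared_group, slot: int, sub_size: int = 16):
    """Split a 192-byte OSU frame into 12 pieces of 16 bytes and write piece i
    into sub-PB `slot` of the i-th shared PB in the group."""
    assert len(osu_frame) == len(shared_group) * sub_size
    for i, shared_pb in enumerate(shared_group):
        piece = osu_frame[i * sub_size:(i + 1) * sub_size]
        offset = (slot - 1) * sub_size
        shared_pb[offset:offset + sub_size] = piece

group = [bytearray(192) for _ in range(12)]             # R1 = 12 shared PBs
map_osu_to_sub_pbs(bytes(range(192)), group, slot=1)    # one service uses sub-PB 1
```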
  • G is approximately equal to 1.23895431Gbps.
  • C equals 10.
  • G1 is approximately equal to 123.895Mbps.
  • T1 is approximately equal to 12.4 microseconds.
  • G2 is approximately equal to 10.3246Mbps.
  • G1 is related to C.
  • G2 is related to C and R.
  • G1 can be greater than 100Mbps.
  • G2 can be less than 11Mbps.
  • G2 is also referred to as the transmission rate of each sub-PB.
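  • The rates above can be reproduced with the following sketch; G is the approximate OPU0 payload rate, and the formulas G1 = G / C, G2 = G1 / R1 and T1 = 8·S1 / G1 are inferred from the listed values and are stated here as assumptions.

```python
G = 1.23895431e9        # approximate OPU0 payload rate, bit/s
C, R1, S1 = 10, 12, 192

G1 = G / C              # rate of one column of PBs (one exclusive service) ~ 123.895 Mbit/s
G2 = G1 / R1            # rate of one sub-PB (one shared service)           ~ 10.32 Mbit/s
T1 = S1 * 8 / G1        # time to accumulate one PB at rate G1              ~ 12.4 us

print(f"G1 = {G1/1e6:.3f} Mbps, G2 = {G2/1e6:.4f} Mbps, T1 = {T1*1e6:.1f} us")
```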
  • the transfer rate of OPU1 is approximately equal to twice that of OPU0. Therefore, when the OPUk type of the OTN frame is OPU1, the OTN device needs to transmit approximately 2 × N OTN frames within a target period. The 2 × N OTN frames include 2 × M groups of PBs. At this time, the OTN device can maintain two mapping relationships. One mapping relationship includes the mapping relationship between N OTN frames and M groups of PBs. The other mapping relationship includes the mapping relationship between another N OTN frames and another M groups of PBs. Similarly, when the OPUk type of the OTN frame is OPU2, OPU3, OPU4 or OPUflex, the OTN device can maintain more mapping relationships according to a similar method. Table 2 is an example of G1 and G2 for different OPUk types.
  • C2 is equal to 6. Therefore, an OTN device can transmit 6 services running on G1 in one mapping relationship.
  • C1 is equal to 4.
  • each shared PB can transmit 12 services running on G2. Therefore, in a mapping relationship, an OTN device can transmit 54 services.
  • the OTN device can maintain 84 mapping relationships. Therefore, the OTN equipment can transmit 54 × 84 services.
  • the OTN device can map the PBs to the payload area of the OTN frame in order from left to right and from top to bottom. For example, the OTN device first maps the PB in the first row and the first column in Figure 5 to the starting position of the payload area. The OTN device then maps the PB in the first row and the second column to the position after the start position. After mapping the PBs of the first row, the OTN device starts to map the PBs of the second row.
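  • A sketch of this left-to-right, top-to-bottom mapping order; it is a straightforward row-major serialization, and the function name is illustrative.

```python
def serialize_p_frame(p_frame):
    """p_frame: R x C grid of PBs (each PB a bytes object).
    Returns the byte sequence written into the OTN payload area,
    row by row and, within a row, column by column."""
    payload = bytearray()
    for row in p_frame:                 # top to bottom
        for pb in row:                  # left to right
            payload += pb
    return bytes(payload)

R, C, S1 = 12, 10, 192
grid = [[bytes([r * C + c] * S1) for c in range(C)] for r in range(R)]
assert len(serialize_p_frame(grid)) == R * C * S1
```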
  • the first column of the P frame is the shared PB column.
  • the first PB of the shared PB column can be used as overhead PB. Some overhead content may be carried in the overhead PB. Therefore, in practical applications, the first column of a P frame can always be used as a shared PB column.
  • FIG. 5 to FIG. 7 are only one or more examples provided in the embodiments of the present application. In practical applications, those skilled in the art may make adaptive modifications to the above examples according to requirements.
  • the value of C1 may be 10.
  • the exclusive PB is not included in the P frame.
  • the OTN device divides a shared PB into 11 sub-PBs. Each sub-PB has a size of 16 bytes. The remaining 16 bytes are used as management fields.
  • different from FIG. 7, the OTN device splits one OSU frame into 24 pieces of data. 12 of the pieces of data are in one-to-one correspondence with the 12 sub-PBs in one group of shared PBs. The remaining 12 pieces of data are in one-to-one correspondence with the 12 sub-PBs 1 in another group of shared PBs.
  • FIG. 8 is a second schematic structural diagram of a P frame provided by an embodiment of the present application.
  • a P frame 801 includes 17 × 7 PBs. The 17 × 7 PBs are the transmission period of the OTN equipment. Each PB is 192 bytes in size.
  • M may be an integer multiple of 2.
  • N can be an integer multiple of 3. For example, when M is equal to 2, N is equal to 3. When M is 4, N is 6.
  • the P frame includes C1 groups of shared PBs.
  • C1 is equal to 3.
  • Each group of shared PBs may correspond to a column of shared PBs in FIG. 8 .
  • a column of PBs of a P frame includes 17 PBs, that is, R is equal to 17.
  • a column of PBs includes a group of shared PBs and X overhead PBs. X is an integer greater than 0. For example, in FIG. 8, X is equal to 1.
  • a set of shared PBs includes 16 shared PBs, that is, R1 is equal to 16.
  • a P frame includes 3 overhead PBs. There is a one-to-one correspondence between the 3 overhead PBs and the 3 groups of shared PBs.
  • the overhead PB may be used to record the identifier of a service in a certain group of shared PBs corresponding to the overhead PB.
  • the service identifier may be a multiplex structure identifier (multiplex structure identifier, MSI).
  • the P frame also includes C2 groups of exclusive PBs. The value of C2 is 4.
  • Each group of exclusive PBs may correspond to a column of exclusive PBs in FIG. 8.
  • a column of exclusive PBs includes 17 exclusive PBs. Each exclusive PB only transmits the data of one service. Each shared PB transmits multiple pieces of data of multiple services.
  • FIG. 9 is a second structural schematic diagram of an exclusive PB and a shared PB provided by the embodiment of the present application.
  • the exclusive PB 901 can be obtained by mapping an OSU frame of a service.
  • Exclusive PB and OSU frames are the same size.
  • S1 is equal to 192
  • the size of the exclusive PB and OSU frame is 192 bytes.
  • a shared PB includes multiple pieces of data of multiple services. Therefore, the shared PB is divided into multiple sub-PBs.
  • shared PB 902 is divided into 16 sub-PBs.
  • the sequence numbers of the 16 sub-PBs are shown in Figure 9 .
  • the 16 sub-PBs include sub-PBs 1-16.
  • the size of the shared PB 902 is 192 bytes.
  • Each sub-PB has a size of 12 bytes.
  • OTN equipment can allocate 16 sub-PBs to 16 different services. There is a one-to-one correspondence between 16 sub-PBs and 16 services. At this time, one service is only allocated to one sub-PB.
  • the size of 1 sub-PB is 12 bytes. In the embodiment of the present application, it is assumed that the size of an OSU frame is 192 bytes. Therefore, OTN equipment needs a set of shared PBs to transmit an OSU frame.
  • a set of shared PBs includes R1 shared PBs. In FIG. 8, R1 is equal to 16.
  • the OTN device splits an OSU frame into 16 pieces of data. There is a one-to-one correspondence between the 16 pieces of data and the R1 shared PBs. Each shared PB includes 12 bytes of the data. For example, see FIG. 10.
  • FIG. 10 is a second schematic structural diagram of mapping an OSU frame into a sub-PB provided in the embodiment of the present application.
  • the size of one OSU frame is 192 bytes.
  • the OTN device splits an OSU frame into 16 pieces of data.
  • the 16 pieces of data are in one-to-one correspondence with the 16 sub-PBs in the R1 shared PB.
  • each shared PB in the R1 shared PBs includes a sub-PB 1 .
  • the OTN equipment can also split the OSU frame of another service into 16 pieces of data.
  • the 16 pieces of data are in one-to-one correspondence with the 16 sub-PBs in the R1 shared PB.
  • FIG. 11 provides a third schematic structural diagram of a P frame according to the embodiment of the present application.
  • the OTN device uses the first sub-PB of each shared PB as a management field.
  • the following takes the first column PB in the P frame 801 as an example for description.
  • the first column of PBs includes 1 overhead PB and 16 shared PBs.
  • the first PB in the first column PB is the overhead PB.
  • the OTN equipment divides each of the 16 shared PBs into 16 sub-PBs, as shown in FIG. 11.
  • the 16 sub-PBs of the first shared PB are respectively 1-1, 1-2, 1-3, ..., 1-16.
  • the 16 sub-PBs of the second shared PB are respectively 2-1, 2-2, 2-3, ..., 2-16.
  • the 16 sub-PBs of the 16th shared PB are respectively 16-1, 16-2, 16-3, ..., 16-16.
  • Each sub-PB has a size of 12 bytes.
  • the OTN device uses the first sub-PB in each shared PB as a management field.
  • the management fields include 1-1, 2-1, 3-1, . . . , 16-1.
  • the OTN equipment can allocate the remaining 15 sub-PBs in each shared PB to 15 different services.
  • Each service corresponds to a sub-PB.
  • Each sub-PB has a size of 12 bytes.
  • Each service corresponds to an OSU frame.
  • 15 services correspond to 15 OSU frames.
  • the 15 OSU frames are OSU frame 1, OSU frame 2, . . . , OSU frame 15, respectively.
  • the OTN equipment needs 16 shared PBs to transmit one OSU frame.
  • the data of OSU frame 1 is transmitted through 16 sub-PBs.
  • the 16 sub-PBs are respectively 1-2, 2-2, 3-2, ..., 16-2.
  • the data of OSU frame 2 is transmitted through 16 sub-PBs.
  • the 16 sub-PBs are respectively 1-3, 2-3, 3-3, ..., 16-3.
  • the data of OSU frame 15 is transmitted through 16 sub-PBs.
  • the 16 sub-PBs are respectively 1-16, 2-16, 3-16, ..., 16-16.
  • each OSU frame may carry OSU overhead.
  • 1-2 can carry the 7-byte overhead of OSU frame 1.
  • 1-3 can carry the 7-byte overhead of OSU frame 2.
  • 1-16 can bear the 7-byte overhead of OSU frame 15.
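  • The allocation described above can be summarized by the following sketch, which builds the (shared PB, sub-PB) map for one column of FIG. 11; the 1-based indexing matches the text, but the dictionary representation is an illustrative assumption.

```python
# One column of FIG. 11: 1 overhead PB followed by 16 shared PBs,
# each shared PB split into 16 sub-PBs of 12 bytes.
allocation = {}
for pb in range(1, 17):            # the 16 shared PBs
    allocation[(pb, 1)] = "management field"
    for sub in range(2, 17):       # sub-PBs 2..16 carry OSU frames 1..15
        allocation[(pb, sub)] = f"OSU frame {sub - 1}"

# OSU frame 1 occupies sub-PBs 1-2, 2-2, ..., 16-2 (16 x 12 = 192 bytes).
assert [k for k, v in allocation.items() if v == "OSU frame 1"] == [
    (pb, 2) for pb in range(1, 17)
]
```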
  • G is approximately equal to 1.23895431Gbps.
  • C equals 7.
  • G1 is approximately equal to 176.993Mbps.
  • T1 is approximately equal to 8.7 microseconds.
  • the delay T2 of the traffic in each shared PB is equal to T1 × 17.
  • T2 is approximately equal to 148 microseconds.
  • the waiting delay for each shared PB is also T1.
  • C2 is equal to 4. Therefore, an OTN device can transmit four services running on G1 in one mapping relationship.
  • C1 is equal to 3.
  • each group of shared PBs can transmit 16 services running on G2. Therefore, in a mapping relationship, an OTN device can transmit 52 services.
  • when the OPUk type of the OTN frame is OPU1, OPU2, OPU3 or OPU4, the OTN device can increase the number of transmitted services through multiple mapping relationships. For example, for OPU4, the OTN device can maintain 84 mapping relationships. Therefore, the OTN equipment can transmit 52 × 84 services.
  • Table 3 is an example of G1 and G2 for different OPUk types.
  • the overhead field consists of 64 bytes.
  • R is 12.
  • C is 10.
  • S1 is 192.
  • the schematic structural diagram of the P frame is similar to FIG. 5 .
  • a P frame includes 12 × 10 PBs. Each PB is 192 bytes in size.
  • S2 = 192 × 79.
  • M may be an integer multiple of 79.
  • N can be an integer multiple of 120. For example, when M equals 79, N equals 120. When M is equal to 158, N is equal to 240.
  • G is approximately equal to 1.23374861962Gbps.
  • C equals 10.
  • G1 is approximately equal to 123.375Mbps.
  • T1 is approximately equal to 12.4 microseconds.
  • R is 8.
  • C is 12.
  • S1 is 192.
  • the P frame includes 8 × 12 PBs. Each PB is 192 bytes in size.
  • S2 = 192 × 79.
  • M can be an integer multiple of 79.
  • N can be an integer multiple of 96. For example, when M equals 79, N equals 96. When M is equal to 158, N is equal to 192.
  • a P frame may include C1 groups of shared PBs. C1 is an integer less than or equal to 12.
  • Each group of shared PBs may include 8 shared PBs, that is, R1 is equal to 8.
  • the OTN equipment can divide the shared PB into 8 sub-PBs.
  • the size of the shared PB is 192 bytes.
  • Each sub-PB has a size of 24 bytes.
  • G is approximately equal to 1.23374861962Gbps.
  • C equals 12.
  • G1 is approximately equal to 102.812Mbps.
  • T1 is approximately equal to 14.9 microseconds.
  • R is 8.
  • C is 12.
  • S1 is 192.
  • the P frame includes 8 × 12 PBs.
  • Each PB is 192 bytes in size.
  • S2 = 128 × 119.
  • M can be an integer multiple of 119.
  • N may be an integer multiple of 144.
  • a P frame may include C1 groups of shared PBs.
  • C1 is an integer less than or equal to 12.
  • Each group of shared PBs may include 8 shared PBs, that is, R1 is equal to 8.
  • the OTN equipment can divide the shared PB into 8 sub-PBs.
  • the size of the shared PB is 192 bytes.
  • Each sub-PB has a size of 24 bytes.
  • G is approximately equal to 1.23895431Gbps.
  • C equals 12.
  • G1 is approximately equal to 103.246Mbps.
  • T1 is approximately equal to 14.9 microseconds.
  • S1 equals 192. In practical applications, S1 may also be other values. An example is given below.
  • R is 10.
  • C is 12.
  • S1 is 240.
  • the P frame includes 10 × 12 PBs.
  • Each PB is 240 bytes in size.
  • S2 = 128 × 119.
  • M can be an integer multiple of 119.
  • N can be an integer multiple of 225.
  • a P frame may include C1 groups of shared PBs.
  • C1 is an integer less than or equal to 12.
  • Each group of shared PBs may include 10 shared PBs, that is, R1 is equal to 10.
  • the OTN equipment can divide the shared PB into 10 sub-PBs.
  • the size of the shared PB is 240 bytes.
  • Each sub-PB has a size of 24 bytes.
  • G is approximately equal to 1.23895431Gbps.
  • C equals 12.
  • G1 is approximately equal to 103.246Mbps.
  • T1 is approximately equal to 18.6 microseconds.
  • R is 8.
  • C is 12.
  • S1 is 128.
  • the P frame includes 8 × 12 PBs.
  • the size S1 of each PB is 128 bytes.
  • S2 = 128 × 119.
  • M can be an integer multiple of 119.
  • N can be an integer multiple of 96.
  • a P frame may include C1 groups of shared PBs.
  • C1 is an integer less than or equal to 12.
  • Each group of shared PBs may include 8 shared PBs, that is, R1 is equal to 8.
  • the OTN equipment can divide the shared PB into 8 sub-PBs.
  • the size of the shared PB is 128 bytes.
  • Each sub-PB has a size of 16 bytes.
  • G is approximately equal to 1.23895431Gbps.
  • C equals 12.
  • G1 is approximately equal to 103.246Mbps.
  • T1 is approximately equal to 9.9 microseconds.
  • R is 10.
  • C is 12.
  • S1 is 240.
  • the P frame includes 10 × 12 PBs.
  • Each PB is 240 bytes in size.
  • S2 = 192 × 79.
  • M can be an integer multiple of 79.
  • N may be an integer multiple of 150.
  • a P frame may include C1 groups of shared PBs.
  • C1 is an integer less than or equal to 12.
  • Each group of shared PBs may include 10 shared PBs, that is, R1 is equal to 10.
  • the OTN equipment can divide the shared PB into 10 sub-PBs.
  • the size of the shared PB is 240 bytes.
  • Each sub-PB has a size of 24 bytes.
  • G is approximately equal to 1.23374861962Gbps.
  • C equals 12.
  • G1 is approximately equal to 102.812Mbps.
  • T1 is approximately equal to 18.7 microseconds.
  • R is 8.
  • C is 12.
  • S1 is 128.
  • the P frame includes 8 × 12 PBs.
  • Each PB is 128 bytes in size.
  • S2 = 192 × 79.
  • M can be an integer multiple of 79.
  • N can be an integer multiple of 64.
  • a P frame may include C1 groups of shared PBs.
  • C1 is an integer less than or equal to 12.
  • Each group of shared PBs may include 8 shared PBs, that is, R1 is equal to 8.
  • the OTN equipment can divide the shared PB into 8 sub-PBs.
  • the size of the shared PB is 128 bytes.
  • Each sub-PB has a size of 16 bytes.
  • G is approximately equal to 1.23374861962Gbps.
  • C equals 12.
  • G1 is approximately equal to 102.812Mbps.
  • the delay T1 of the exclusive PB is equal to S1 / G1.
  • T1 is approximately equal to 10 microseconds.
  • G2 is approximately equal to 12.8515Mbps.
  • the waiting delay for each shared PB is also T1.
  • the delay T2 of the traffic in each shared PB is equal to T1 × 8.
  • T2 is approximately equal to 80 microseconds.
  • R equals 17 and C equals 7.
  • R can be an integer multiple of 17 under the condition that C remains unchanged.
  • R can be 34, or 68, etc. These are described separately below.
  • R is 34.
  • C is 7.
  • S1 is 192.
  • the P frame includes 34 × 7 PBs.
  • Each PB is 192 bytes in size.
  • S2 = 128 × 119.
  • M can be an integer multiple of 1.
  • N can be an integer multiple of 3.
  • a P frame may include C1 groups of shared PBs.
  • C1 is an integer less than or equal to 7.
  • Each group of shared PBs may include 32 shared PBs, that is, R1 is equal to 32.
  • the OTN equipment can divide the shared PB into 32 sub-PBs.
  • the size of the shared PB is 192 bytes.
  • the size of each sub-PB is 6 bytes.
  • Each column of PBs of a P frame includes 34 PBs.
  • a column of shared PBs consists of 32 PBs.
  • 32 PBs and 2 overhead PBs form a column of PBs.
  • the P frame also includes C2 groups of exclusive PBs.
  • Each set of exclusive PBs may include 34 exclusive PBs.
  • G is approximately equal to 1.23895431Gbps.
  • C equals 7.
  • G1 is approximately equal to 176.993Mbps.
  • T1 is approximately equal to 8.7 microseconds.
  • R is 68.
  • C is 7.
  • S1 is 192.
  • the P frame includes 68 × 7 PBs. Each PB is 192 bytes in size.
  • S2 = 128 × 119.
  • M can be an integer multiple of 1.
  • N can be an integer multiple of 6.
  • a P frame may include C1 groups of shared PBs.
  • C1 is an integer less than or equal to 7.
  • Each group of shared PBs may include 64 shared PBs, that is, R1 is equal to 64.
  • the OTN equipment can divide the shared PB into 64 sub-PBs.
  • the size of the shared PB is 192 bytes. Each sub-PB has a size of 3 bytes.
  • Each column of PBs of a P frame includes 68 PBs.
  • a column of shared PBs consists of 64 PBs. 64 PBs and 4 overhead PBs form a column of PBs.
  • the P frame also includes C2 groups of exclusive PBs. Each group of exclusive PBs may include 68 exclusive PBs.
  • G is approximately equal to 1.23895431Gbps.
  • C equals 7.
  • G1 is approximately equal to 176.993Mbps.
  • T1 is approximately equal to 8.7 microseconds.
  • S can be 3, 6, 12, 16 or 24 (bytes). In practical applications, the value of S may be equal to 8 (bytes). Several examples of this are described below.
  • R is 24.
  • C is 10.
  • S1 is 192.
  • the P frame includes 24 × 10 PBs.
  • Each PB is 192 bytes in size.
  • S2 = 128 × 119.
  • M can be an integer multiple of 119.
  • N can be an integer multiple of 360.
  • a P frame may include C1 groups of shared PBs.
  • C1 is an integer less than or equal to 10.
  • Each group of shared PBs may include 24 shared PBs, that is, R1 is equal to 24.
  • the OTN equipment can divide the shared PB into 24 sub-PBs.
  • the size of the shared PB is 192 bytes.
  • the size of each sub-PB is 8 bytes.
  • G is approximately equal to 1.23895431Gbps.
  • C equals 10.
  • G1 is approximately equal to 123.895431Mbps.
  • T1 is approximately equal to 12.4 microseconds.
  • R is 24.
  • C is 12.
  • S1 is 192.
  • the P frame includes 24 × 12 PBs.
  • Each PB is 192 bytes in size.
  • S2 = 128 × 119.
  • M can be an integer multiple of 119.
  • N can be an integer multiple of 432.
  • a P frame may include C1 groups of shared PBs.
  • C1 is an integer less than or equal to 12.
  • Each group of shared PBs may include 24 shared PBs, that is, R1 is equal to 24.
  • the OTN equipment can divide the shared PB into 24 sub-PBs.
  • the size of the shared PB is 192 bytes.
  • the size of each sub-PB is 8 bytes.
  • G is approximately equal to 1.23895431Gbps.
  • C equals 12.
  • G1 is approximately equal to 103.246Mbps.
  • T1 is approximately equal to 14.88 microseconds.
  • the waiting delay for each shared PB is also T1.
  • the transfer rate of OPU1 is approximately equal to twice that of OPU0.
  • the OTN device needs to transmit approximately 2 ⁇ N OTN frames within a target period.
  • 2 ⁇ N OTN frames include 2 ⁇ M groups of PBs.
  • the OTN device can maintain two mapping relationships.
  • a mapping relationship includes mapping relationships between N OTN frames and M groups of PBs.
  • Another mapping relationship includes the mapping relationship between another N OTN frames and another M groups of PBs.
  • the OPUk type of the OTN frame is OPU2, OPU3, OPU4, the OTN device can maintain more mapping relationships according to a similar method.
  • Table 4 is an example of G1 and G2 for different OPUk types.
  • the OTN device can maintain the number of different mapping relationships, so as to control the transmission rate of each mapping relationship.
  • the OTN device can maintain 80, 83 or 84 mapping relationships.
  • R is 30.
  • C is 12.
  • S1 is 240.
  • the P frame includes 30 × 12 PBs.
  • Each PB is 240 bytes in size.
  • S2 = 128 × 119.
  • M can be an integer multiple of 119.
  • N can be an integer multiple of 675.
  • a P frame may include C1 groups of shared PBs.
  • C1 is an integer less than or equal to 12.
  • Each group of shared PBs may include 30 shared PBs, that is, R1 is equal to 30.
  • the OTN equipment can divide the shared PB into 30 sub-PBs.
  • the size of the shared PB is 240 bytes.
  • the size of each sub-PB is 8 bytes.
  • G is approximately equal to 1.23895431Gbps.
  • C equals 12.
  • G1 is approximately equal to 103.246Mbps.
  • T1 is approximately equal to 18.6 microseconds.
  • the waiting delay for each shared PB is also T1.
  • each group of PBs may include exclusive PBs and shared PBs. Therefore, the OTN device can also add a shared identifier in the N OTN frames. The shared identifier is used to mark the C1 groups of shared PBs in each group of PBs. Similarly, the OTN device can also add an exclusive identifier in the N OTN frames. The exclusive identifier is used to mark the C2 groups of exclusive PBs in each group of PBs. The shared identifier and/or the exclusive identifier may be located in the aforementioned overhead field or overhead PB.
  • a group of shared PBs in a P frame can form a column of PBs together with X overhead PBs.
  • a group of shared PBs (16 shared PBs) and 1 overhead PB form a column of PBs.
  • an overhead PB may also exist in the column where the exclusive PBs reside.
  • the total number of overhead PBs in a P frame is equal to X × C.
  • a P frame includes 4 groups of exclusive PBs. Each group of exclusive PBs corresponds to a column of PBs.
  • a column of PBs includes 17 PBs.
  • the 17 PBs include 16 exclusive PBs and one overhead PB.
  • the P frame includes 7 overhead PBs.
  • the M groups of PBs may include P1 groups of PBs and P2 groups of PBs.
  • P1 and P2 are integers greater than 0.
  • the sum of P1 and P2 is equal to M.
  • Each group of PBs in the P2 groups includes C2 groups of exclusive PBs and C1 groups of shared PBs.
  • Each group of PBs in the P1 groups includes C3 groups of overhead PBs, C4 groups of exclusive PBs and C1 groups of shared PBs. These are described separately below.
  • FIG. 13 is a fourth schematic structural diagram of a P frame provided by the embodiment of the present application.
  • a P frame 1301 includes PBs of 24 rows × 12 columns (in the figure, not all PBs are shown), including 24 × 12 PBs in total. The 24 × 12 PBs are the transmission period of the OTN equipment.
  • Each PB is 192 bytes in size.
  • M may be an integer multiple of 119.
  • N can be an integer multiple of 432.
  • the P frame includes C1 groups of shared PBs.
  • C1 is equal to 6.
  • Each group of shared PBs may correspond to a column of shared PBs in FIG. 13 .
  • a column of PBs of a P frame includes 24 PBs, that is, R is equal to 24.
  • the P frame also includes C2 groups of exclusive PBs. In FIG. 13, C2 is equal to 6.
  • Each group of exclusive PBs may correspond to a column of exclusive PBs in FIG. 13 .
  • a column of exclusive PBs includes 24 exclusive PBs.
  • Each exclusive PB only transmits the data of one service.
  • Each group of exclusive PBs is used to carry data belonging to the same service.
  • Each shared PB transmits data of multiple services.
  • Each shared PB is 192 bytes in size.
  • Each shared PB includes 24 sub-PBs.
  • the size of each sub-PB is 8 bytes.
  • an OTN device may use one or more sub-PBs in the 24 sub-PBs as management fields.
  • the OTN device uses the first sub-PB of each shared PB as a management field.
  • the seventh column of PBs includes 24 shared PBs.
  • the OTN equipment divides each of the 24 shared PBs into 24 sub-PBs.
  • the 24 sub-PBs of the first shared PB are respectively 1-1, 1-2, 1-3, ..., 1-24.
  • the 24 sub-PBs of the second shared PB are respectively 2-1, 2-2, 2-3, ..., 2-24.
  • the 24 sub-PBs of the 24th shared PB are respectively 24-1, 24-2, 24-3, ..., 24-24.
  • the OTN device uses the first sub-PB in each shared PB as a management field. For example, as shown in FIG. 13 , 1-1, 2-1, 3-1, ..., 24-1 are management fields.
  • the OTN equipment can allocate the remaining 23 sub-PBs in each shared PB to 23 different services. Each service corresponds to a sub-PB. The size of each sub-PB is 8 bytes. Each service corresponds to an OSU frame. Among them, 23 services correspond to 23 OSU frames.
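The sub-PB numbering above amounts to a simple two-dimensional index: shared PB i contributes sub-PB i-j, where j = 1 is reserved for management and j = 2 through 24 carry services. The sketch below is a minimal, illustrative allocation table for one shared-PB column under these assumptions; the dictionary layout and names are not defined by the specification.

```python
# Illustrative sub-PB allocation for one shared-PB column of FIG. 13:
# 24 shared PBs, each split into 24 sub-PBs of 8 bytes; sub-PB 1 is a management field.
NUM_SHARED_PB = 24
SUB_PB_PER_PB = 24
NUM_SERVICES = SUB_PB_PER_PB - 1   # 23 services share the column

allocation = {}
for i in range(1, NUM_SHARED_PB + 1):            # shared PB index within the column
    allocation[(i, 1)] = "management"            # sub-PB i-1 carries the management field
    for j in range(2, SUB_PB_PER_PB + 1):        # sub-PBs i-2 ... i-24
        allocation[(i, j)] = f"service {j - 1}"  # service k uses sub-PB i-(k+1) in every PB

# Each service therefore owns exactly 24 sub-PBs (one per shared PB), i.e. 24 x 8 = 192 bytes,
# which matches the size of one OSU frame.
assert sum(1 for v in allocation.values() if v == "service 1") == NUM_SHARED_PB
```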
  • The 23 OSU frames are OSU frame 1, OSU frame 2, ..., OSU frame 23. When the size of an OSU frame is 192 bytes, the OTN device needs 24 shared PBs to transmit one OSU frame. OSU frame 1 is transmitted through the 24 sub-PBs 1-2, 2-2, 3-2, ..., 24-2; OSU frame 2 is transmitted through the 24 sub-PBs 1-3, 2-3, 3-3, ..., 24-3; and so on, until OSU frame 23, which is transmitted through the 24 sub-PBs 1-24, 2-24, 3-24, ..., 24-24.
  • Each OSU frame may carry OSU frame overhead. For example, the 7-byte overhead of OSU frame 1 can be carried in sub-PB 1-2, the 7-byte overhead of OSU frame 2 can be carried in sub-PB 1-3, and so on, so that sub-PB 1-24 can carry the 7-byte overhead of OSU frame 23 (a slicing sketch follows this item).
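Under the allocation above, carrying one OSU frame reduces to cutting the 192-byte frame into 24 consecutive 8-byte slices and writing slice i into sub-PB i-(k+1) of the i-th shared PB, where k is the service index. The sketch below illustrates this reading of the text; the helper name and data layout are illustrative assumptions, not taken from the specification.

```python
# Illustrative mapping of one 192-byte OSU frame onto the 24 shared PBs of a column.
def slice_osu_frame(osu_frame: bytes, service_index: int):
    """Return {(shared_pb, sub_pb): 8-byte slice} for one OSU frame.

    service_index runs from 1 to 23; service k uses sub-PB column k + 1.
    The first slice (bytes 0..7) contains the 7-byte OSU overhead, so it ends
    up in sub-PB 1-(k+1), matching the text above.
    """
    assert len(osu_frame) == 192
    column = service_index + 1
    return {(i + 1, column): osu_frame[8 * i: 8 * (i + 1)] for i in range(24)}

placement = slice_osu_frame(bytes(range(192)), service_index=1)
assert placement[(1, 2)] == bytes(range(8))        # overhead-bearing slice in sub-PB 1-2
assert placement[(24, 2)] == bytes(range(184, 192))
```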
  • FIG. 14 is a schematic structural diagram of a P frame including an overhead PB column provided by an embodiment of the present application. As shown in FIG. 14, a P frame 1401 includes 1 group of overhead PBs, 6 groups of exclusive PBs and 5 groups of shared PBs, that is, C3 is equal to 1, C4 is equal to 6 and C1 is equal to 5.
  • The group of overhead PBs includes 24 overhead PBs, and one or more of the 24 overhead PBs can be used to transmit information related to the multiple service data. Specifically, the related information may be the MSI of the multiple service data and/or other overhead, such as management information or control information of the service flows, or the position at which the next overhead PB column appears. The shared PBs and exclusive PBs of the M groups of PBs are used to transmit the multiple service data.
  • FIG. 14 is only one example of the C3 groups of overhead PBs provided in this application. In practice, other P frames provided by the embodiments of this application may also include C3 groups of overhead PBs. For example, when R is equal to 12, C is equal to 10 and C1 is equal to 6, C3 can be equal to 1 and C4 can be equal to 5.
  • Across the M groups of PBs, the number of overhead PB columns may account for between 0.1% and 10% of the total number of PB columns, where the number of overhead PB columns T is equal to the product of P1 and C3, and the total number of PB columns is equal to the product of M and C (a numeric check follows this item). Each overhead PB column can be placed at a fixed position in the P frame structure and/or be specially identified.
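The 0.1% to 10% bound can be checked directly from P1, C3, M and C. The snippet below is an illustrative check only; the example values of P1 are assumptions chosen to satisfy the bound, not values fixed by the specification.

```python
# Illustrative check of the overhead-column ratio T = (P1 x C3) / (M x C).
def overhead_column_ratio(P1: int, C3: int, M: int, C: int) -> float:
    return (P1 * C3) / (M * C)

M, C, C3 = 119, 12, 1                  # FIG. 13 / FIG. 14 style parameters
for P1 in (2, 14, 119):                # assumed example splits of the M P frames
    T = overhead_column_ratio(P1, C3, M, C)
    print(P1, f"{T:.4f}", 0.001 <= T <= 0.1)   # all three satisfy the stated range
```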
  • In the P frame examples provided in this application, a P frame includes PBs arranged in R rows and C columns. It should be understood that other arrangements are possible in practice. For example, a P frame may include PBs arranged in R columns and C rows, in which case the OTN device can map the PBs of the P frame to the payload area of the OTN frame in order from top to bottom and from left to right. As another example, a P frame may include 1 row of R×C PBs, in which case the OTN device can map the PBs of the P frame to the payload area of the OTN frame in order from left to right (a minimal flattening sketch follows this item).
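Either arrangement reduces to a flattening order over the same R×C grid of PBs. The helper below is an assumed illustration of the two orders, not part of the specification.

```python
# Illustrative flattening of an R x C P frame into the PB order written to the payload area.
def pb_order(R: int, C: int, row_major: bool = True):
    """Return (row, column) indices in the order the PBs are mapped to the payload area."""
    if row_major:   # left to right within a row, then top to bottom (FIG. 5 style)
        return [(r, c) for r in range(R) for c in range(C)]
    else:           # top to bottom within a column, then left to right
        return [(r, c) for c in range(C) for r in range(R)]

assert pb_order(2, 3)[:4] == [(0, 0), (0, 1), (0, 2), (1, 0)]
```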
  • In the foregoing FIG. 6, the OTN device divides the entire shared PB into 12 sub-PBs. In practice, the OTN device may instead divide a shared PB into a management field and several sub-PBs. FIG. 15 is a schematic structural diagram of a shared PB provided by an embodiment of the present application. As shown in FIG. 15, the OTN device divides the shared PB 1501 into 1 management field, denoted OH, and 11 sub-PBs.
  • The size of the management field may be the same as the size of one sub-PB. For example, when the size of the shared PB is 192 bytes, the size of one sub-PB is 16 bytes and the size of the management field is 16 bytes. Hence, whereas a shared PB as in FIG. 6 can transmit data of up to 12 services (a group of shared PBs can transmit up to 12 OSU frames), a shared PB divided as in FIG. 15 can transmit data of up to 11 services (a group of shared PBs can transmit up to 11 OSU frames).
  • The management field may be used for operation, administration and maintenance (OAM). OAM may include tandem connection monitoring (TCM).
  • The OTN device can perform periodic data frame processing with N OTN frames as the target period. For each target period, the OTN device can therefore order the N OTN frames. Each of the N OTN frames may carry a sequence identifier that identifies the position of that OTN frame within the target period. The sequence identifier may be located in the aforementioned overhead field or overhead PB.
  • To save transmission resources, the sequence identifier can reuse the OPU multi-frame indicator (OMFI). For example, the transmission rate of OPU4 is approximately equal to 84 times that of OPU0, so in an OPU4 OTN frame the OMFI conventionally cycles through 1, 2, 3, ..., 84. Assuming N is equal to 3 in this embodiment, the OTN device instead lets the OMFI cycle through 1 to 252, and the remainder of dividing the OMFI value by 3 is used as the sequence identifier. For example, when the OMFI is equal to 4, the current OTN frame is the first OTN frame in the target period; when the OMFI is equal to 9, the current OTN frame is the third OTN frame in the target period. The integer quotient of dividing the OMFI value by 3, plus 1, is used as the original OMFI; for example, when the OMFI is equal to 4 the quotient is 1, and when the OMFI is equal to 9 the quotient is 3 (a sketch follows this item).
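A compact way to read this is that the remainder of OMFI modulo N gives the position within the target period, with a remainder of 0 standing for the last (N-th) frame, while the integer quotient recovers the coarse multi-frame count. The sketch below is an illustrative interpretation of the two worked examples above for N = 3; it is an assumption about the intended arithmetic, not normative text.

```python
# Illustrative interpretation of the OMFI-based sequence identifier for N = 3.
N = 3

def position_in_target_period(omfi: int, n: int = N) -> int:
    """Position (1..n) of the current OTN frame within the target period."""
    remainder = omfi % n
    return remainder if remainder != 0 else n   # remainder 0 -> last frame of the period

assert position_in_target_period(4) == 1   # OMFI = 4 -> first frame, as in the text
assert position_in_target_period(9) == 3   # OMFI = 9 -> third frame, as in the text
```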
  • The OTN device can process data according to the foregoing data frame processing method to obtain N data frames and send the N data frames to a receiving device. The receiving device may be another OTN device or a client device. The receiving device can then map the N data frames into multiple service frames by using the data frame processing method described below.
  • FIG. 16 is a second schematic flowchart of a data frame processing method provided by an embodiment of the present application. As shown in FIG. 16, the method for processing a data frame includes the following steps.
  • In step 1601, the receiving device acquires N data frames. The payload area of the N data frames includes M groups of PBs, where M and N are integers greater than 0. Each group of PBs includes R×C PBs, where R and C are integers greater than 1, and each PB is S1 bytes in size. The size of the payload area occupied by PBs in each of the N data frames is S2 bytes, and M×R×C×S1 = N×S2.
  • Each group of PBs includes C1 groups of shared PBs, where C1 is a positive integer less than or equal to C. Each group of shared PBs includes R1 shared PBs, where R1 is a positive integer less than or equal to R. Each of the R1 shared PBs includes multiple data of multiple services; the number of the multiple data is greater than 1 and less than or equal to R.
  • For descriptions of the N data frames and the M groups of PBs, reference may be made to FIG. 3 to FIG. 14 above. For ease of description, the processing method is described below by taking the P frame of FIG. 5 to FIG. 7 as an example.
  • In step 1602, the receiving device maps the M groups of PBs in the payload area of the N data frames into multiple service frames. The multiple service frames can be multiple OSU frames. In FIG. 6, the sizes of a PB and of an OSU frame are both 192 bytes.
  • In the P frame of FIG. 5, there are 6 groups of exclusive PBs. Each of the 6 groups of exclusive PBs carries data of the same service, and each exclusive PB in a group corresponds to one OSU frame. Therefore, when the OTN device allocates the 6 groups of exclusive PBs to 6 exclusive services, the receiving device can obtain 6 groups of OSU frames of the 6 services from the exclusive PBs of one P frame; the 6 services correspond one-to-one to the 6 groups of OSU frames, and each service corresponds to 12 OSU frames.
  • The P frame also includes 4 groups of shared PBs. Each of the 4 groups of shared PBs carries 12 OSU frames of 12 services, and the data of each OSU frame is evenly distributed among the 12 shared PBs of the group. A shared PB includes 12 sub-PBs, each sub-PB carries one piece of data, and the size of one piece of data is 16 bytes. Therefore, from one group of shared PBs the receiving device obtains 12×12 pieces of data, combines them into the 12 OSU frames of the 12 services, and, through the 4 groups of shared PBs, obtains 48 OSU frames of 48 services.
  • In total, from one P frame the receiving device can obtain 72 OSU frames of the 6 exclusive services and 48 OSU frames of the 48 shared services. One target period includes M P frames; therefore, over M P frames the receiving device can obtain 72×M OSU frames of the 6 exclusive services (each exclusive service includes 12×M OSU frames) and 48×M OSU frames of the 48 shared services (each shared service includes M OSU frames). A reassembly sketch follows this item.
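At the receiver, recovering one OSU frame from a group of shared PBs is the inverse of the interleaving described earlier: take the same sub-PB index from each of the 12 shared PBs and concatenate the 12 sixteen-byte pieces in PB order. The sketch below illustrates this for the FIG. 5 to FIG. 7 layout; the function name and data layout are illustrative assumptions only.

```python
# Illustrative reassembly of OSU frames from one group of 12 shared PBs (FIG. 5 - FIG. 7 layout).
def reassemble_osu_frames(shared_pbs: list) -> list:
    """shared_pbs: 12 byte strings of 192 bytes each, each split into 12 sub-PBs of 16 bytes.

    Returns the 12 recovered OSU frames (192 bytes each), one per sub-PB index.
    """
    assert len(shared_pbs) == 12 and all(len(pb) == 192 for pb in shared_pbs)
    frames = []
    for j in range(12):                              # sub-PB index = service index
        pieces = [pb[16 * j: 16 * (j + 1)] for pb in shared_pbs]
        frames.append(b"".join(pieces))              # 12 x 16 bytes = one 192-byte OSU frame
    return frames

# With 4 groups of shared PBs per P frame, one P frame yields 4 x 12 = 48 shared-service OSU frames.
```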
  • FIG. 17 is a schematic structural diagram of a data frame processing device provided by an embodiment of the present application. As shown in FIG. 17, a data frame processing device 1700 includes a processor 1701 and a transceiver 1702. The data frame processing device 1700 may be the aforementioned OTN device or the receiving device.
  • When the processing device 1700 is the OTN device, the processor 1701 is configured to execute the method steps of FIG. 4 to obtain N data frames; specifically, the processing device 1700 may complete those method steps through an integrated logic circuit of hardware in the processor 1701 or through instructions in the form of software. The processor 1701 is configured to send the N data frames to the transceiver 1702, and the transceiver 1702 is configured to send the N data frames to the receiving device.
  • When the processing device 1700 is the receiving device, the transceiver 1702 is configured to receive the N data frames from the OTN device, and the processor 1701 is configured to execute the method steps of FIG. 16 to obtain the multiple service frames; specifically, the processing device 1700 may complete those method steps through an integrated logic circuit of hardware in the processor 1701 or through instructions in the form of software.
  • In other embodiments, the processing device 1700 may further include a memory 1703. The memory 1703 may be a non-volatile memory, such as a hard disk drive (HDD), or a volatile memory, such as a random-access memory (RAM). The memory 1703 may also be any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer, but is not limited thereto.
  • The memory 1703 may be used to store the N data frames or the multiple service frames. The memory 1703 may also be used to store instructions so that the processor 1701 can perform the steps mentioned above in FIG. 4 or FIG. 16, or to store other instructions that configure parameters of the processor 1701 to implement corresponding functions.
  • In the hardware structure of the network device described in FIG. 2, the processor 1701 and the memory 1703 may be located in a tributary board, or in a single board that integrates tributary and line functions. Alternatively, the network device includes multiple processors 1701 and multiple memories 1703, where the multiple processors 1701 are located on tributary boards and the multiple memories 1703 are located on line boards; the tributary boards and line boards cooperate to complete the foregoing method steps.
  • The apparatus of FIG. 17 may also be used to execute the method steps mentioned in the variants of, or alternatives to, the embodiments shown in the foregoing drawings; details are not described herein again.
  • The processor 1701 in the embodiments of this application may be a general-purpose processor, a digital signal processor, an application-specific integrated circuit, a field programmable gate array or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component, and may implement or execute the methods, steps and logical block diagrams disclosed in the embodiments of this application. A general-purpose processor may be a microprocessor or any conventional processor. The steps of the methods disclosed in connection with the embodiments of this application may be directly implemented by a hardware processor, or implemented by a combination of hardware and software units in the processor.
  • The program code executed by the processor 1701 to implement the above methods may be stored in the memory 1703, and the memory 1703 is coupled to the processor 1701. The coupling in the embodiments of this application is an indirect coupling or a communication connection between devices, units or modules, which may be electrical, mechanical or in other forms, and is used for information exchange between devices, units or modules. The processor 1701 may cooperate with the memory 1703.
  • Based on the above embodiments, the embodiments of this application further provide a computer-readable storage medium. A software program is stored in the storage medium, and when the software program is read and executed by one or more processors, the method provided by any one or more of the above embodiments can be implemented. The computer-readable storage medium may include a USB flash drive, a removable hard disk, a read-only memory, a random access memory, a magnetic disk, an optical disc or other media capable of storing program code.
  • The embodiments of this application further provide a chip. The chip includes a processor for implementing the functions involved in any one or more of the above embodiments, for example acquiring or processing the service frames or data frames involved in the above methods. Optionally, the chip further includes a memory for the program instructions and data necessary for the processor. The chip may consist of a chip alone, or may include a chip and other discrete devices.
  • These computer program instructions may also be stored in a computer-readable memory capable of directing a computer or other programmable data processing apparatus to operate in a specific manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including an instruction apparatus, where the instruction apparatus implements the functions specified in one or more procedures of the flowchart and/or one or more blocks of the block diagram.

Abstract

This application provides a data frame processing method, applied to the field of optical transport networks. The method includes the following steps: acquiring multiple service data; mapping the multiple service data into multiple service frames respectively; and mapping the multiple service frames to M groups of PBs in the payload areas of N data frames, where M and N are integers greater than 0, each group of PBs includes R×C PBs, R and C are integers greater than 1, each PB is S1 bytes in size, the size of each payload area of the N data frames occupied by PBs is S2 bytes, and M×R×C×S1 = N×S2. Each group of PBs includes C1 groups of shared PBs, each group of shared PBs includes R1 shared PBs carrying service data, and each of the R1 shared PBs includes multiple data of multiple services. In this application, by letting multiple services share one PB, the processing delay is reduced or the transmission efficiency is improved.

Description

数据帧的处理方法和装置
本申请要求于2022年2月11日提交中国国家知识产权局、申请号为202210130541.0、申请名称为“数据帧的处理方法和装置”的中国专利申请的优先权,以及要求于2022年4月22日提交中国国家知识产权局、申请号为202210429110.4、申请名称为“数据帧的处理方法和装置”的中国专利申请的优先权,其全部内容通过引用结合在本申请中。
技术领域
本申请涉及光传送网络领域,尤其涉及数据帧的处理方法和装置。
背景技术
光传送网(optical transport network,OTN)作为一种骨干承载网络的核心技术,包括多种速率的光承载容器。例如,光数据单元0(optical data unit0,ODU0)帧为当前OTN技术的速率最小的承载容器,其速率约为1.25吉比特每秒(gigabit per second,Gbps),用于承载1Gbps的以太网业务数据。为提升承载效率,当前OTN的光承载容器采用时分复用技术。具体地,通过将一个高速率的承载容器划分为多个固定大小的净荷块(payload block,PB),用于实现多业务承载。
针对低速率业务的数据,使用PB传送时需要等待一个传送周期或者需要进行填充才能进行发送,这增加了传输时延或降低了传输效率。
发明内容
本申请提供了一种数据帧的处理方法和装置,通过让多个业务共享一个PB,可以降低处理时延、或提高传输效率。
本申请第一方面提供了一种数据帧的处理方法。数据帧的处理方法可以应用于数据帧的处理设备。数据帧的处理设备可以是OTN设备或城域传送网(metro transport network,MTN)设备等。后续将以OTN设备为例进行描述。数据帧的处理方法包括以下步骤:OTN设备获取多个业务数据。OTN设备将多个业务数据分别映射到多个业务帧中。业务帧可以是OSU帧或其它类似OSU帧结构的数据帧。OTN设备将多个业务帧映射到N个数据帧的净荷区的M组净荷块(payload block,PB)。其中,每组PB也可以称为一个P帧。数据帧可以为OTN帧、灵活以太网(flexible ethernet,FlexE)帧或MTN帧。M和N为大于0的整数。每组PB包括R×C个PB。R和C为大于1的整数。每个PB的大小为S1字节。N个数据帧中的每一个净荷区被PB占用的大小为S2字节。M×R×C×S1=N×S2。每组PB包括C1组共享PB。C1为小于或等于C的正整数。每组共享PB包括R1个共享PB。R1为小于或等于R的正整数。R1个共享PB中的每个共享PB用于承载多个业务的数据。多个业务的数量大于1。多个业务的数量小于或等于R。发送所述N个数据帧。
在本申请中,通过让多个业务共享一个PB,可以提前凑够1个PB,从而降低处理时延、或提高传输效率。并且,M个P帧的大小和N个S2的大小相同。因此,当OTN设备以N个数据帧为目标周期进行周期性的数据处理时,则第1个数据帧和第N+1个数据帧中的PB划分可以相同。第1个数据帧和第N+1个数据帧中同一位置的PB携带同一组业务的数据。 OTN设备可以根据这个特征校验数据帧的准确性。因此,本申请可以提高处理数据帧的可靠性。
在第一方面的一种可选方式中,每组PB还包括C2组独享PB。C1与C2的和等于C。每组独享PB包括R个独享PB。R个独享PB属于同一业务的数据。通过将一个P帧划分为C2组独享PB和C1组共享PB,可以提高业务传输的灵活性。
在第一方面的一种可选方式中,M组PB包括P1组PB和P2组PB。P1和P2为大于0的整数。P1与P2的和等于M。P1组PB中的每组PB包括C3组开销PB、C4组独享PB和C1组共享PB。C4组独享PB中的每组独享PB包括R个独享PB。每组独享PB用于承载属于同一业务的数据。C3组开销PB中的每组开销PB包括R个开销PB。R个开销PB用于承载多个业务数据的相关信息。C3和C4为大于0的整数。C3、C4与C1的和等于C。P2组PB中的每组PB还包括C2组独享PB。C1与C2的和等于C。每组独享PB包括R个独享PB。通过在P1组PB中增加C1组开销PB,可以将多个业务数据的相关信息随P帧高效、快速地传递到中间交换节点和/或宿端节点。因此,本申请实施例可以提高业务传输的可靠性。
在第一方面的一种可选方式中,T=(P1×C3)÷(M×C)。T的取值范围在0.001至0.1之间。T可以等于0.001或0.1。通过控制开销PB列的数量,有助于在传递多个业务数据的相关信息的基础上,减少开销PB的数量,从而提高传输效率。
在第一方面的一种可选方式中,多个数据中的每个数据的传输速率小于11兆比特每秒(million bits per second,Mbps)。和/或,R个独享PB中的每个独享PB的传输速率大于100Mbps。
在第一方面的一种可选方式中,K=M×R×C×S1,K1=R×C×S1。K为K1和S2的最小公倍数。当K为K1和S2的最小公倍数时,N和M的数值最小。此时,目标周期最小。其中,目标周期越小,OTN设备可以在更短的时间内调整和增删所传送的业务。因此,本申请可以提高处理数据帧的动态调整能力。
在第一方面的一种可选方式中,N个数据帧包括共享标识。共享标识用于标记C1组共享PB。接收端可以接收N个数据帧。接收端可以根据共享标识以不同的方式处理独享PB和共享PB。例如,对于独享PB,接收端可以直接将一个独享PB映射为一个光业务单元(optical service unit,OSU)帧。对于共享PB,接收端处理剩余的R1-1个共享PB。接收端对R1个共享PB中的数据进行组合。接收端将组合后的数据映射为多个业务的多个OSU帧。因此,通过增加共享标识,可以提高传输数据帧的可靠性。
在第一方面的一种可选方式中,N个数据帧为OTN帧。S2为4×3808,R1等于R。R为12。C为10。
在第一方面的一种可选方式中,每个PB的大小S1为192。M为119。N为180。
在第一方面的一种可选方式中,R为17的整数倍。C为7。
在第一方面的一种可选方式中,R为17。每个PB的大小S1为192。M为2。N为3。
在第一方面的一种可选方式中,R为34。每个PB的大小S1为192。M为1。N为3。
在第一方面的一种可选方式中,R为68。每个PB的大小S1为192。M为1。N为6。
在第一方面的一种可选方式中,N个数据帧为OTN帧。每个净荷区的大小为4×3808。S2为15168字节。每个净荷区还包括64字节的开销字段。其中,通过共享PB,会提高对PB管理的难度。因此,通过增加开销字段可以提高处理数据帧的可靠性。
在第一方面的一种可选方式中,R1等于所述R。R为12。C为10。
在第一方面的一种可选方式中,每个PB的大小S1为192。M为79。N为120。
在第一方面的一种可选方式中,开销字段包括多个业务的标识。共享PB中携带有多个业务的多个数据。因此,OTN设备可以通过多个业务的标识关联多个数据,进而提高处理数据帧的可靠性。
在第一方面的一种可选方式中,S=S1÷R。S等于8。当S等于8时,一个子PB的大小为64比特。此时,一个子PB的大小等于以太业务传输中64b/66b编码中的数据块的大小。因此,本申请中提供的P帧适配以太业务传输,从而可以降低业务时延,并降低处理的复杂度。
在第一方面的一种可选方式中,S2为4×3808。S1等于192。R1等于R。R为24。C为12。
在第一方面的一种可选方式中,S2为4×3808。S1等于192。R1等于R。R为24。C为10。
在第一方面的一种可选方式中,S2为4×3808。S1等于240。R1等于R。R为30。C为12。
在第一方面的一种可选方式中,R1和R的差值为X。每组共享PB还包括X个开销PB。X为大于0的整数。其中,通过让多个业务共享一个PB,会提高对PB管理的难度。因此,通过将部分PB作为开销PB可以提高处理数据帧的可靠性。
在第一方面的一种可选方式中,R1个共享PB中的每个共享PB还包括管理字段。管理字段用于操作、管理和维护OAM。其中,通过让多个业务共享一个共享PB,会提高对共享PB管理的难度。因此,通过在共享PB内增加管理字段可以提高处理数据帧的可靠性。
在第一方面的一种可选方式中,在每个共享PB中,管理字段的大小和多个业务的数据中的每个数据的大小相同。其中,通过限定管理字段的大小和每个数据的大小相同,可以方便OTN设备管理共享PB。
本申请第二方面提供了一种数据帧的处理装置。数据帧的处理装置包括处理器和收发器。处理器用于执行前述第一方面或第一方面中任意一种实施方式所述的方法,以得到N个数据帧。收发器用于发送N个数据帧。
本申请第三方面提供了一种数据帧的处理方法。数据帧的处理方法可以应用于数据帧的处理设备。数据帧的处理设备可以是OTN设备或其它设备。后续将以OTN设备为例进行描述。数据帧的处理方法包括以下步骤:OTN设备获取N个数据帧。N个数据帧的净荷区包括M组PB。M和N为大于0的整数。每组PB包括R×C个PB。R和C为大于1的整数。每个PB的大小为S1字节。N个OPU帧中的每一个净荷区被PB占用的大小为S2字节。M×R×C×S1=N×S2。每组PB包括C1组共享PB。C1为小于或等于C的正整数。每组共享PB包括R1个共享PB。R1为小于或等于R的正整数。R1个共享PB中的每个共享PB用于承载多个业务的数据。多个业务的数量大于1。多个业务的数量小于或等于R。OTN设备将N个数据帧中的M组PB映射到多个业务帧中。
本申请第四方面提供了一种数据帧的处理装置。数据帧的处理装置包括处理器和收发器。收发器用于接收N个数据帧。处理器用于执行前述第三方面所述的方法,以得到多个业务帧。
本申请第五方面提供了一种计算机存储介质,其特征在于,所述计算机存储介质中存储有指令,所述指令在计算机上执行时,使得所述计算机执行如第一方面或第一方面任意一种实施方式所述的方法;或使得所述计算机执行如第三方面所述的方法。
本申请第六方面提供了一种计算机程序产品,其特征在于,所述计算机程序产品在计算机上执行时,使得所述计算机执行如第一方面或第一方面任意一种实施方式所述的方法;或 使得所述计算机执行如第三方面所述的方法。
附图说明
图1为本申请提供的OTN的结构示意图;
图2为本申请提供的OTN设备的结构示意图;
图3为本申请提供的OSU帧映射到OTN帧的示意图;
图4为本申请实施例提供的数据帧的处理方法的第一个流程示意图;
图5为本申请实施例提供的P帧的第一个结构示意图;
图6为本申请实施例提供的独享PB和共享PB的第一个结构示意图;
图7为本申请实施例提供的OSU帧映射到子PB的第一个结构示意图;
图8为本申请实施例提供的P帧的第二个结构示意图;
图9为本申请实施例提供的独享PB和共享PB的第二个结构示意图;
图10为本申请实施例提供的OSU帧映射为子PB的第二个结构示意图;
图11为本申请实施例提供的P帧的第三个结构示意图;
图12为本申请实施例提供的净荷区的结构示意图;
图13为本申请实施例提供的P帧的第四个结构示意图;
图14为本申请实施例提供的包括开销PB列的P帧的结构示意图;
图15为本申请实施例提供的共享PB的结构示意图;
图16为本申请实施例提供的数据帧的处理方法的第二个流程示意图;
图17为本申请实施例提供数据帧的处理设备的结构示意图。
具体实施方式
首先,对本申请中的部分用语进行解释说明,以便于本领域技术人员理解。
1)、多个指两个或两个以上。“和/或”描述关联对象的关联关系,可以存在三种关系。例如,A和/或B可以表示:单独存在A,同时存在A和B,单独存在B这三种情况。另外,在本申请的描述中,“第一”、“第二”、“独享”、“共享”等词汇仅用于区分描述的目的,而不能理解为指示或暗示相对重要性,也不能理解为指示或暗示顺序。“独享”和“共享”等区分术语也可以替代为第一或第二。
2)、本申请提到的A映射到B中指的是将A封装进B中。例如,将光业务单元(optical service unit,OSU)帧映射到OTN帧中指的是将OSU帧或者OSU信号封装到OTN帧中。
3)、除非特殊说明,一个实施例中针对一些技术特征的具体描述也可以应用于解释其他实施例提及对应的技术特征。例如,一个实施例中针对净荷块包含的开销和含义也可以应用于其他实施例中提及的净荷块。又如,针对光传送网帧的具体举例和说明等可以应用到不同的具体实施例中提及到的光传送网帧或者用于替换光传送网帧的具体示例。此外,为了更加明显地体现不同实施例中的组件的关系,本申请采用相同或相似的附图编号来表示不同实施例中功能相同或相似的组件或方法步骤。
本申请实施例适用于光传送网或城域传送网等光网络。光传送网包括OTN或灵活以太网(flexible ethernet,FlexE)。在本申请后续的描述中,将以OTN为例进行描述。一个OTN通常由多个OTN设备通过光纤连接而成,可以根据具体需要组成如线型、环形和网状等不同的拓扑类型。图1为本申请提供的OTN的结构示意图。如图1所示,OTN 100由8个OTN设备101组成,即OTN设备A-H。其中,102指示光纤,用于连接两个设备。103指示客户业 务接口,用于接收或发送客户业务数据。如图1所示,OTN 100用于为客户设备1-3传输业务数据。客户设备通过客户业务接口跟OTN的设备相连。例如,图1中,客户设备1-3分别和OTN设备A、H和F相连。
根据实际的需要,一个OTN设备可能具备不同的功能。一般地来说,OTN设备分为光层设备、电层设备以及光电混合设备。光层设备指的是能够处理光层信号的设备,例如:光放大器(optical amplifier,OA)、光分插复用器(optical add-drop multiplexer,OADM)。OA也可被称为光线路放大器(optical line amplifier,OLA),主要用于对光信号进行放大,以支持在保证光信号的特定性能的前提下传输更远的距离。OADM用于对光信号进行空间的变换,从而使其可以从不同的输出端口(有时也称为方向)输出。电层设备指的是能够处理电层信号的设备,例如:能够处理OTN信号的设备。光电混合设备指的是具备处理光层信号和电层信号能力的设备。需要说明的是,根据具体的集成需要,一个OTN设备可以集合多种不同的功能。本申请提供的技术方案适用于不同形态和集成度的包含电层功能的OTN设备。
需要说明的是,本申请实施例中的光传送设备使用的数据帧结构可以是OTN帧。OTN帧用于承载各种业务数据,并提供丰富的管理和监控功能。OTN帧可以是光数据单元帧(optical data unit k,ODUk)、ODUCn、ODUflex,或者光通道传输单元k(optical transport unit k,OTUk),OTUCn,或者灵活OTN(FlexO)帧等。其中,ODU帧和OTU帧区别在于,OTU帧包括ODU帧和OTU开销。k代表了不同的速率等级。例如,k=1表示2.5Gbps,k=4表示100Gbps。Cn表示可变速率,具体为100Gbps的正整数倍的速率。除非特殊的说明,ODU帧指的是ODUk、ODUCn或ODUflex的任意一种,OTU帧指的是OTUk、OTUCn或者FlexO的任意一种。还需要指出的是,随着光传送网技术的发展,可能定义出新的类型的OTN帧,也适用于本申请。此外,本申请揭示的方法也可以适用于FlexE帧等其他光传送网帧。
图2为本申请提供的OTN设备的结构示意图。OTN设备200可以是图1中的OTN设备A-H中的任一设备。如图2所示,OTN设备200包括支路板201、交叉板202、线路板203、光层处理单板(图中未示出)以及系统控制和通信类单板204。
支路板201、交叉板202和线路板203用于处理电层信号。其中,支路板201用于实现各种客户业务的接收和发送,例如SDH业务、分组业务、以太网业务和/或前传业务等。更进一步地,支路板201可以划分为客户侧光收发模块和信号处理器。其中,客户侧光收发模块也可以称为光收发器,用于接收和/或发送业务数据。信号处理器用于实现对业务数据到数据帧的映射和解映射处理。交叉板202用于实现数据帧的交换,完成一种或多种类型的数据帧的交换。线路板203主要实现线路侧数据帧的处理。具体地,线路板203可以划分为线路侧光模块和信号处理器。其中,线路侧光模块可以称为光收发器,用于接收和/或发送数据帧。信号处理器用于实现对线路侧的数据帧的复用和解复用,或者映射和解映射处理。系统控制和通信类单板204用于实现系统控制。具体地,可以从不同的单板收集信息,或将控制指令发送到对应的单板上去。需要说明的是,除非特殊说明,具体的组件(例如信号处理器)可以是一个或多个,本申请不做限制。还需要说明的是,对设备包含的单板类型以及单板的功能设计和数量,本申请不做任何限制。需要说明的是,在具体的实现中,上述两个单板也可能设计为一个单板。此外,网络设备还可能包括备用电源、用于散热的风扇等。
应理解,图2只是本申请提供的OTN设备的一个示例。根据具体的需要,OTN设备包含的单板类型和数量可能不相同。例如,作为核心节点的OTN设备没有支路板201。又如,作为边缘节点的OTN设备有多个支路板201,或者没有光交叉板202。再如,只支持电层功能的OTN设备可能没有光层处理单板。
根据前面的描述可知,本申请以OTN为例对本申请中提供的方法进行描述。此时,业务帧可以OSU帧。数据帧可以为OTN帧或OPU帧。下面对OTN设备将OSU帧映射到OTN帧的过程进行示例性描述。
图3为本申请提供的OSU帧映射到OTN帧的示意图。如图3所示,OTN帧302为一种光传送网帧的示意。OTN帧302为4行多列的结构。OTN帧302包括开销区、净荷区和前向纠错(forward error correction,FEC)区域。其中,净荷区划分为多个净荷块(payload block,PB)。每个PB占据净荷区中固定的一定长度(也可以称为大小)的位置,例如192或128个字节。应理解,OTN帧302仅是一个示例。其他变形的OTN帧也适用于本申请。例如,不包含FEC区域的OTN帧。又如,行数和列数跟OTN帧302不同的帧结构。应理解,PB也可以称作时隙、时隙块或时间片等。本申请对其名称不做约束。应理解,在实际应用中,OTN帧的净荷区可能无法被划分为整数个PB。此时,某些PB的一部分在一个OTN帧的净荷区,PB的另一部分在另一个OTN帧的净荷区。
OSU帧301,如图3所示,包括开销区和净荷区。其中,OSU帧301的开销区用于承载开销信息。开销信息可以包括业务标识(service identifier,SID)、路径踪迹指示(trail Trace identifier,TTI)或比特间插奇偶校验(bit-interleaved parity,BIP)等。OSU帧301的净荷区用于承载业务数据。一个OSU帧的速率定义为基准速率的整数倍。其中,基准速率可以为2.6Mbps、5.2Mbps或10.4Mbps或前述这些数值的倍数等。应理解,图3所示的OSU帧结构仅是一个示例。在其他具体实现中,OSU帧还可以为包括开销子帧和净荷子帧的数据结构。对此,本申请不做限定。
如图3所示,OSU帧映射到OTN帧的净荷区。具体地,OSU帧映射到OTN帧的PB中。在一种可能的实现中,一个OSU帧映射到一个PB中。在另外一种可能的实现中,一个OSU帧映射到多个PB中。对此,本申请不做限定。为简化说明,后续的实施例以一个OSU帧映射到一个PB中为例。应理解,后续的实施例同样适用于一个OSU帧映射到多个PB中的情况。针对后者的技术方案变形,也属于本申请保护的范围。
为了简化和高效承载OSU帧,将OTN帧中连续的多个PB定义为一个传送周期。以传送周期为基本单位,来为OSU帧分配PB块。例如,假设OSU帧和PB的大小和速率相同,承载了同一业务的业务数据的10个OSU帧可以占据包括20个PB的传送周期中的编号为0-9的PB。为简化描述,将承载了同一业务数据的OSU帧称为OSU信号。一个OSU信号是携带了一个业务数据的比特流,该比特流的帧格式是OSU帧的帧格式。一个OSU信号可以包括一个或者多个OSU帧。
针对低速率业务的数据,使用PB传送时需要等待一个传送周期或者需要进行填充才能进行发送,这增加了传输时延或降低了传输效率。并且,对于不同OTN帧,PB的位置动态变化。因此,OTN帧的管理和维护存在较大的复杂性,从而导致处理OTN帧的可靠性较低。
为此,本申请提供了一种数据帧的处理方法。数据帧的处理方法可以应用于图1中所示的OTN设备。图4为本申请实施例提供的数据帧的处理方法的第一个流程示意图。如图4所示,数据帧的处理方法包括以下步骤。
在步骤401中,OTN设备获取多个业务数据。业务数据指的是光传送网络可以承载的业务。例如,可以是以太网业务、分组业务、无线回传业务等。
在步骤402中,OTN设备将多个业务数据分别映射到多个业务帧。业务帧可以是OSU帧或其它帧结构和OSU帧类似的数据帧。在本申请实施例中,将以业务帧是OSU帧为例进行描述。
在步骤403中,OTN设备将多个业务帧映射到N个数据帧的净荷区的M组净荷块PB。数据帧可以为OTN帧、FlexE帧或MTN帧等。在本申请实施例中,将以数据帧是OTN帧为例进行描述。OTN设备将多个OSU帧映射到N个OTN帧的净荷区。每个OTN帧包括一个净荷区。N个OTN帧包括N个净荷区。关于OTN帧和净荷区的描述,可以参考前述图3中的相关描述。
应理解,当数据帧承载了业务之后,OTN设备会将这些数据帧发送出去,以完成业务数据的传输。
N个净荷区包括M组PB。M和N为大于0的整数。每组PB包括R×C个PB。R×C可以是指R行C列。每组PB可以作为OTN设备的一个传送周期。此时,每组PB也可以称为一个P帧。R和C为大于1的整数。R和C可以是多种组合。例如,表一为本申请实施例中提供的R和C的几种组合示例。应理解,在实际应用中,本领域技术人员可以根据需求对R和C进行组合。
表一
在每组PB中,每个PB的大小为S1字节。S1可以为128、192或240等。N个OTN帧中的每一个净荷区被PB占用的大小为S2字节。例如,在图3中,S2为4×3808。每组PB包括C1组共享PB。C1为小于或等于C的正整数。每组共享PB包括R1个共享PB。R1为小于或等于R的正整数。R1个共享PB中的每个共享PB包括多个业务的多个数据。多个数据的数量大于1。多个数据的数量小于或等于R。
在本申请实施例中,通过让多个业务共享一个PB,可以降低处理时延、或提高传输效率并且,M×R×C×S1=N×S2,即M个P帧的大小和N个S2的大小相同。因此,当OTN设备以N个OTN帧为目标周期进行周期性的数据帧处理时,则第1个数据帧和第N+1个数据帧中的PB划分可以相同。第1个数据帧和第N+1个数据帧中同一位置的PB携带同一业务的数据。OTN设备可以根据这个特性校验数据帧的准确性。因此,本申请可以提高处理数据帧的可靠性。
根据前面的描述可知,R、C、S2和S1等可以有不同的取值组合。下面对本申请实施例中提供的几种取值组合进行描述。
在第一种示例中,R为12。C为10。S1为192。S2为4×3808。图5为本申请实施例提供的P帧的第一个结构示意图。如图5所示,P帧501包括12×10个PB。每个PB的大小为192字节。此时,K1=R×C×S1=12×10×192=120×192。S2=4×3808=128×119。K=M×120×192=N×128×119。此时,M可以为119的整数倍。N可以为180的整数倍。例如,当M等于119时,N等于180。当M等于238时,N等于360。
在图5中,传送周期为12×10个PB。目标周期为传送周期与M的乘积。当M或N越小时,目标周期越小。其中,目标周期越小,OTN设备可以在更短的时间内调整和增删所传送的业务。因此,本申请实施例揭示的方案能够提高处理数据帧的动态调整能力。目标周期越小,OTN设备也可以在更短的时间内发现OTN帧的异常。OTN帧的异常包括:第1个OTN帧和第N+1个OTN帧中的PB划分不相同。和/或,第1个OTN帧和第N+1个OTN帧中同一位置的PB携带不同业务的数据。因此,为了提高处理数据帧的动态调整能力和可靠性,K可以为K1和S2的最小公倍数。此时,M等于119,N等于180。
应理解,目标周期和传送周期只是为了区分描述。在实际应用中,目标周期也可以称为 第一周期或大传送周期等。目标周期的单位可以为P帧、PB、字节、或OTN帧数等。例如,目标周期可以为M组PB。每组PB又可以称为P帧。因此,目标周期可以为M个P帧。又例如,每组PB包括R×C个PB。因此,目标周期可以为M×R×C个PB。又例如,每个PB可以包括192字节。因此,目标周期可以为M×R×C×192个字节。又例如,目标周期可以为N个OTN帧。
在图5中,P帧包括C1组共享PB。C1等于4。每组共享PB可以对应图5中的一列共享PB。每组共享PB包括R1个共享PB。R1等于12。P帧还包括C2组独享PB。C2的值为6。每组独享PB可以对应图5中的一列独享PB。一列独享PB中的多个独立PB传输相同业务的数据。每个独享PB只传送一个业务的数据。共享PB传输多个业务的多个数据。应理解,独享PB和共享PB只是为了区别描述。在实际应用中,独享PB也可以称为第二PB、或独立PB等。共享PB也可以称为第一PB、或集合PB等。
图6为本申请实施例提供的独享PB和共享PB的第一个结构示意图。如图6所示,独享PB 601可以由一个业务的OSU帧映射得到。独享PB和OSU帧的大小相同。在S1等于192时,独享PB和OSU帧的大小为192字节。共享PB包括多个业务的多个数据。因此,共享PB被划分为多个子PB。例如,在图6中,共享PB 602被划分为12个子PB。12个子PB的序号如图6所示。12个子PB包括子PB 1~12。共享PB 602的大小为192字节。每个子PB的大小为16字节。
OTN设备可以将12个子PB分配给不同的12个业务。12个子PB和12个业务一一对应。OTN设备也可以将12个子PB分配给少于12个的业务。例如,OTN设备将12个子PB分配给10个业务。10个业务中的一个业务被分配了3个子PB。其它9个业务分别被分配了1个子PB。在后续的描述中,将以OTN设备将12个子PB分配给不同的12个业务为例进行描述。应理解,在实际应用中,可能存在某些子PB未被分配。例如,OTN设备将12个子PB分配给了11个业务。每个业务被分配了一个子PB。还剩余一个子PB未被分配。
在一个共享PB中,1个业务只被分配到了1个子PB。1个子PB的大小为16字节。在本申请实施例中,假设一个OSU帧的大小为192字节。因此,OTN设备需要一组共享PB来传输一个OSU帧。一组共享PB包括R1个共享PB。在图5中,R1等于12。OTN设备将一个OSU帧拆分为12个数据。12个数据和R1个共享PB中的R1个子PB一一对应。每个子PB传输16字节的数据。例如,图7为本申请实施例提供的OSU帧映射到子PB的第一个结构示意图。如图7所示,一个OSU帧的大小为192字节。OTN设备将一个OSU帧拆分为12个数据。12个数据和R1个共享PB中的12个子PB 1一一对应。其中,R1个共享PB中的每个共享PB包括一个子PB 1。类似地,OTN设备也可以将另一业务的OSU帧拆分为12个数据。12个数据和R1个共享PB中的12个子PB 2一一对应。
当OTN帧的OPUk类型为OPU0时,在图5中,每个独享PB的传输速率G1=G÷C。其中,G约等于1.23895431Gbps。C等于10。此时,G1约等于123.895Mbps。独享PB的等待延迟T1=S1÷G1。T1约等于12.4微秒。每个共享PB中每个数据的传输速率G2=G÷(C×R)。R等于12。此时,G2约等于10.3246Mbps。每个共享PB的等待延迟也为T1。每个共享PB中的业务的延迟T2=T1×12。T2约等于149微秒。G1与C相关。G2与C、R相关。为了兼容传输速率差距交大的业务,G1可以大于100Mbps。G2可以小于11Mbps。G2也称为每个子PB的传输速率。
OPU1的传输速率约等于OPU0的2倍。因此,当OTN帧的OPUk类型为OPU1时,在一个目标周期内,OTN设备需要大约传输2×N个OTN帧。2×N个OTN帧包括2×M组PB。 此时,OTN设备可以维护2个映射关系。一个映射关系包括N个OTN帧和M组PB的映射关系。一个映射关系包括另外N个OTN帧和另外M组PB的映射关系。类似地,当OTN帧的OPUk类型为OPU2、OPU3、OPU4或OPUflex时,OTN设备可以根据类似的方法维护更多的映射关系。表二为不同OPUk类型时,G1和G2的示例。
表二
在图5中,C2等于6。因此,OTN设备在一个映射关系中可以传输6个运行在G1的业务。C1等于4。在图6中,每组共享PB可以传输12个运行在G2的业务。因此,在一个映射关系中,OTN设备可以传输54个业务。对于OPU4,OTN可以维护84个映射关系。因此,OTN设备可以传输54×84个业务。
在图5中,OTN设备可以按照从左到右、从上到下的顺序将PB映射到OTN帧的净荷区。例如,OTN设备先将图5中第一行第一列的PB映射到净荷区的起始位置。OTN设备再将第一行第二列的PB映射的起始位置的后一个位置。在映射完第一行的PB后,OTN设备开始映射第二行的PB。
在图5中,P帧的第一列为共享PB列。共享PB列的第一个PB可以作为开销PB。开销PB中可以携带一些开销内容。因此,在实际应用中,P帧的第一列可以总是作为共享PB列。
应理解,图5至图7只是本申请实施例中提供的一个或多个示例。在实际应用中,本领域技术人员可以根据需求对上述示例进行适应性的修改。例如,在图5中,C1的值可以为10。此时,P帧中不包括独享PB。又例如,在图6中,OTN设备将一个共享PB分为11个子PB。每个子PB的大小为16字节。剩余的16字节作为管理字段。再例如,在图7中,OTN设备将一个OSU帧拆分为24个数据。其中的12个数据和一组共享PB中的12个子PB 1一一对应。剩余的12个数据和另一组共享PB中的12个子PB 1一一对应。
在第二种示例中,R为17。C为7。S1为192。S2为4×3808。图8为本申请实施例提供的P帧的第二个结构示意图。如图8所示,P帧801包括17×7个PB。17×7个PB为OTN设备的传送周期。每个PB的大小为192字节。此时,K1=R×C×S1=17×7×192=119×192。S2=4×3808=128×119。K=M×119×192=N×128×119。此时,M可以为2的整数倍。N可以为3的整数倍。例如,当M等于2时,N等于3。当M等于4时,N等于6。
在图8中,P帧包括C1组共享PB。C1等于3。每组共享PB可以对应图8中的一列共享PB。P帧的一列PB包括17个PB,即R等于17。一列PB包括一组共享PB和X个开销PB。X为大于0的整数。例如,在图8中,X等于1。一组共享PB包括16个共享PB,即R1等于16。在图8中,P帧包括3个开销PB。3个开销PB和3组共享PB一一对应。开销PB可以用于记录开销PB对应的某组共享PB中的业务的标识。业务的标识可以是复用结构标识(multiplex structure identifier,MSI)。P帧还包括C2组独享PB。C2的值为4。每组独 享PB可以对应图8中的一列独享PB。一列独享PB包括17个独享PB。每个独享PB只传送一个业务的数据。每个共享PB传输多个业务的多个数据。图9为本申请实施例提供的独享PB和共享PB的第二个结构示意图。如图9所示,独享PB 901可以由一个业务的OSU帧映射得到。独享PB和OSU帧的大小相同。在S1等于192时,独享PB和OSU帧的大小为192字节。共享PB包括多个业务的多个数据。因此,共享PB被划分为多个子PB。例如,在图9中,共享PB 902被划分为16个子PB。16个子PB的序号如图9所示。16个子PB包括子PB 1~16。共享PB 902的大小为192字节。每个子PB的大小为12字节。
OTN设备可以将16个子PB分配给不同的16个业务。16个子PB和16个业务一一对应。此时,1个业务只被分配到了1个子PB。1个子PB的大小为12字节。在本申请实施例中,假设一个OSU帧的大小为192字节。因此,OTN设备需要一组共享PB来传输一个OSU帧。一组共享PB包括R1个共享PB。在图8中,R1等于16。OTN设备将一个OSU帧拆分为16个数据。16个数据和R1个共享PB一一对应。每个共享PB包括12字节的数据。例如,图10为本申请实施例提供的OUS帧映射为子PB的第二个结构示意图。如图10所示,一个OSU帧的大小为192字节。OTN设备将一个OUS帧拆分为16个数据。16个数据和R1个共享PB中的16个子PB 1一一对应。其中,R1个共享PB中的每个共享PB包括一个子PB 1。类似地,OTN设备也可以将另一业务的OSU帧拆分为16个数据。16个数据和R1个共享PB中的16个子PB 2一一对应。
在实际应用中,OTN设备可以将16个子PB中的一个子PB作为管理字段。例如,图11为本申请实施例提供P帧的第三个结构示意图。如图11所示,在图8的基础上,OTN设备将每个共享PB的第一个子PB作为管理字段。下面以P帧801中的第一列PB为例进行描述。第一列PB包括1个开销PB和16个共享PB。第一列PB中的第一个PB为开销PB。OTN设备将16个共享PB中的每个PB划分为16个子PB。如图11所示,第一个共享PB的16个子PB分别为1-1、1-2、1-3、……、1-16。第二个共享PB的16个子PB分别为2-1、2-2、2-3、……、2-16。依次类推,第16个共享PB的16个子PB分别为16-1、16-2、16-3、……、16-16。每个子PB的大小为12字节。
OTN设备将每个共享PB中的第一个子PB作为管理字段。管理字段包括1-1、2-1、3-1、……、16-1。OTN设备可以将每个共享PB中剩余的15个子PB分配给不同的15个业务。每个业务对应一个子PB。每个子PB的大小为12字节。每个业务对应一个OSU帧。其中,15个业务对应15个OSU帧。15个OSU帧分别为OSU帧1、OSU帧2、……、OSU帧15。当OSU帧的大小为192字节时,OTN设备需要16个共享PB来传输一个OSU帧。OSU帧1的数据通过16个子PB来传输。16个子PB分别为1-2、2-2、3-2、……、16-2。OSU帧2的数据通过16个子PB来传输。16个子PB分别为1-3、2-3、3-3、……、16-3。依次类推,OSU帧15的数据通过16个子PB来传输。16个子PB分别为1-16、2-16、3-16、……、16-16。
在实际应用中,每个OSU帧可以携带有开OSU开销。例如,在图10中,1-2中可以承载OSU帧1的7字节的开销。1-3中可以承载OSU帧2的7字节的开销。依次类推,1-16中可以承载OSU帧15的7字节的开销。
当OTN帧的OPUk类型为OPU0时,在图8中,每个独享PB的传输速率G1=G÷C。其中,G约等于1.23895431Gbps。C等于7。此时,G1约等于176.993Mbps。独享PB的等待延迟T1=S1÷G1。T1约等于8.7微秒。每个共享PB中每个数据的传输速率G2=G÷(C×R)。R等于17。此时,G2约等于10.4114Mbps。每个共享PB中的业务的延迟T2=T1×17。T2约等于148微秒。每个共享PB的等待延迟也为T1。
在图8中,C2等于4。因此,OTN设备在一个映射关系中可以传输4个运行在G1的业务。C1等于3。在图9中,每组共享PB可以传输16个运行在G2的业务。因此,在一个映射关系中,OTN设备可以传输52个业务。当OTN帧的OPUk类型为OPU1、OPU2、OPU3或OPU4时,OTN设备可以通过多个映射关系来提高传输的业务数量。例如,对于OPU4,OTN设备可以维护84个映射关系。因此,OTN设备可以传输52×84个业务。表三为不同OPUk类型时,G1和G2的示例。
表三
在前面的两个示例中,S2为4×3808。在图3的OTN帧中,OTN帧的净荷区的大小为4×3808。在实际应用中,OTN设备可以在净荷区中划分出一部分字段作为开销字段。例如,图12为本申请实施例提供的净荷区的结构示意图。如图12所示,净荷区包括开销字段1201和子净荷区1202。在其中一种划分方式中,子净荷区1202的大小S2=4×3808-64=15168。开销字段包括64字节。
在第三个示例中,R为12。C为10。S1为192。S2=4×3808-64=15168=192×79。此时,P帧的结构示意图和图5类似。如图5所示,P帧包括12×10个PB。每个PB的大小为192字节。此时,K1=R×C×S1=12×10×192=120×192。S2=192×79。K=M×120×192=N×192×79。此时,M可以为79的整数倍。N可以为120的整数倍。例如,当M等于79时,N等于129。当M等于158时,N等于258。
关于独享PB和共享PB的描述,可以参考图6和图7中的相关描述。当OTN帧的OPUk类型为OPU0时,每个独享PB的传输速率G1=G÷C。其中,在扣除64字节的开销字段后,G约等于1.23374861962Gbps。C等于10。此时,G1约等于123.375Mbps。独享PB的等待延迟T1=S1÷G1。T1约等于12.4微秒。每个共享PB中每个数据的传输速率G2=G÷(C×R)。R等于12。此时,G2约等于10.2812Mbps。每个共享PB的等待延迟也为T1。每个共享PB中的业务的延迟T2=T1×12。T2约等于149微秒。
在第四个示例中,R为8。C为12。S1为192。S2=4×3808-64=15168=192×79。此时,P帧包括8×12个PB。每个PB的大小为192字节。K1=R×C×S1=8×12×192=96×192。S2=192×79。K=M×96×192=N×192×79。M可以为79的整数倍。N可以为96的整数倍。例如,当M等于79时,N等于96。当M等于158时,N等于192。P帧可以包括C1组共享PB。C1为小于或等于12的整数。每组共享PB可以包括8个共享PB,即R1等于8。OTN设备可以将共享PB 6划分为8个子PB。共享PB的大小为192字节。每个子PB的大小为24字节。
当OTN帧的OPUk类型为OPU0时,每个独享PB的传输速率G1=G÷C。其中,在扣除 64字节的开销字段后,G约等于1.23374861962Gbps。C等于12。此时,G1约等于102.812Mbps。独享PB的等待延迟T1=S1÷G1。T1约等于14.9微秒。每个共享PB中每个数据的传输速率G2=G÷(C×R)。R等于8。此时,G2约等于12.8515Mbps。每个共享PB的等待延迟也为T1。每个共享PB中的业务的延迟T2=T1×8。T2约等于120微秒。
在第五个示例中,R为8。C为12。S1为192。S2=4×3808=128×119。此时,P帧包括8×12个PB。每个PB的大小为192字节。K1=R×C×S1=8×12×192=96×192。S2=128×119。K=M×96×192=N×128×119。M可以为119的整数倍。N可以为144的整数倍。P帧可以包括C1组共享PB。C1为小于或等于12的整数。每组共享PB可以包括8个共享PB,即R1等于8。OTN设备可以将共享PB划分为8个子PB。共享PB的大小为192字节。每个子PB的大小为24字节。
当OTN帧的OPUk类型为OPU0时,每个独享PB的传输速率G1=G÷C。其中,G约等于1.23895431Gbps。C等于12。此时,G1约等于103.246Mbps。独享PB的等待延迟T1=S1÷G1。T1约等于14.9微秒。每个共享PB中每个数据的传输速率G2=G÷(C×R)。R等于8。此时,G2约等于12.9508Mbps。每个共享PB的等待延迟也为T1。每个共享PB中的业务的延迟T2=T1×8。T2约等于119微秒。
在前面的示例中,S1等于192。在实际应用中,S1还可以是其它数值。下面进行举例说明。
在第六个示例中,R为10。C为12。S1为240。S2=4×3808=128×119。此时,P帧包括10×12个PB。每个PB的大小为240字节。K1=R×C×S1=10×12×240=120×240。S2=128×119。K=M×120×240=N×128×119。M可以为119的整数倍。N可以为225的整数倍。P帧可以包括C1组共享PB。C1为小于或等于12的整数。每组共享PB可以包括10个共享PB,即R1等于10。OTN设备可以将共享PB划分为10个子PB。共享PB的大小为240字节。每个子PB的大小为24字节。
当OTN帧的OPUk类型为OPU0时,每个独享PB的传输速率G1=G÷C。其中,G约等于1.23895431Gbps。C等于12。此时,G1约等于103.246Mbps。独享PB的等待延迟T1=S1÷G1。T1约等于18.6微秒。每个共享PB中每个数据的传输速率G2=G÷(C×R)。R等于10。此时,G2约等于12.3246Mbps。每个共享PB的等待延迟也为T1。每个共享PB中的业务的延迟T2=T1×10。T2约等于186微秒。
在第七个示例中,R为8。C为12。S1为128。S2=4×3808=128×119。此时,P帧包括8×12个PB。每个PB的大小S1为128字节。K1=R×C×S1=8×12×128=96×128。S2=128×119。K=M×96×128=N×128×119。M可以为119的整数倍。N可以为96的整数倍。P帧可以包括C1组共享PB。C1为小于或等于12的整数。每组共享PB可以包括8个共享PB,即R1等于8。OTN设备可以将共享PB划分为8个子PB。共享PB的大小为128字节。每个子PB的大小为16字节。
当OTN帧的OPUk类型为OPU0时,每个独享PB的传输速率G1=G÷C。其中,G约等于1.23895431Gbps。C等于12。此时,G1约等于103.246Mbps。独享PB的等待延迟T1=S1÷G1。T1约等于9.9微秒。每个共享PB中每个数据的传输速率G2=G÷(C×R)。R等于8。此时,G2约等于12.9058Mbps。每个共享PB的等待延迟也为T1。每个共享PB中的业务的延迟T2=T1×8。T2约等于79微秒。
在第八个示例中,R为10。C为12。S1为240。S2=4×3808-64=15168=192×79。此时,P帧包括10×12个PB。每个PB的大小为240字节。K1=R×C×S1=10×12×240=120×240。 S2=192×79。K=M×120×240=N×192×79。M可以为79的整数倍。N可以为150的整数倍。P帧可以包括C1组共享PB。C1为小于或等于12的整数。每组共享PB可以包括10个共享PB,即R1等于10。OTN设备可以将共享PB划分为10个子PB。共享PB的大小为240字节。每个子PB的大小为24字节。
当OTN帧的OPUk类型为OPU0时,每个独享PB的传输速率G1=G÷C。其中,在扣除64字节的开销字段后,G约等于1.23374861962Gbps。C等于12。此时,G1约等于102.812Mbps。独享PB的等待延迟T1=S1÷G1。T1约等于18.7微秒。每个共享PB中每个数据的传输速率G2=G÷(C×R)。R等于10。此时,G2约等于10.2812Mbps。每个共享PB的等待延迟也为T1。每个共享PB中的业务的延迟T2=T1×10。T2约等于187微秒。
在第九个示例中,R为8。C为12。S1为128。S2=4×3808-64=15168=192×79。此时,P帧包括8×12个PB。每个PB的大小为128字节。K1=R×C×S1=8×12×128=96×128。S2=192×79。K=M×96×128=N×192×79。M可以为79的整数倍。N可以为64的整数倍。P帧可以包括C1组共享PB。C1为小于或等于12的整数。每组共享PB可以包括8个共享PB,即R1等于8。OTN设备可以将共享PB划分为8个子PB。共享PB的大小为128字节。每个子PB的大小为16字节。
当OTN帧的OPUk类型为OPU0时,每个独享PB的传输速率G1=G÷C。其中,在扣除64字节的开销字段后,G约等于1.23374861962Gbps。C等于12。此时,G1约等于102.812Mbps。独享PB的延迟T1=SI÷G1。T1约等于10微秒。每个共享PB中每个数据的传输速率G2=G÷(C×R)。R等于8。此时,G2约等于12.8515Mbps。每个共享PB的等待延迟也为T1。每个共享PB中的业务的延迟T2=T1×8。T2约等于80微秒。
在前面的其中一个示例中,R等于17,C等于7。在实际应用中,在C不变的情况下,R可以为17的整数倍。例如,R可以为34、或68等。下面对此进行分别描述。
在第十个示例中,R为34。C为7。S1为192。S2=4×3808=128×119。此时,P帧包括34×7个PB。每个PB的大小为192字节。K1=R×C×S1=34×7×192=238×192。S2=128×119。K=M×238×192=N×128×119。M可以为1的整数倍。N可以为3的整数倍。P帧可以包括C1组共享PB。C1为小于或等于7的整数。每组共享PB可以包括32个共享PB,即R1等于32。OTN设备可以将共享PB划分为32个子PB。共享PB的大小为192字节。每个子PB的大小为6字节。P帧的每一列PB包括34个PB。一列共享PB包括32个PB。32个PB和2个开销PB组成一列PB。P帧还包括C2组独享PB。每组独享PB可以包括34个独享PB。
当OTN帧的OPUk类型为OPU0时,每个独享PB的传输速率G1=G÷C。G约等于1.23895431Gbps。C等于7。此时,G1约等于176.993Mbps。独享PB的等待延迟T1=S1÷G1。T1约等于8.7微秒。每个共享PB中每个数据的传输速率G2=G÷(C×R)。R等于34。此时,G2约等于5.2Mbps。每个共享PB的等待延迟也为T1。每个共享PB中的业务的延迟T2=T1×34。T2约等于295微秒。
在第十一个示例中,R为68。C为7。S1为192。S2=4×3808=128×119。此时,P帧包括68×7个PB。每个PB的大小为192字节。K1=R×C×S1=68×7×192=476×192。S2=128×119。K=M×476×192=N×128×119。M可以为1的整数倍。N可以为6的整数倍。P帧可以包括C1组共享PB。C1为小于或等于7的整数。每组共享PB可以包括64个共享PB,即R1等于64。OTN设备可以将共享PB划分为64个子PB。共享PB的大小为192字节。每个子PB的大小为3字节。P帧的每一列PB包括68个PB。一列共享PB包括64个PB。64个PB和4个开销PB组成一列PB。P帧还包括C2组独享PB。每组独享PB可以包括68个独享PB。
当OTN帧的OPUk类型为OPU0时,每个独享PB的传输速率G1=G÷C。G约等于1.23895431Gbps。C等于7。此时,G1约等于176.993Mbps。独享PB的等待延迟T1=S1÷G1。T1约等于8.7微秒。每个共享PB中每个数据的传输速率G2=G÷(C×R)。R等于68。此时,G2约等于2.6Mbps。每个共享PB的等待延迟也为T1。每个共享PB中的业务的延迟T2=T1×68。T2约等于590微秒。
在前述的示例中,共享PB中的每个子PB大小S=S1÷R。S可以为3、6、12、16或24(字节)。在实际应用中,S的值可以等于8(字节)。下面对此提供的几种示例进行描述。
在第十二个示例中,R为24。C为10。S1为192。S2=4×3808=128×119。此时,P帧包括24×10个PB。每个PB的大小为192字节。K1=R×C×S1=24×10×192=240×192。S2=128×119。K=M×240×192=N×128×119。M可以为119的整数倍。N可以为360的整数倍。P帧可以包括C1组共享PB。C1为小于或等于10的整数。每组共享PB可以包括24个共享PB,即R1等于24。OTN设备可以将共享PB划分为24个子PB。共享PB的大小为192字节。每个子PB的大小为8字节。
当OTN帧的OPUk类型为OPU0时,每个独享PB的传输速率G1=G÷C。其中,G约等于1.23895431Gbps。C等于10。此时,G1约等于123.895431Mbps。独享PB的等待延迟T1=S1÷G1(S1的单位需要转换为比特)。T1约等于12.4微秒。每个共享PB中每个数据的传输速率G2=G÷(C×R)。R等于24。此时,G2约等于5.1623Mbps。每个共享PB的等待延迟也为T1。每个共享PB中的业务的延迟T2=T1×24。T2约等于297.6微秒。
在第十三个示例中,R为24。C为12。S1为192。S2=4×3808=128×119。此时,P帧包括24×12个PB。每个PB的大小为192字节。K1=R×C×S1=24×10×192=288×192。S2=128×119。K=M×288×192=N×128×119。M可以为119的整数倍。N可以为432的整数倍。P帧可以包括C1组共享PB。C1为小于或等于12的整数。每组共享PB可以包括24个共享PB,即R1等于24。OTN设备可以将共享PB划分为24个子PB。共享PB的大小为192字节。每个子PB的大小为8字节。
当OTN帧的OPUk类型为OPU0时,每个独享PB的传输速率G1=G÷C。其中,G约等于1.23895431Gbps。C等于12。此时,G1约等于103.246Mbps。独享PB的等待延迟T1=S1÷G1。T1约等于14.88微秒。每个共享PB中每个数据的传输速率G2=G÷(C×R)。R等于24。此时,G2约等于4.3Mbps。每个共享PB的等待延迟也为T1。OPU1的传输速率约等于OPU0的2倍。因此,当OTN帧的OPUk类型为OPU1时,在一个目标周期内,OTN设备需要大约传输2×N个OTN帧。2×N个OTN帧包括2×M组PB。此时,OTN设备可以维护2个映射关系。一个映射关系包括N个OTN帧和M组PB的映射关系。另一个映射关系包括另外N个OTN帧和另外M组PB的映射关系。类似地,当OTN帧的OPUk类型为OPU2、OPU3、OPU4时,OTN设备可以根据类似的方法维护更多的映射关系。表四为不同OPUk类型时,G1和G2的示例。其中,对于相同的OPUk类型,OTN设备可以维护不同的映射关系的数量,从而控制每个映射关系的传输速率。例如,在表四中,当OTN帧的OPUk类型为OPU4时,OTN设备可以维护80、83或84个映射关系。

表四
在第十四个示例中,R为30。C为12。S1为240。S2=4×3808=128×119。此时,P帧包括30×12个PB。每个PB的大小为240字节。K1=R×C×S1=30×12×240=360×240。S2=128×119。K=M×360×240=N×128×119。M可以为119的整数倍。N可以为675的整数倍。P帧可以包括C1组共享PB。C1为小于或等于12的整数。每组共享PB可以包括30个共享PB,即R1等于30。OTN设备可以将共享PB划分为30个子PB。共享PB的大小为240字节。每个子PB的大小为8字节。
当OTN帧的OPUk类型为OPU0时,每个独享PB的传输速率G1=G÷C。其中,G约等于1.23895431Gbps。C等于12。此时,G1约等于103.246Mbps。独享PB的等待延迟T1=S1÷G1。T1约等于18.6微秒。每个共享PB中每个数据的传输速率G2=G÷(C×R)。R等于30。此时,G2约等于3.44Mbps。每个共享PB的等待延迟也为T1。
根据前面的示例可知,每组PB可以包括独享PB和共享PB。因此,OTN设备还可以在N个OTN帧中添加共享标识。共享标识用于标记每组PB中的C1组共享PB。类似地,OTN设备还可以在N个OTN帧中添加独享标识。独享标识用于标记每组PB中的C2组独享PB。共享标识和/或独享标识可以位于前述中的开销字段或开销PB内。
根据前面的示例可知,在P帧中,一组共享PB可以和X个开销PB组成一列PB。例如,在图8中,一组共享PB(16个共享PB)和1个开销PB组成一列PB。在实际应用中,独立PB所在的列也可以存在开销PB。在其中一种方式中,P帧中总的开销PB的数量等于X×C。例如,在图8中,P帧包括4组独立PB。每组独立PB对应一列PB。一列PB包括17个PB。17个PB包括16个独立PB和一个开销PB。此时,P帧包括7个开销PB。在实际应用中,M组PB可以包括P1组PB和P2组PB。P1和P2为大于0的整数。P1与P2的和等于M。P2组PB中的每组PB包括C2组独享PB和C1共享PB。P1组PB中的每组PB包括C3组开销PB、C4组独享PB和C1组共享PB。下面对此进行分别描述。
P2组PB中的每组PB包括C2组独享PB和C1组共享PB。C1与C2的和等于C。每组独享PB包括R个独享PB。下面以前述第十三个示例中的示例为例进行描述。图13为本申请实施例提供的P帧的第四个结构示意图。如图13所示,P帧1301包括24行×12列的PB(在图中,未展示所有的PB),总共包括24×12个PB。24×12个PB为OTN设备的传送周期。每个PB的大小为192字节。此时,K1=R×C×S1=24×12×192=288×192。S2=4×3808=128×119。K=M×288×192=N×128×119。此时,M可以为119的整数倍。N可以为432的整数倍。
P帧包括C1组共享PB。在图13中,C1等于6。每组共享PB可以对应图13中的一列共享PB。P帧的一列PB包括24个PB,即R等于24。P帧还包括C2组独享PB。在图13中,C2等于6。每组独享PB可以对应图13中的一列独享PB。一列独享PB包括24个独享PB。每个独享PB只传送一个业务的数据。每组独享PB用于承载属于同一业务的数据。每个共享PB传输多个业务的数据。每个共享PB大小为192字节。每个共享PB包括24个子 PB。每个子PB的大小为8字节。在实际应用中,OTN设备可以将24个子PB中的一个或多个子PB作为管理字段。例如,在图13中,OTN设备将每个共享PB的第一个子PB作为管理字段。下面以P帧1301中的第七列PB为例进行描述。第七列PB包括24个共享PB。OTN设备将24个共享PB中的每个PB划分为24个子PB。如图13所示,第一个共享PB的24个子PB分别为1-1、1-2、1-3、……、1-24。第二个共享PB的24个子PB分别为2-1、2-2、2-3、……、2-24。依次类推,第24个共享PB的24个子PB分别为24-1、24-2、24-3、……、24-24。
OTN设备将每个共享PB中的第一个子PB作为管理字段。例如,如图13所示,1-1、2-1、3-1、……、24-1为管理字段。OTN设备可以将每个共享PB中剩余的23个子PB分配给不同的23个业务。每个业务对应一个子PB。每个子PB的大小为8字节。每个业务对应一个OSU帧。其中,23个业务对应23个OSU帧。23个OSU帧分别为OSU帧1、OSU帧2、……、OSU帧23。当OSU帧的大小为192字节时,OTN设备需要24个共享PB来传输一个OSU帧。OSU帧1通过24个子PB来传输。24个子PB分别为1-2、2-2、3-2、……、24-2。OSU帧2通过24个子PB来传输。24个子PB分别为1-3、2-3、3-3、……、24-3。依次类推,OSU帧23通过24个子PB来传输。24个子PB分别为1-24、2-24、3-24、……、24-24。
在实际应用中,每个OSU帧可以携带有OSU帧的开销。例如,在图13中,子PB 1-2中可以承载OSU帧1的7字节的开销。子PB 1-3中可以承载OSU帧2的7字节的开销。依次类推,子PB 1-24中可以承载OSU帧23的7字节的开销。
P1组PB中的每组PB包括C3组开销PB、C4组独享PB和C1组共享PB。C3和C4为大于0的整数。C3、C4与C1的和等于C。图14为本申请实施例提供的包括开销PB列的P帧的结构示意图。如图14所示,P帧1401包括1组开销PB、6组独享PB和5组共享PB。此时,C3等于1,C4等于6,C1等于5。关于独享PB和共享PB的描述,可以参考图13的相关描述,在此不再赘述。1组开销PB包括24个开销PB,这24个开销PB的一个或多个开销PB可以用于传输多个业务数据的相关信息。具体地,相关信息可以是多个业务数据的MSI和/或其它开销。其它开销可以是业务流的管理信息、管控信息、或下一个开销PB列出现的位置信息等。M组PB的共享PB和独享PB用于传输多个业务数据。
应理解,图14只是本申请提供的C3组开销PB的一个示例。在实际应用中,本申请实施例提供的其它P帧中也可以包括C3组开销PB。例如,当R等于12,C等于10,C1等于6时,C3可以等于1,C4可以等于5。
在实际应用中,在M组PB中,开销PB列的数量可以占总的PB列的数量的0.1%至10%之间。其中,开销PB列的数量T等于P1和C3的乘积。总的PB列的数量等于M和C的乘积。在实际应用中,每一个开销PB列可以设置在P帧结构的固定位置和/或予以特殊标识。
在本申请提供的P帧的示例中,P帧包括R行C列的PB。应理解,在实际应用中,P帧存在其它的表现形式。例如,P帧包括R列C行的PB。此时,OTN设备可以按照从上到下、从左到右的顺序将P帧中的PB映射到OTN帧的净荷区。又例如,P帧包括1行R×C列的PB。OTN设备可以按照从左到右的顺序将P帧中的PB映射到OTN帧的净荷区。在前述图6中,OTN设备将整个共享PB分为12个子PB。12个子PB用于传输业务的数据。在实际应用中,OTN设备可以将共享PB分为管理字段和多个子PB。例如,图15为本申请实施例提供的共享PB的结构示意图。如图15所示,OTN设备将共享PB1501分为1个管理字段和11个子PB。1个管理字段用于OH表示。管理字段的大小可以和1个子PB的大小相同。例如,共享PB的大小为192字节时,1个子PB的大小为16字节。管理字段的大小为16字节。在 前述图6中,一个共享PB最多可以传输12个业务的数据,一组共享PB最多可以传输12个OSU帧。在图13中,一个共享PB最多可以传输11个业务的数据,一组共享PB最多可以传输11个OSU帧。管理字段可以用于操作、管理和维护(operation administration and maintenance,OAM)。OAM可以包括串联连接监视(tandem connection monitoring,TCM)。
根据前面的描述可知,OTN设备可以以N个OTN帧为目标周期进行周期性的数据帧处理。因此,对于每个目标周期,OTN设备可以对N个OTN帧排序。N个OTN帧中每个OTN帧中可以携带有顺序标识。顺序标识用于标识每个OTN帧在目标周期中的排序。顺序标识可以位于前述开销字段或开销PB内。
为了节约传输资源,顺序标识可以为多帧指示器(OPU multi-frame indicator,OMFI)。例如,OPU4的传输速率约等于84个OPU0。因此,在OPU4的OTN帧中,OMFI可以从1、2、3、…、84逐渐变化。假设本申请中实施例中的N等于3。OTN设备将OMFI的数值改为1至252逐渐变化。用于OMFI的数值和3相除的余数作为顺序标识。例如,当OMFI等于4时,表征当前OTN帧位于目标周期中的第一个OTN帧。又例如,当OMFI等于9时,表征当前OTN帧位于目标周期中的第三个OTN帧。用于OMFI的数值和3相除的整数加1作为原来的OMFI。例如,当OMFI等于4时,OMFI的数值和3相除的整数为1。又例如,当OMFI等于9时,OMFI的数值和3相除的整数为3。
OTN设备可以根据前述数据帧的处理方法处理数据,得到N个数据帧。OTN设备可以向接收设备发送N个数据帧。接收设备可以是另一OTN设备或客户端。接收设备可以通过数据帧的处理方法将N个数据帧映射为多个业务帧。图16为本申请实施例提供的数据帧的处理方法的第二个流程示意图。如图16所示,数据帧的处理方法包括以下步骤。
在步骤1601中,接收设备获取N个数据帧。N个数据帧的净荷区包括M组PB。M和N为大于0的整数。每组PB包括R×C个PB。R和C为大于1的整数。每个PB的大小为S1字节。N个OPU帧中的每一个净荷区被PB占用的大小为S2字节。M×R×C×S1=N×S2。每组PB包括C1组共享PB。C1为小于或等于C的正整数。每组共享PB包括R1个共享PB。R1为小于或等于R的正整数。R1个共享PB中的每个共享PB包括多个业务的多个数据。多个数据的数量大于1。多个数据的数量小于或等于R。关于N个数据帧和M组PB的描述,可以参考前述图3至图14中的描述。为了方便描述,后续将以图5至图7中P帧为例,对数据帧的处理方法进行描述。
在步骤1602中,接收设备将N个数据帧中的净荷区的M组PB映射到多个业务帧中;
多个业务帧可以为多个OSU帧。在图6中,PB和OSU帧的大小为192字节。在图6中,P帧包括6组独立PB。6组独立PB中的每组独立PB携带同一业务的数据。每组PB中的每个独立PB对应一个OSU帧。因此,当OTN设备将6组独立PB分配给6个独立的业务时,接收设备可以通过一个P帧中的独立PB得到6个业务的6组OSU帧。6个业务和6组OSU帧一一对应。每个业务对应12个OSU帧。P帧包括4组共享PB。4组共享PB中的每组共享PB携带12个业务的12个OSU帧。12个业务和12个OSU帧一一对应。每个OSU帧的数据被均匀的分配在12个共享PB中。一个共享PB包括12个子PB。每个子PB携带一个数据。一个数据的大小为16字节。因此,通过一组共享PB,接收设备可以得到12×12个数据。接收设备对12×12个数据进行组合,得到12个业务的12个OSU帧。通过4组共享PB,接收设备可以得到48个业务的48个OSU帧。
因此,通过一个P帧,接收设备可以得到6个独立业务的72个OSU帧。接收设备还可以得到48个共享业务的48个OSU帧。一个目标周期包括M个P帧。因此,通M个P帧, 接收设备可以得到6个独立业务的72×M个OSU帧。每个独立业务包括12×M个OSU帧。接收设备还可以得到48个共享业务的48×M个OSU帧。每个共享业务包括M个OSU帧。
前面对本申请实施例中提供的数据帧的处理方法进行描述,下面对本申请实施例中提供数据帧的处理设备进行描述。
图17为本申请实施例提供数据帧的处理设备的结构示意图。如图17所示,数据帧的处理设备1700包括处理器1701和收发器1702。数据帧的处理设备1700可以为前述OTN设备和接收设备。当数据帧的处理设备1700为OTN设备时,处理器1701用于执行图4中方法步骤,得到N个数据帧。具体地,处理设备1700可以通过处理器1701中的硬件的集成逻辑电路或软件形式的指令完成上述图4中的方法步骤。处理器1701用于向收发器1702发送N个数据帧。收发器1702用于向接收设备发送N个数据帧。当数据帧的处理设备1700接收设备时,收发器1702用于从OTN设备接收N个数据帧。处理器1701用于执行图16中方法步骤,得到多个业务帧。具体地,处理设备1700可以通过处理器1701中的硬件的集成逻辑电路或软件形式的指令完成上述图16中的方法步骤。
在其它实施例中,处理设备1700还可以包括存储器1703。存储器1703可以是非易失性存储器,比如硬盘(hard disk drive,HDD)等,还可以是易失性存储器(volatile memory),例如随机存取存储器(random-access memory,RAM)。存储器1703是能够用于携带或存储具有指令或数据结构形式的期望的程序代码并能够由计算机存取的任何其他介质,但不限于此。
存储器1703可以用于存储N个数据帧或多个业务帧。存储器1703也可以用于存储指令,以使得处理1701可以用于执行如上述图4或图15中提及的步骤。或者,存储1703也可以用于存储其他指令,以配置处理器1701的参数以实现对应的功能。
应理解,处理器1701和存储器1703在图2所述的网络设备硬件结构图中,可能位于支路板中;也可能位于支路和线路合一的单板中。或者,网络设备包括多个处理器1701和多个存储器1703。多个处理器1701位于支路板。多个存储器1703位于线路板。支路板和线路板配合完成前述的方法步骤。
应理解,图17所述的装置也可以用于执行前述提及的附图所示的实施例变形或者可选方案中所提及的方法步骤,在此不再赘述。
本申请实施例中处理器1701可以是通用处理器、数字信号处理器、专用集成电路、现场可编程门阵列或者其他可编程逻辑器件、分立门或晶体管逻辑器件、分立硬件组件,可以实现或者执行本申请实施例中的公开的各方法、步骤及逻辑框图。通用处理器可以是微处理器或者任何常规的处理器等。结合本申请实施例所公开的方法的步骤可以直接体现为硬件处理器执行完成,或者用处理器中的硬件及软件单元组合执行完成。
处理器1701用于实现上述方法所执行的程序代码可以存储在存储器1703中。存储器1703和处理器1701耦合。本申请实施例中的耦合是装置、单元或模块之间的间接耦合或通信连接,可以是电性,机械或其它的形式,用于装置、单元或模块之间的信息交互。处理器1701可能和存储器1703协同操作。
基于以上实施例,本申请实施例还提供了一种计算机可读存储介质。该存储介质中存储软件程序,该软件程序在被一个或多个处理器读取并执行时可实现上述任意一个或多个实施例提供的方法。所述计算机可读存储介质可以包括:U盘、移动硬盘、只读存储器、随机存取存储器、磁碟或者光盘等各种可以存储程序代码的介质。
基于以上实施例,本申请实施例还提供了一种芯片。该芯片包括处理器,用于实现上述 任意一个或多个实施例所涉及的功能,例如获取或处理上述方法中所涉及的业务帧或数据帧。可选地,所述芯片还包括存储器,用于处理器所执行必要的程序指令和数据。该芯片,可以由芯片构成,也可以包含芯片和其他分立器件。
本申请是参照根据本申请实施例的方法、设备(系统)和计算机程序产品的流程图和/或方框图来描述的。应理解可由计算机程序指令实现流程图和/或方框图中的每一流程和/或方框、以及流程图和/或方框图中的流程和/或方框的结合。可提供这些计算机程序指令到通用计算机、专用计算机、嵌入式处理机或其他可编程数据处理设备的处理器以产生一个机器,使得通过计算机或其他可编程数据处理设备的处理器执行的指令产生用于实现在流程图一个流程或多个流程和/或方框图一个方框或多个方框中指定的功能的装置。
这些计算机程序指令也可存储在能引导计算机或其他可编程数据处理设备以特定方式工作的计算机可读存储器中,使得存储在该计算机可读存储器中的指令产生包括指令装置的制造品,该指令装置实现在流程图一个流程或多个流程和/或方框图一个方框或多个方框中指定的功能。
这些计算机程序指令也可装载到计算机或其他可编程数据处理设备上,使得在计算机或其他可编程设备上执行一系列操作步骤以产生计算机实现的处理,从而在计算机或其他可编程设备上执行的指令提供用于实现在流程图一个流程或多个流程和/或方框图一个方框或多个方框中指定的功能的步骤。
显然,本领域的技术人员可以对本申请实施例进行各种改动和变型而不脱离本申请实施例的范围。这样,倘若本申请实施例的这些修改和变型属于本申请权利要求及其等同技术的范围之内,则本申请也意图包含这些改动和变型在内。

Claims (25)

  1. 一种数据帧的处理方法,其特征在于,包括:
    获取多个业务数据;
    将所述多个业务数据分别映射到多个业务帧中;
    将所述多个业务帧映射到N个数据帧的净荷区的M组净荷块PB,其中,M和N为大于0的整数,每组PB包括R×C个PB,R和C为大于1的整数,其中,每个PB的大小为S1字节,所述N个数据帧中的每一个净荷区被PB占用的大小为S2字节,M×R×C×S1=N×S2;每组PB包括C1组共享PB,C1为小于或等于所述C的正整数,每组共享PB包括R1个共享PB,R1为小于或等于所述R的正整数,所述R1个共享PB中的每个共享PB用于承载多个业务的数据,所述多个业务的数量小于或等于R。
  2. 根据权利要求1所述的方法,其特征在于,
    每组PB还包括C2组独享PB,所述C1与所述C2的和等于所述C,每组独享PB包括R个独享PB,所述R个独享PB属于同一业务的数据。
  3. 根据权利要求1所述的方法,其特征在于,所述M组PB包括P1组PB和P2组PB,P1和P2为大于0的整数,所述P1与所述P2的和等于所述M;
    所述P1组PB中的每组PB包括C3组开销PB、C4组独享PB和所述C1组共享PB,所述C4组独享PB中的每组独享PB包括R个独享PB,每组独享PB用于承载属于同一业务的数据,所述C3组开销PB中的每组开销PB包括R个开销PB,所述R个开销PB用于承载所述多个业务数据的相关信息,C3和C4为大于0的整数,所述C3、所述C4与所述C1的和等于所述C;
    所述P2组PB中的每组PB还包括C2组独享PB,所述C1与所述C2的和等于所述C,每组独享PB包括R个独享PB。
  4. 根据权利要求3所述的方法,其特征在于,T=(P1×C3)÷(M×C),其中,所述T的取值范围在0.001至0.1之间。
  5. 根据权利要求1至4中任意一项所述的方法,其特征在于,
    所述多个数据中的每个数据的传输速率小于11兆比特每秒Mbps;和/或,
    所述R个独享PB中的每个独享PB的传输速率大于100Mbps。
  6. 根据权利要求1至5中任意一项所述的方法,其特征在于,K=M×R×C×S1,K1=R×C×S1,所述K为K1和S2的最小公倍数。
  7. 根据权利要求1至6中任意一项所述的方法,其特征在于,所述N个数据帧包括共享标识,所述共享标识用于标记所述C1组共享PB。
  8. 根据权利要求1至7中任意一项所述的方法,其特征在于,所述N个数据帧为光传送网OTN帧,所述S2为4×3808,所述R1等于所述R,所述R为12,所述C为10。
  9. 根据权利要求8所述的方法,其特征在于,每个PB的大小S1为192,所述M为119,所述N为180。
  10. 根据权利要求1至7中任意一项所述的方法,其特征在于,所述R为17的整数倍,所述C为7。
  11. 根据权利要求10所述的方法,其特征在于,所述R为17,每个PB的大小S1为192,所述M为2,所述N为3。
  12. 根据权利要求10所述的方法,其特征在于,所述R为34,每个PB的大小S1为192, 所述M为1,所述N为3。
  13. 根据权利要求10所述的方法,其特征在于,所述R为68,每个PB的大小S1为192,所述M为1,所述N为6。
  14. 根据权利要求1至7中任意一项所述的方法,其特征在于,所述N个数据帧为OTN帧,每个净荷区的大小为4×3808字节,S2为15168,每个净荷区还包括64字节的开销字段。
  15. 根据权利要求14所述的方法,其特征在于,所述R1等于所述R,所述R为12,所述C为10。
  16. 根据权利要求15所述的方法,其特征在于,每个PB的大小S1为192,所述M为79,所述N为120。
  17. 根据权利要求14至16中任意一项所述的方法,其特征在于,所述开销字段包括所述多个业务的标识。
  18. 根据权利要求1至7中任意一项所述的方法,其特征在于,S=S1÷R,所述S等于8。
  19. 根据权利要求18所述的方法,其特征在于,所述S2为4×3808,所述S1等于192,所述R1等于所述R,所述R为24,所述C为12。
  20. 根据权利要求18所述的方法,其特征在于,所述S2为4×3808,所述S1等于192,所述R1等于所述R,所述R为24,所述C为10。
  21. 根据权利要求18所述的方法,其特征在于,所述S2为4×3808,所述S1等于240,所述R1等于所述R,所述R为30,所述C为12。
  22. 根据权利要求1至7中任意一项所述的方法,其特征在于,所述R1和所述R的差值为X,每组共享PB还包括X个开销PB,X为大于0的整数。
  23. 根据权利要求1至22中任意一项所述的方法,其特征在于,所述R1个共享PB中的每个共享PB还包括管理字段,所述管理字段用于操作、管理和维护OAM。
  24. 根据权利要求23所述的方法,其特征在于,在每个共享PB中,所述管理字段的大小和所述多个业务的数据中的每个数据的大小相同。
  25. 一种数据帧的处理装置,其特征在于,包括处理器和收发器,其中:
    所述处理器用于执行前述权利要求1至24中任意一项所述的方法,以得到N个数据帧;
    所述收发器用于发送所述N个数据帧。
PCT/CN2023/073972 2022-02-11 2023-01-31 数据帧的处理方法和装置 WO2023151483A1 (zh)

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
CN202210130541.0 2022-02-11
CN202210130541 2022-02-11
CN202210429110.4A CN116633482A (zh) 2022-02-11 2022-04-22 数据帧的处理方法和装置
CN202210429110.4 2022-04-22

Publications (1)

Publication Number Publication Date
WO2023151483A1 true WO2023151483A1 (zh) 2023-08-17

Family

ID=87563571

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2023/073972 WO2023151483A1 (zh) 2022-02-11 2023-01-31 数据帧的处理方法和装置

Country Status (1)

Country Link
WO (1) WO2023151483A1 (zh)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8982910B1 (en) * 2011-04-05 2015-03-17 Cisco Technology, Inc. Fixed generic mapping of client data to optical transport frame
CN109478941A (zh) * 2016-07-22 2019-03-15 华为技术有限公司 一种多路业务传送、接收方法及装置
CN111865887A (zh) * 2019-04-30 2020-10-30 华为技术有限公司 光传送网中的数据传输方法及装置
CN112311510A (zh) * 2019-07-26 2021-02-02 华为技术有限公司 业务数据传输的方法和通信装置
CN112584259A (zh) * 2019-09-30 2021-03-30 华为技术有限公司 一种光传送网中的业务处理的方法、装置和系统

Similar Documents

Publication Publication Date Title
US11234055B2 (en) Service data processing method and apparatus
US11233571B2 (en) Method for processing low-rate service data in optical transport network, apparatus, and system
US11967992B2 (en) Data transmission method and apparatus in optical transport network
US10608766B2 (en) Multi-service transport and receiving method and apparatus
CN101291179B (zh) 一种光传送网中客户信号传送方法及相关设备
US20200235905A1 (en) Data Transmission Method In Optical Network And Optical Network Device
US9497064B2 (en) Method and apparatus for transporting ultra-high-speed Ethernet service
CN102870434B (zh) 传送、接收客户信号的方法和装置
WO2016084893A1 (ja) 光伝送システム及びリソース最適化方法
CN101695144B (zh) 一种支持多业务接入和传输的方法及系统
CN1773898A (zh) 在光传输网络上传输客户层信号的方法及设备
WO2006015533A1 (fr) Procede et dispositif pour transport de signal
WO2020034964A1 (zh) 客户业务数据传送方法、装置、光传送网设备及存储介质
WO2021139604A1 (zh) 光信号传送方法和相关装置
EP1971050A1 (en) An optical transport node construction device and service dispatch method
US11750314B2 (en) Service data processing method and apparatus
US8718069B2 (en) Transmission apparatus and signal transmission method for mapping packets in frames of synchronous network
WO2023134508A1 (zh) 一种光传送网中的业务处理的方法、装置和系统
WO2023151483A1 (zh) 数据帧的处理方法和装置
CN102098595B (zh) 一种光传送网中客户信号传送方法及相关设备
US7558260B2 (en) Byte-timeslot-synchronous, dynamically switched multi-source-node data transport bus system
CN101350691B (zh) 一种业务汇聚和adm分插复用方法及设备
WO2020051851A1 (zh) 光传送网中的数据传输方法及装置
CN116633482A (zh) 数据帧的处理方法和装置
WO2024051586A1 (zh) 一种光传送网中的数据帧的处理方法、装置和系统

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 23752263

Country of ref document: EP

Kind code of ref document: A1